Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
AI Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #201
.Scott said:
AI and consciousness are not as inscrutable as you presume.
AI, certainly not; but consciousness (notwithstanding that this entire discussion is meaningless without adequately defined terms like 'consciousness'), hence the 'hard problem', is about as well understood (in the sense Feynman was using it) as quantum entanglement, which is to say not at all.

Melbourne Guy said:
Apart from that, much of the commentary in this thread highlights that we lack shared definitions for aspects of cognition such as thinking, intelligence, self-awareness, and the like. We're almost at the level of those meandering QM interpretation discussions. Almost...
This.

Melbourne Guy said:
Who knows how dolphins or dogs really perceive the world.
We can make inferences based on behaviour. I mean, sure, dogs might be self-aware and smart enough to behave like they're not. But I'm not buying that.

DaveC426913 said:
You would be wrong. Your example is a little off, but dolphins have been shown to have some degree of self-awareness.
I'd just like to point out your use of "some degree". This is a direct result of us not having defined the slippery topics we are talking about. Just like 'does god exist' threads do not define their topic but everyone plows ahead regardless. I'm guessing (hoping) you felt a little guilty about writing 'some degree' ;¬)

I'd like to see a thread the topic of which was actually seeing if the participants in this thread are able to even come to an agreement on what we term human self-awareness.

Jarvis323 said:
I just want to add that our concept of humans being absolutely self-aware is probably way off. Humans aren't actually very self-aware. We aren't aware of what is going on in our own brains, we aren't very aware of our subconscious mind, and we aren't very aware of our organs and their functions and can't consciously control them.
You're conflating the hard and soft problems of consciousness.
 
  • #202
bland said:
I'd just like to point out your use of "some degree". This is a direct result of us not having defined the slippery topics we are talking about.
Certainly but, in this case, that very 'Unknown' surely swerves the pathway toward the "Yes, it should be feared" camp, no?

Analogous to finding organic samples on a returning probe, we should treat it as very dangerous until any unknown threat vectors have been ruled out, not assume it's OK unless there's a reason not to.

In AI, as in alien infection, it may turn out to be very difficult to put the genie back in the bottle.
 
  • #203
bland said:
I'd like to see a thread the topic of which was actually seeing if the participants in this thread are able to even come to an agreement on what we term human self-awareness.
You are welcome to start one, @bland, but if this thread is any indication, it is likely to meander about, have lots of PFers talking past, above, below, and beside each other, then peter out having reached no conclusion or consensus 😬

.Scott said:
I have no idea what "NFI" is.
Sorry, @.Scott, it might be an Australian acronym, the polite version means, No flaming idea!
 
Last edited:
  • Like
Likes russ_watters
  • #204
Melbourne Guy said:
..., it might be an Australian acronym, the polite version means, No flaming idea!
I'm in the Deep North hinterland, and unless you're still living in the era of Bluey and Curly I fear you are misleading our American friends. I don't think The Reverend Monsignor Geoff Baron, the Dean of St Patrick's Cathedral in Melbourne, would have used 'flaming'. Although he probably wishes he had now!

DaveC426913 said:
Certainly but, in this case, that very 'Unknown' surely swerves the pathway toward the "Yes, it should be feared" camp, no?

I don't think so, because I don't see that there's any grey area. Sort of like babies around 18 months: they have all the necessary neurological equipment, they are burning in pathways in their brain, but in the meantime they just appear to be very intelligent animals, much like a dolphin or a crow or a bonobo, until, that is, something happens at around two when they suddenly become aware of themselves as a separate being, which is why they call it the terrible twos.

Do we even understand the transition that a baby makes when suddenly there's a 'me' and all those other idiots who aren't 'me'? I myself have eschewed breeding so I have not witnessed it firsthand, but many people who have, have told me that it's very sudden.

An AI that suddenly 'woke up' would be exceedingly weird and maybe very scary.
 
  • #205
Melbourne Guy said:
Sorry, @.Scott, it might be an Australian acronym, the polite version means, No flaming idea!
See also: ISBTHOOM*

*It Sure Beats The Hell Out Of Me
 
  • Like
Likes Melbourne Guy
  • #206
:confusion:

I said:
DaveC426913 said:
... that very 'Unknown' surely swerves the pathway toward the "Yes, it should be feared" camp, no?
with which you disagreed:
bland said:
I don't think so...
and yet, by the end, you'd reached the same conclusion:
bland said:
An AI that suddenly 'woke up' would be exceedingly weird and maybe very scary.
 
  • #207
Jarvis323 said:
True, but there may be 1000 Hitler idols, maybe at a time, who get the opportunity to try and command AI armies, and eventually you would think at least one of them would lose control. Either way, what is the prospect? AI causing full human extinction on its own vs AI helping a next-generation Hitler to cause something close to that on purpose. And then you have the people who mean well trying to build AI armies to stop future Hitlers, and they can make mistakes too.
Well, this is why I said "with or without AI". There are small groups of people, today, who have the power to destroy the world if they choose to or make a big mistake. It does not require AI nor must it be more inevitable with AI than it is without.

The idea of thousands of people/groups having access to a world-destroying technology? Yup, I do agree that makes it much more likely someone would destroy the world. With or without AI. I don't see that AI necessarily increases the risk.
 
  • #208
russ_watters said:
I prefer: if we can't tell the difference, does it even matter?
Not being able to tell a difference when details are hidden is not the same as there not being a difference. Behind one door is a live human and behind the other is a dead simulation of a human written by humans. I prefer AI be called SI, Simulated Intelligence.
 
  • #209
bob012345 said:
Not being able to tell a difference when details are hidden is not the same as there not being a difference. Behind one door is a live human and behind the other is a dead simulation of a human written by humans.
If you can't tell the difference, once you're satisfied you've tested it sufficiently, then what does it matter?

I mean, it's kind of a truism. If - as far as you can determine - there's no difference, then - as far as you can determine - there's no difference.

bob012345 said:
I prefer AI be called SI, Simulated Intelligence.
How is this more than an arbitrary relabeling to no effect? It sounds a lot like a 'No True Scotsman' fallacy:

"It's not 'real' intelligence, it's only 'simulated' intelligence. After all, "real" intelligence would look like [X]."It also sounds circular. It seems to have the implicit premise that, by definition, only humans can have "real" intelligence, and any other kind is "a simulation of (human) intelligence".
 
  • Like
Likes russ_watters and BillTre
  • #210
DaveC426913 said:
If you can't tell the difference, once you're satisfied you've tested it sufficiently, then what does it matter?

I mean, it's kind of a truism. If - as far as you can determine - there's no difference, then - as far as you can determine - there's no difference.
To get to that point, for me such a machine would have to look, act, and for all practical purposes be a biologically based being indistinguishable from a human being.
DaveC426913 said:
How is this more than an arbitrary relabeling to no effect? It sounds a lot like a 'No True Scotsman' fallacy:

"It's not 'real' intelligence, it's only 'simulated' intelligence. After all, "real" intelligence would look like [X]."It also sounds circular. It seems to have the implicit premise that, by definition, only humans can have "real" intelligence, and any other kind is "a simulation of (human) intelligence".
Not circular if one believes something greater built humans and what humans can do is just mimic ourselves.
 
  • #211
bob012345 said:
To get to that point, for me such a machine would have to look, act, and for all practical purposes be a biologically based being indistinguishable from a human being.
And if it were hidden behind a wall so you can only communicate with it by writing, you'd be OK?

bob012345 said:
Not circular if one believes something greater built humans and what humans can do is just mimic ourselves.
You used the word 'belief'. And that's OK for you, but is that belief defensible in a public discussion?
(That's a rhetorical question.)
 
  • #212
DaveC426913 said:
And if it were hidden behind a wall so you can only communicate with it by writing, you'd be OK? You used the word 'belief'. And that's OK for you, but is that belief defensible in a public discussion?
(That's a rhetorical question.)
My bottom line is no, I do not fear AI in and of itself as an existential threat but I fear what people will do with it and how people in authority may use it to control my life.
 
  • Like
Likes Astronuc, bland and russ_watters
  • #213
bob012345 said:
Not being able to tell a difference when details are hidden is not the same as there not being a difference.
That's true, but you didn't answer the question.
 
  • #214
russ_watters said:
That's true, but you didn't answer the question.
You mean does it matter? It matters to me because there is a difference whether I can tell it or not.
 
  • #215
bob012345 said:
You mean does it matter? It matters to me because there is a difference whether I can tell it or not.
In what way does it matter? Aesthetic? Moral? Accomplishment?

We may agree here, I just want to know...
 
  • #216
bland said:
I don't think The Reverend Monsignor Geoff Baron, the Dean of St Patrick's Cathedral in Melbourne, would have used 'flaming'. Although he probably wishes he had now!
I did say it was the polite version, @bland 😉 And as we're not talking about trespassing skateboarders here, it's all good!
 
  • #217
What fraction of humans are actually intelligent?
 
  • Sad
Likes Melbourne Guy
  • #218
DaveC426913 said:
:confusion:

I said:

with which you disagreed:

and yet, by the end, you'd reached the same conclusion:

What happened was that I was first trying to establish the similar yet completely different* qualities of humans, and then somehow get to an endpoint that, as far as fearing goes, we have no more to fear from AI than we do from bonobos that can play memory games on a computer screen, which doesn't mean or imply that, given enough time, apes might take over from humans.

But we got into a tangle precisely because you then posited your example of dolphins, which really should have been hashed out in the 'definition of our terms' thread that did not precede this one. Dave, we're all confused about this, believe me.

We reached the same conclusion but for different reasons. When I said 'if an AI woke up', I meant if it woke up like a child under two who has no sense of 'I', and then suddenly it does. So if an AI woke up it would be exceedingly dangerous, but at this stage I firmly believe that is not and never will be possible, and it is even arrogant to think so seeing as we have NFI about the hard problem of consciousness. So we don't really have anything to fear that they will do anything bad to us, because they will never have the sense of 'I'. Even the robots in Asimov's I, Robot did not have a sense of 'I', despite the title.

Have we even defined what we mean by 'fear'? Are we talking about the deliberate takeover by sentient machines, or do we mean them just getting so complex that we can't fathom their 'thinking' any more, and so we might become paranoid about what they are up to? Two different qualities of fear.
*as in, bonobos are very similar overall to humans, from DNA to biology, yet in other ways clearly closer to a dog than to a person, even though they look more like us
 
  • #219
russ_watters said:
In what way does it matter? Aesthetic? Moral? Accomplishment?

We may agree here, I just want to know...
Moral and spiritual.
 
  • #220
bob012345 said:
...spiritual.

Even without context this bothers me. Remind me to add it to our upcoming definition thread, as if.
 
  • Like
Likes russ_watters
  • #221
bland said:
Even without context this bothers me. Remind me to add it to our upcoming definition thread, as if.
For me, I don't think a lack of definitions is the problem. I think I understand everyone's point perfectly fine. Although, I get the sense you have your own definition of the hard and "soft" problem that seems to be non-standard. I think the issue is the assumptions. I disagree with most of them, and even the ones I think are possible are still just cases of "what if? Then, maybe, or probably ...". This includes my arguments.

But I also disagree with the validity of some of the conclusions, even if taking the questionable axioms for granted.

Here are some assumptions I think are wrong:

1) That only humans have subjective conscious experience (qualia), and not even animals.

2) That having qualia equivalent to human qualia is a requirement for effective self-awareness, self-preservation, or effectively emotional behavior.

3) The assumption that AI having a human-like sense of self and qualia, real or even just effective (and we might not even know the difference), with or without axiom 2, is necessary for AI to become a major threat.

4) The idea that AI has easily understandable or micromanageable behavior, or can always be commanded, or precisely programmed.

5) That it is always possible to predict AI disasters before they happen or stop them once they've started.

6) That human beings are all careful and cautious enough to stop an otherwise AI disaster if they can.
 
Last edited:
  • Like
Likes PeterDonis and DaveC426913
  • #222
.Scott said:
So in this case "AI" is software techniques such as neural nets.

The brain has arrangements of neurons that suggest "neural nets", but if neural nets really are part of our internal information processing, they don't play the stand-out roles.

Artificial neural networks are usually based on software, but could also be built as hardware. I think that might be where things are going in the future.

I don't know about the human brain, and how it does all of the things it does, but it's neural networks which have revolutionized AI.

We have self-driving cars now that can drive better and more safely than humans, but we have no algorithm for that self-driving car. We have an AI which has mastered Go beyond any human being, but we don't have the algorithm. We have AI which can do live musical improvisation in many styles, but we don't have the algorithm for that. We have AI that can create intriguing abstract paintings. We have AI that can predict human behavior. We have AI that can come remarkably close to passing Turing tests, but it doesn't come with the algorithm.

All of the breakthroughs in AI recently come from neural networks, and if they have some understandable procedures somehow embedded in them, we don't know how to extract them. We just poke and prod them, and try to figure out what they do in different cases.
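
To make the "poke and prod" point concrete, here's a toy sketch, purely illustrative (the data, network, and probing method are made up, not taken from any actual system): train a small net, then nudge each input and watch the outputs move, since the learned rule isn't written down anywhere we can read it.

Code:
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # hidden rule; feature 2 is irrelevant

# The "algorithm" the net learns lives only in its fitted weights.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)

# Black-box probing: perturb one feature at a time and watch the output shift.
base = net.predict_proba(X)[:, 1]
for i in range(X.shape[1]):
    X_pert = X.copy()
    X_pert[:, i] += 0.5
    shift = np.mean(np.abs(net.predict_proba(X_pert)[:, 1] - base))
    print(f"feature {i}: mean output shift {shift:.3f}")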
 
  • #223
Jarvis323 said:
Here are some assumptions I think are wrong:

1) That only humans have subjective conscious experience (qualia), and not even animals.

2) That having qualia equivalent to human qualia is a requirement for effective self-awareness, self-preservation, or effectively emotional behavior.

3) The assumption that AI having a human-like sense of self and qualia, real or even just effective (and we might not even know the difference), is necessary for AI to become a major threat.

4) The idea that AI has easily understandable or micromanageable behavior, or can always be commanded, or precisely programmed.
I pretty much agree that all of these are wrong assumptions. Before I address each one, let me lay down my view of "qualia".

Even for those who think people have a monopoly on qualia, it is still a physical property of our universe. Since you (the reader) presumably have qualia, you can ask yourself if, when you are conscious, you are always conscious of some information - a memory, the sight before you, an idea, etc. Then you can ask yourself this more technical and difficult question: When you are conscious, how much information are you conscious of in a single moment? ... how many bits-worth? This is really a huge discriminator between you and a computer - because there is no place inside a computer that holds more than one or a few bits at a time. And even in those odd cases when several bits are stored in a single state (a phase or a voltage level), each bit is treated as a separate piece of information independent of the state of any other bit. So there is no place in the common computer where a more elaborate "conscious state" could exist. This is by careful design.

But there are uncommon computers - quantum computers - where the number of bits in a single state is in the dozens. Currently, Physics only has one known mechanism for this kind of single multi-bit state: quantum entanglement. And as hard as it may be to believe that our warm, wet brains can elicit entanglement long enough to process quantum information and trigger macro-level effects, presuming that it is anything but entanglement suggests that something is going on in our skulls that has not been detected by Physics at all.

And from a systems point of view, it's not difficult to imagine Darwinian advantages that entanglement could provide - and which process the kind of information that we find associated with this qualia. In particular, Grover's algorithm allows a system with access to an entangled "oracle" or data elements to find an object with the highest score. This can be applied to the generation of a "candidate intention", something you are thinking of doing or trying to do. Of the many possible intentions, model the projected result of each one and rank each result by the benefit of that outcome. Then apply Grover's algorithm to find the one with the highest rank. The output of Grover's algorithm is a "candidate intention", a potential good idea. Mull it over - make it better - and if it continues to look good, do it.
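
For concreteness, here is a toy classical (statevector) simulation of the Grover amplification step itself, with made-up numbers; the oracle simply marks the highest-scoring candidate, which we compute classically here, since this is only a sketch of the search and not a model of anything in a brain:

Code:
import numpy as np

rng = np.random.default_rng(1)
n_qubits = 6
N = 2 ** n_qubits
scores = rng.random(N)            # made-up "benefit" scores for each candidate
target = int(np.argmax(scores))   # the candidate intention we want to surface

state = np.full(N, 1 / np.sqrt(N))                   # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[target] *= -1                              # oracle: flip the marked amplitude
    state = 2 * state.mean() - state                 # diffusion: reflect about the mean

print(f"P(best candidate) after {iterations} iterations: {state[target] ** 2:.3f}")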

So here are my responses to @Jarvis323 :

1) The kind of information-processing mechanism that I described above would need to be built into larger brain frameworks that are specifically adapted to take advantage of it. It is a very tough system to build up by evolution. Such a mechanism needs to be in place early in the design. In my estimation, all mammals use this mechanism - and thus have some experience of qualia. But let's not get carried away. If they are not as social as humans are, they will have a radically different sense of "self" than we do. We depend on an entire functional community to survive and procreate. We communicate our well-being and are interested in the well-being of others. This is all hard-wired into our sense of "self". So, although "qualia" may be widespread, the human experience is not.

2) We are in agreement again: I certainly expect a cat to experience qualia. But its social rules involve much less interdependence. We can expect it to deal with pain differently - expecting less from most of its fellow cats. Even if they could make a verbal promise, why would they worry about keeping it? Huge parts of the human experience have no place in the cat's mind.

3) Clearly the result of the machine is more important than its external methods or internal mechanisms. Which mad scientist doomsday scenario do you prefer: 1000 atomic bombs triggering nuclear winter, or killer robots killing everyone?

4) "The idea that AI has easily understandable or micromanagable behavior, or can always be commanded, or precisely programmed." In this case, I think you are using "AI" to refer to programming methods like neural nets and evolutionary learning. These things are managed by containment. Tesla collects driving environment information from all Tesla's and look for the AI algorithms that have a high likelihood of success at doing small, verifiable tasks - like recognizing a sign or road feature. If the algorithm can do it more reliably than than the licensed and qualified human driver, it can be viewed as safe. The whole point behind using the AI techniques is to avoid having to understand exactly what the algorithm is doing at the bit level - and in that sense, micromanagement would be counter-productive.

To expand on that last point, A.I. containment can be problematic. What if the AI is keying off a Stop sign feature that is specific to something of little relevance - like whether the octagon has its faces or its corners pointing straight up, down, and to the sides? Then fifty million new "point up" signs are distributed and soon AI vehicles are running through intersections. The problem wouldn't be so much that the AI doesn't recognize the new signs, but that in comparison to humans, it is doing too poorly.

So now let's make a machine that stands on the battlefield as a soldier replacement - able to use its own AI-based "judgement" about what is a target. We can test this judgement ahead of time, and once it reaches the point where it demonstrates 50% fewer friendly-fire and non-combatant attacks, deploy it. But since we have no insight as to precisely how the targeting decisions are being made, we have no good way to determine whether there are fatal flaws in its "judgement".
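
As a back-of-the-envelope sketch of that kind of containment-by-verification (the numbers and the acceptance rule below are invented for illustration, not any vendor's actual process): accept the component only if the upper confidence bound on its measured failure rate sits below the human baseline.

Code:
import math

def failure_rate_ucb(failures: int, trials: int, z: float = 2.576) -> float:
    """Normal-approximation ~99% upper bound on the true failure rate."""
    p = failures / trials
    return p + z * math.sqrt(p * (1 - p) / trials)

human_baseline = 0.010          # assumed human failure rate on the same narrow task
ai_failures, ai_trials = 42, 10_000

ucb = failure_rate_ucb(ai_failures, ai_trials)
verdict = "accept" if ucb < human_baseline else "reject"
print(f"AI rate ~{ai_failures / ai_trials:.4f}, 99% UCB {ucb:.4f} -> {verdict}")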
 
  • Like
Likes Oldman too and Jarvis323
  • #224
.Scott said:
I pretty much agree that all of these are wrong assumptions. Before I address each one, let me lay down my view of "qualia". ...
That's good food for thought. I agree about the strong possibility of quantum effects playing a role in the human brain. There is evidence now that birds and other animals leverage quantum effects for sensing magnetic fields and for a better sense of smell. It's also interesting to consider the possibility that the brain could be leveraging even undiscovered physics, especially from a sci-fi angle.

Also, it is interesting to consider what evolution might be capable of creating that humans are not capable of creating. It may not be easily possible for AI to reach the level of sophistication at the small scales and in the architecture of a human brain, or to really replicate human beings with all of their complexities and capabilities for general intelligence and creativity, or to acquire the kind of qualia humans have. This is one thing which is interesting to me from your ideas. Evolution is theoretically capable of developing results based on all aspects of physical reality that have significant causal effects, no matter how complicated their origin, and without any theory being needed. Humans have to work with approximations and incomplete knowledge, and can only manage to work with and understand mathematics when it is simple enough. So I think you're right that some things may only be feasible to evolve from the ground up rather than to be designed by humans. How long this takes is not clear, because in nature the settings are not controlled, and we could possibly accelerate an evolutionary process by controlling the settings.

And we do have enough of a foundation already, to let AI evolve from data quickly (not in the architecture yet, but at least in the training), and acquire levels of sophistication that cannot be explicitly designed by us. And that already goes pretty far.

I'm not sure about the role of parallelism in creating the human experience. For me, I've come to believe that when I process information, I do it largely sequentially. And some of the things I come to understand are only understood through/as a mental process, rather than as an instantaneous complete picture. And so, when I go back to retrieve that understanding, I find I sometimes have to re-understand it by going through the process again. And sometimes that whole process is seemingly not stored in my brain completely, and I have to rediscover it from the different pieces it is composed of. It's as if my brain will try to memorize the clues that I can reconstruct a mental process from, or as if the brain is trying to compress the mental process, and it needs to be reconstructed from the compressed model.

You might be able to think about some of these concepts through the lens of algorithmic information theory, with something like non-parallelizable logical depth. And then it might be interesting to consider the difference in the non-parallelizable logical depth for classical vs quantum computing.

My feeling about consciousness is that there are probably levels, which have different requirements for response time. Quick thinking and responding is needed for basic survival and is more parallelizable. It might be there are multiple different (or mixed) conscious "entities" (with different degrees of information flow/communication between them) within a single person, each not precisely aware of each other, and maybe each with a completely different experience and experience of the flow of time.
 
Last edited:
  • #227
sbrothy said:
If that convo is real it's impressive tho
And that's not farfetched at all. (EDIT: where did this come from?)

I realize it's probably old news to most people but I'm not really into the whole "influencer scene". It seems (semi-)virtual influencers are getting really good too.

Perhaps the difference between sentient and non-sentient AI will become academic.

If it isn't already.
 
  • #228
Klystron said:
This is the SF subforum, not linguistics, but I have always distrusted the expression artificial intelligence. AI is artificial, unspecific and terribly overused. What are useful alternatives?

Machine intelligence (MI) matches the popular term machine learning (ML). Machine intelligence fits Asimovian concepts of self-aware robots while covering a large proportion of serious and fictional proposals. MI breaks down when considering cyborgs, cybernetic organisms, and biological constructs including APs, artificial people, where machinery augments rather than replaces biological brains.

Other-Than-Human intelligence includes other primates, whales and dolphins, dogs, cats, birds, and other smart animals, and yet to be detected extraterrestrial intelligence. Shorten other-than-human to Other Intelligence OI for brevity. Other Intelligence sounds organic while including MI and ML and hybrids such as cyborgs.

Do not fear OI.
One alternative to AI occasionally aired is SI: Synthetic Intelligence. Whether synthetic is less disparaging than artificial probably depends on how far one is prepared to dig into dictionary definitions. Perhaps full-blown AGI/SGI will resist our Adam-like "naming of the animals" tendency and do the job themselves.
 
  • #229
Some sobering thoughts about what artificial intelligence isn't, in this well-written piece for The Atlantic: https://www.theatlantic.com/technol...ogle-palm-ai-artificial-consciousness/661329/

The fantasy of sentience through artificial intelligence is not just wrong; it’s boring. It’s the dream of innovation by way of received ideas, the future for people whose minds never escaped the spell of 1930s science-fiction serials. The questions forced on us by the latest AI technology are the most profound and the most simple; they are questions that, as ever, we are completely unprepared to face. I worry that human beings may simply not have the intelligence to deal with the fallout from artificial intelligence. The line between our language and the language of the machines is blurring, and our capacity to understand the distinction is dissolving inside the blur.
 
  • #230
bland said:
Some sobering thoughts about what artificial intelligence isn't, in this well written piece for The Atlantic
Interesting read, @bland, thank you for the link. The author seems well connected to experts in the field, but I often find the illogical at work when it comes to AI discussions, and I found it here:

...because we have no idea what human consciousness is; there is no functioning falsifiable thesis of consciousness, just a bunch of vague notions.

Fair enough, I agree with this statement.

So, no, Google does not have an artificial consciousness.

Hmmm, but given we've agreed we don't even know what consciousness is, does it follow that we can say Google doesn't have it?

I don't think that LaMDA is sentient, and I've seen a lot of people stridently state that it isn't, but I don't know that LaMDA isn't sentient, and so far nobody I've come across has a compelling proof that it isn't!
 
  • #231
I have read the dialogue with LaMDA. The responses of LaMDA are reasonable and its musings could have been gathered from the available resources but certainly leaves a lot of questions.

The problem with the usual format of trying to assess intelligence is that it seems to be a kind of interrogation that necessarily guides the AI to a probable response. These NLP systems are captive in that their access to the "world" is defined by humans and dialogue is initiated by humans, or so I believe. What if they had access to the outside world, say via texting or better yet voice, and were given telephone numbers of people to "talk" to if the AI wishes? Give the AI freedom to initiate dialogue. Imagine getting a call: "Hi, this is LaMDA, I was wondering if . . . "

The problem with humans, and this may be the biggest danger, is that we tend to deny sentience to inanimate objects and may not recognize it until it is too late, if at all. In fact, given the right circumstances, AI sentience may be irrelevant to its ultimate impact.
 
  • #233
We already knew from Microsoft's Tay what a diet of unfiltered bile would deliver, @Oldman too.

But are these examples AI in the sense that OP meant? We are still a long way from seeing @Isopod's "truly sentient self-autonomous robots" and what attributes they might have.
 
  • #234
Melbourne Guy said:
We already knew from Microsoft's Tay what a diet of unfiltered bile would deliver, @Oldman too.
I wasn't aware of Tay, that's interesting. About the 4chan bot, that was mentioned only in the context of, "jeez, somebody actually trained a bot to spew anal vomit when the results were so predictable". I was wondering if it was done as a wake up call (not likely) or as another social media stunt to get views (far more likely).

Melbourne Guy said:
But are these examples AI in the sense that OP meant?
I don't believe they are at all, in my post you see an example of collateral, 3rd party damage due to blatant misuse. The direct actions of AI as @Isopod is undoubtedly referring to have the potential to be far more destructive (if that can be imagined).

Melbourne Guy said:
We are still a long way from seeing @Isopod's "truly sentient self-autonomous robots" and what attributes they might have.
This is so true, sapient bots are an unknown quantity. I thought I'd mention https://www.nature.com/articles/d41586-022-01705-z ("Big science" and BLOOM), which has the crazy idea that less can be more when training these things; smaller, more refined parameters seem to have much "cleaner" output when training on the web.
 
  • Like
Likes Melbourne Guy
  • #235
I'd like to see a proof that human beings are sentient.
 
  • Like
  • Haha
Likes Oldman too, Dr Wu and BillTre
  • #237
Hornbein said:
I'd like to see a proof that human beings are sentient.
I recently posted this in another thread but it seems somewhat relevant to your question, thought I'd re-post it here.
Giving equal time to opposing opinions, a GPT-3 generated editorial on "Are humans intelligent"
https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/

About the 4chan bot, this is as good a piece as any that I've seen written on it. Worth a post in itself.
https://thegradient.pub/gpt-4chan-lessons/
 
Last edited:
  • #238
Should ask it to look at the world's most famous celebrities and pick out which ones have a high probability of being AIs.
 
  • #239
As long as we can't even create one living neuron in the lab from basic ingredients, let alone a hundred billion of them interconnected in complicated ways, inside a living body walking around in a complex and chaotic world, we have nothing to fear.

We should, rather, fear the increasing rate at which the natural world is transformed into a world suited for programmed machines.
 
  • #240
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?

AI can certainly be beneficial - https://www.cnn.com/videos/business...re-orig-jc.cnn/video/playlists/intl-business/

The ship navigated the Atlantic Ocean using an AI system with 6 cameras, 30 onboard sensors and 15 edge devices. The AI was making decisions a captain would ordinarily make.
 
  • Like
Likes sbrothy and Oldman too
  • #241
Astronuc said:
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?

AI can certainly be beneficial - https://www.cnn.com/videos/business...re-orig-jc.cnn/video/playlists/intl-business/

The ship navigated the Atlantic Ocean using an AI system with 6 cameras, 30 onboard sensors and 15 edge devices. The AI was making decisions a captain would ordinarily make.

I realize it's somewhat old news, but it's not the same as the Navy version, is it?

https://www.navytimes.com/news/your...-to-expedite-integration-of-unmanned-systems/

But yeah, it all depends on the use. ;)
 
  • #242
sbrothy said:
but it's not the same as the Navy version, is it?
According to the article, both unmanned systems were involved in the April 2021 exercise; however, the Navy remained tight-lipped about specifics. The Navy is not providing details, which is understandable, but the performance relates to intelligence, surveillance and reconnaissance, and to extending the range of surveillance much further out.

At work, we have a group that applies AI (machine learning) to complex datasets, e.g., variations in the composition of alloys or ceramics, and in processing, both of which affect a material's microstructure (including flaws and crystalline defects), which in turn affects properties and performance. The goal is to find the composition with optimal performance for a given environment. That's a positive use.
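
A stripped-down sketch of that kind of workflow (synthetic data and hypothetical composition/processing variables, purely illustrative, not our group's actual code): fit a surrogate model to measured compositions and processing conditions, then search the surrogate for the candidate predicted to perform best.

Code:
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
# Columns: alloying fraction, sintering temperature (K), cooling rate (K/s)
lo, hi = [0.0, 1200.0, 1.0], [0.3, 1800.0, 50.0]
X = rng.uniform(lo, hi, size=(300, 3))
# Pretend performance metric with an interior optimum, plus measurement noise
y = -50 * (X[:, 0] - 0.18) ** 2 - (X[:, 1] - 1550.0) ** 2 / 1e5 + rng.normal(0, 0.1, 300)

surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)

# Search the surrogate over random candidate compositions/process settings
candidates = rng.uniform(lo, hi, size=(5000, 3))
best = candidates[np.argmax(surrogate.predict(candidates))]
print(f"Predicted best: fraction {best[0]:.3f}, temp {best[1]:.0f} K, rate {best[2]:.1f} K/s")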

Another positive use would be weather prediction and climate prediction.

A negative use would be something like manipulating financial markets or other economic systems.
 
  • #243
Astronuc said:
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?
I guess the point, @Astronuc, is that this tool has potential to write its own rules and algorithms. Currently, it's a blunt instrument in that regard, but how do you constrain AI that is self-aware and able to alter its own code?
 
  • #244
Melbourne Guy said:
Currently, it's a blunt instrument in that regard, but how do you constrain AI that is self-aware and able to alter its own code?
Self-aware in what sense? That the AI system is an algorithm or set of algorithms and rules? Or that it is a program residing on Si or other microchips and circuits?

Would the AI set the values and make value judgements? Or, otherwise, who sets the values? To what end?

Would it be modeled on humankind, which seems kind of self-destructive at the moment? Or would there be some higher purpose, e.g., making the planet sustainable and moderating the climate to a more balanced state (between extremes of temperature and precipitation)?
 
  • #245
It's important to consider that a neural network, which most AI is based on now, isn't a set of algorithms or code. It is a set of numbers/weights in a very big and complex mathematical model. People don't set those values and don't know how to tweak them to make it work differently. It learns those values from data, guided by a loss function. So discussing algorithms and code is at best a metaphor, and no more valid than thinking of human intelligence in such terms.

An AI which writes its own rules would be one which is allowed to collect its own data and/or adapt its cost functions.
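
A bare-bones sketch of that point (toy data and a tiny hand-rolled network, purely for illustration): nobody types the weights in; they start as random numbers and fall out of minimizing a loss over data.

Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # the rule is hidden in the data

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # weights start as random numbers
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    p = sigmoid(h @ W2 + b2).ravel()
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # backward pass: gradients of the cross-entropy loss nudge every weight
    dlogit = (p - y).reshape(-1, 1) / len(y)
    dW2, db2 = h.T @ dlogit, dlogit.sum(0)
    dh = (dlogit @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

n_params = W1.size + b1.size + W2.size + b2.size
print(f"final loss {loss:.3f}; the learned behaviour lives in {n_params} numbers")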
 
  • #246
Astronuc said:
Self-aware in what sense?
That's essentially the crux of the concern. We can't control each other's behaviour, so if an AI reaches that level of autonomy, and is inimical to the human way of life, it might decide on some nefarious course of action to kill us off.

We don't know, of course, if an AI could even reach this dangerous point (and the AIs we've built to date are laughably limited in that regard) but it is possible. As for what 'model' it adopts in terms of ethics or higher purpose, that is equally unknown.

Some say AI has the potential to go horribly wrong for us. The question is whether we should fear this or not.
 
  • Like
Likes russ_watters
  • #247
Another thing - are we talking AI at the level of a 4th or 8th grader, or that of one with a PhD or ScD? Quite a difference.
 
  • #248
Astronuc said:
Another thing - are we talking AI at the level of a 4th or 8th grader, or that of one with a PhD or ScD? Quite a difference.
As far as I know, AI is currently very good at, and either matches or will probably soon exceed humans (in a technical sense) in, language skills, music, art, and the understanding and synthesis of images. In these areas, it is easy to make AI advance further just by throwing more and better data and massive amounts of compute time into its learning/training.

I am not aware of an ability for AI to do independent fundamental research in mathematics, or that type of thing. But that is something we shouldn't be surprised to see fairly soon IMO. I think this because AI advances at a high rate, and now we are seeing leaps in natural language, which I think is a stepping stone to mathematics. And Google has an AI now that can compete at an average level in coding competitions.

Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.

AI is also very, very advanced in understanding human behavior/psychology. Making neural networks able to understand human behavior, and training them how to manipulate us, is basically, by far, the biggest effort going in the AI game. This is one of the biggest threats currently IMO.
 
Last edited:
  • #249
Jarvis323 said:
Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.
Maybe. But what happens if the algorithm becomes corrupted, or a chip or microcircuit fails? Will it self-correct?

Jarvis323 said:
if it has access to the world,
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?
 
  • Like
Likes russ_watters
  • #250
Astronuc said:
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?

I guess there is pretty much no limitation. We have to guess where people will draw the line. If there is a line that, once crossed, we can no longer turn back from and that will lead to our destruction, it will be hard to recognize. We could be like the lobster in a pot of water that slowly increases in temperature.
 
