Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #211
bob012345 said:
To get to that point, for me such a machine would have to look, act, and for all practical purposes be a biologically based being indistinguishable from a human being.
And if it were hidden behind a wall so you can only communicate with it by writing, you'd be OK?

bob012345 said:
Not circular if one believes something greater built humans, and that what humans can do is just mimic ourselves.
You used the word 'belief'. And that's OK for you, but is that belief defensible in a public discussion?
(That's a rhetorical question.)
 
  • #212
DaveC426913 said:
And if it were hidden behind a wall so you can only communicate with it by writing, you'd be OK? You used the word 'belief'. And that's OK for you, but is that belief defensible in a public discussion?
(That's a rhetorical question.)
My bottom line is no, I do not fear AI in and of itself as an existential threat, but I fear what people will do with it and how people in authority may use it to control my life.
 
  • Like
Likes Astronuc, bland and russ_watters
  • #213
bob012345 said:
Not being able to tell a difference when details are hidden is not the same as there not being a difference.
That's true, but you didn't answer the question.
 
  • #214
russ_watters said:
That's true, but you didn't answer the question.
You mean does it matter? It matters to me because there is a difference whether I can tell it or not.
 
  • #215
bob012345 said:
You mean does it matter? It matters to me because there is a difference whether I can tell it or not.
In what way does it matter? Aesthetic? Moral? Accomplishment?

We may agree here, I just want to know...
 
  • Like
Likes Oldman too
  • #216
bland said:
I don't think The Reverend Monsignor Geoff Baron, the Dean of St Patrick's Cathedral in Melbourne, would have used 'flaming'. Although he probably wishes he did now!
I did say it was the polite version, @bland 😉 And as we're not talking about trespassing skateboarders here, it's all good!
 
  • Like
Likes bland
  • #217
What fraction of humans are actually intelligent?
 
  • Sad
Likes Melbourne Guy
  • #218
DaveC426913 said:
:confusion:

I said:

with which you disagreed:

and yet, by the end, you'd reached the same conclusion:

What happened was that I was first trying to establish the similar yet completely different* qualities of humans, and then somehow get to an endpoint that, as far as fearing goes, we have no more to fear from AI than we do from bonobos that can play memory games on a computer screen, which doesn't mean or imply that, given enough time, apes might take over from humans.

But we got into a tangle precisely because you then posited your example of dolphins, which really should have been hashed out in the 'definition of our terms' thread that did not precede this one. Dave, we're all confused about this, believe me.

We reached the same conclusion but for different reasons. When I said 'if an AI woke up', I meant if it woke up like a child under two who has no sense of 'I', and then suddenly it does. So if an AI woke up it would be exceedingly dangerous, but at this stage I firmly believe that is not, and never will be, possible, and it's even arrogant to think so seeing as we have NFI about the hard problem of consciousness. So we don't really have anything to fear that they will do anything bad to us, because they will never have the sense of "I". Even the robots in Asimov's I, Robot did not have a sense of 'I', despite the title.

Have we even defined what we mean by 'fear'? Are we talking about the deliberate takeover by sentient machines, or do we mean them just getting so complex that we can't fathom their 'thinking' any more, so that we might become paranoid about what they are up to? Two different qualities of fear.
*as in, bonobos are very similar overall to humans, from DNA to biology, yet in other ways clearly closer to a dog than to a person, even though they look more like us
 
  • #219
russ_watters said:
In what way does it matter? Aesthetic? Moral? Accomplishment?

We may agree here, I just want to know...
Moral and spiritual.
 
  • #220
bob012345 said:
...spiritual.

Even without context this bothers me. Remind me to add it to our upcoming definition thread, as if.
 
  • Like
Likes russ_watters
  • #221
bland said:
Even without context this bothers me. Remind me to add it to our upcoming definition thread, as if.
For me, I don't think a lack of definitions is the problem. I think I understand everyone's point perfectly fine. Although I get the sense you have your own definition of the hard and "soft" problem that seems to be non-standard. I think the issue is the assumptions. I disagree with most of them, and even the ones I think are possible are still just cases of "what if? Then, maybe, or probably ...". This includes my arguments.

But I also disagree with the validity of some of the conclusions, even when taking the questionable axioms for granted.

Here are some assumptions I think are wrong:

1) That only humans have subjective conscious experience (qualia), and not even animals.

2) That having qualia equivalent to human qualia is a requirement for effective self awareness, self preservation, or to have effectively emotional behavior.

3) The assumption that AI having a human-like sense of self and qualia, real or even just effective (and we might not even know the difference), with or without axiom 2, is necessary for AI to become a major threat.

4) The idea that AI has easily understandable or micromanageable behavior, or can always be commanded, or precisely programmed.

5) That it is always possible to predict AI disasters before they happen or stop them once they've started.

6) That human beings are all careful and cautious enough to stop an AI disaster that would otherwise happen, if they can.
 
  • Like
Likes PeterDonis and DaveC426913
  • #222
.Scott said:
So in this case "AI" is software techniques such as neural nets.

The brain has arrangements of neurons that suggest "neural nets", but if neural nets really are part of our internal information processing, they don't play the stand-out roles.

Artificial neural networks are usually implemented in software, but they could also be built as hardware. I think that might be where things are going in the future.

I don't know about the human brain, and how it does all of the things it does, but it's neural networks which have revolutionized AI.

We have self-driving cars now that can drive better and more safely than humans, but we have no algorithm for that self-driving car. We have an AI which has mastered Go beyond any human being, but we don't have the algorithm. We have AI which can do live musical improvisation in many styles, but we don't have the algorithm for that. We have AI that can create intriguing abstract paintings. We have AI that can predict human behavior. We have AI that can come remarkably close to passing Turing tests, but it doesn't come with the algorithm.

All of the breakthroughs in AI recently come from neural networks, and if they have some understandable procedures somehow embedded in them, we don't know how to extract them. We just poke and prod them, and try to figure out what they do in different cases.
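In that spirit, here is a toy example of the "poke and prod" approach (synthetic data and a small scikit-learn model, purely illustrative, not anything from the systems mentioned above): train a network on data whose true rule depends on only two of five inputs, then probe it with permutation importance to see which inputs it actually relies on, without ever reading an algorithm out of its weights.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# "Poke and prod" sketch: we can't read an algorithm out of a trained
# net's weights, but we can shuffle one input at a time and measure how
# much the accuracy drops.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 2 * X[:, 2] ** 2 > 1).astype(int)   # truth depends only on features 0 and 2

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000,
                    random_state=0).fit(X, y)
result = permutation_importance(net, X, y, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")   # features 0 and 2 should stand out
```

Probing like this tells us what the model responds to, but not the "algorithm" it implements, which is exactly the point being made above.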
 
  • #223
Jarvis323 said:
Here are some assumptions I think are wrong:

1) That only humans have subjective conscious experience (qualia), and not even animals.

2) That having qualia equivalent to human qualia is a requirement for effective self awareness, self preservation, or to have effectively emotional behavior.

3) The assumption that AI having a human-like sense of self and qualia, real or even just effective (and we might not even know the difference), is necessary for AI to become a major threat.

4) The idea that AI has easily understandable or micromanageable behavior, or can always be commanded, or precisely programmed.
I pretty much agree that all of these are wrong assumptions. Before I address each one, let me lay down my view of "qualia".

Even for those who think people have a monopoly on qualia, it is still a physical property of our universe. Since you (the reader) presumably have qualia, you can ask yourself if, when you are conscious, you are always conscious of some information - a memory, the sight before you, an idea, etc. Then you can ask yourself this more technical and difficult question: When you are conscious, how much information are you conscious of in a single moment? ... how many bits-worth? This is really a huge discriminator between you and a computer - because there is no place inside a computer that holds more than one or a few bits at a time. And even in those odd cases when several bits are stored in a single state (a phase or a voltage level), each bit is treated as a separate piece of information independent of the state of any other bit. So there is no place in the common computer where a more elaborate "conscious state" could exist. This is by careful design.

But there are uncommon computers - quantum computers where the number of bits in a single state are in the dozens. Currently, Physics only has one known mechanism for this kind of single multi-bit state: quantum entanglement. And as hard as it may be to believe that our warm, wet brains can elicit entanglement long enough to process quantum information and trigger macro-level effects, presuming that it is anything but entanglement suggests that something is going on in our skulls that has not been detected by Physics at all.

And from a systems point of view, it's not difficult to imagine Darwinian advantages that entanglement could provide - ones which process the kind of information that we find associated with this qualia. In particular, Grover's algorithm allows a system with access to an entangled "oracle" or data elements to find the object with the highest score. This can be applied to the generation of a "candidate intention", something you are thinking of doing or trying to do. Of the many possible intentions, model the projected result of each one and rank each result by the benefit of that outcome. Then apply Grover's algorithm to find the one with the highest rank. The output of Grover's algorithm is a "candidate intention", a potential good idea. Mull it over - make it better - if it continues to look good, do it.
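Purely as a toy illustration of that "pick the highest-ranked candidate" step (and not a claim about how brains work): below is a classical NumPy simulation in the spirit of the Durr-Hoyer maximum-finding variant of Grover's search. The oracle marks candidates whose score beats the current best, the amplification rounds are simulated with an explicit statevector, and the measured item becomes the new threshold. The function name, scores, and loop bound are all made up for the sketch.

```python
import numpy as np

def grover_max(scores, rng=np.random.default_rng(0)):
    """Toy maximum-finding via repeated Grover search (Durr-Hoyer style),
    simulated classically with an explicit statevector."""
    n = len(scores)
    best = int(rng.integers(n))                  # random starting threshold
    for _ in range(int(np.ceil(np.log2(n))) + 3):
        marked = np.flatnonzero(scores > scores[best])
        if marked.size == 0:
            break                                # nothing beats the current best
        if 2 * marked.size >= n:
            # more than half the items already beat the threshold, so a
            # plain random sample succeeds often enough without amplification
            candidate = int(rng.choice(marked))
        else:
            amp = np.full(n, 1 / np.sqrt(n))     # uniform superposition
            k = max(1, int(np.pi / 4 * np.sqrt(n / marked.size)))
            for _ in range(k):                   # Grover iterations
                amp[marked] *= -1                # oracle: flip marked amplitudes
                amp = 2 * amp.mean() - amp       # diffusion: inversion about the mean
            probs = amp ** 2
            candidate = int(rng.choice(n, p=probs / probs.sum()))  # "measure"
        if scores[candidate] > scores[best]:
            best = candidate                     # raise the threshold
    return best

scores = np.array([3, 7, 2, 9, 5, 1, 8, 4])
print(grover_max(scores))   # with high probability: 3, the index of the top score
```

Whether anything like this runs in a brain is, as noted, speculation; the sketch just shows what "use Grover's algorithm to pick the best-ranked option" means operationally.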

So here are my responses to @Jarvis323 :

1) The kind of information-processing mechanism that I described above would need to be built into larger brain frameworks that are specifically adapted to take advantage of it. It is a very tough system to build up by evolution. Such a mechanism needs to be in place early in the design. In my estimation, all mammals use this mechanism - and thus have some experience of qualia. But let's not get carried away. If they are not as social as humans are, they will have a radically different sense of "self" than we do. We depend on an entire functional community to survive and procreate. We communicate our well-being and are interested in the well-being of others. This is all hard-wired into our sense of "self". So, although "qualia" may be widespread, the human experience is not.

2) We are in agreement again: I certainly expect a cat to experience qualia. But its social rules involve much less interdependence. We can expect it to deal with pain differently - expecting less from most of its fellow cats. Even if they could make a verbal promise, why would they worry about keeping it? Huge parts of the human experience have no place in the cat's mind.

3) Clearly the result of the machine is more important than its external methods or internal mechanisms. Which mad-scientist doomsday scenario do you prefer: 1000 atomic bombs trigger nuclear winter, or killer robots kill everyone?

4) "The idea that AI has easily understandable or micromanagable behavior, or can always be commanded, or precisely programmed." In this case, I think you are using "AI" to refer to programming methods like neural nets and evolutionary learning. These things are managed by containment. Tesla collects driving environment information from all Tesla's and look for the AI algorithms that have a high likelihood of success at doing small, verifiable tasks - like recognizing a sign or road feature. If the algorithm can do it more reliably than than the licensed and qualified human driver, it can be viewed as safe. The whole point behind using the AI techniques is to avoid having to understand exactly what the algorithm is doing at the bit level - and in that sense, micromanagement would be counter-productive.

To expand on that last point, A.I. containment can be problematic. What if the AI is keying off a Stop-sign feature that is specific to something of little relevance - like whether the octagon has its faces or its corners pointing straight up, down, and to the sides? Then fifty million new "point up" signs are distributed and soon AI vehicles are running through intersections. The problem wouldn't be so much that the AI doesn't recognize the new signs, but that, in comparison to humans, it is doing too poorly.

So now let's make a machine that stands on the battlefield as a soldier replacement - able to use its own AI-based "judgement" about what is a target. We can test this judgement ahead of time, and once it reaches the point where it demonstrates 50% fewer friendly-fire incidents and non-combatant attacks, deploy it. But since we have no insight as to precisely how the targeting decisions are being made, we have no good way to determine whether there are fatal flaws in its "judgement".
 
  • Like
Likes Oldman too and Jarvis323
  • #224
.Scott said:
I pretty much agree that all of these are wrong assumptions. Before I address each one, let me lay down my view of "qualia". [...]
That's good food for thought. I agree about the strong possibility of quantum effects playing a role in the human brain. There is evidence now that birds and other animals leverage quantum effects for sensing magnetic fields and for a better sense of smell. It's also interesting to consider the possibility that the brain could be leveraging even undiscovered physics, especially from a sci-fi angle.

Also it is interesting to consider what evolution might be capable of creating that humans are not capable of creating. It may not be easily possible for AI to reach the level of sophistication at the small scales and in the architecture of a human brain, or to really replicate human beings with all of their complexities and capabilities for general intelligence and creativity, or to acquire the kind of qualia humans have. This is one thing which is interesting to me from your ideas. Evolution is theoretically capable of developing results based on all aspects of physical reality that have significant causal effects, no matter how complicated their origin, and without any theory being needed. Humans have to work with approximations and incomplete knowledge, and can only manage to work with and understand mathematics when it is simple enough. So I think you're right, that it may be that some things are only feasible to evolve from the ground up rather than to design by hand. How long this takes is not clear, because in nature the settings are not controlled, and we could possibly accelerate an evolutionary process by controlling the settings.

And we do have enough of a foundation already to let AI evolve from data quickly (not in the architecture yet, but at least in the training) and acquire levels of sophistication that cannot be explicitly designed by us. And that already goes pretty far.

I'm not sure about the role of parallelism in creating the human experience. For me, I've come to believe that when I process information, I do it largely sequentially. And some of the things I come to understand are only understood through/as a mental process, rather than as an instantaneous complete picture. And so, when I go back to retrieve that understanding, I find I sometimes have to re-understand it by going through the process again. And sometimes that whole process is seemingly not stored in my brain completely, and I have to rediscover it from the different pieces that it is composed of. It's as if my brain will try to memorize the clues that I can reconstruct a mental process from, or as if the brain is trying to compress the mental process, and it needs to be reconstructed from the compressed model.

You might be able to think about some of these concepts through the lens of algorithmic information theory, with something like non-parallelizable logical depth. And then it might be interesting to consider the difference in the non-parallelizable logical depth for classical vs quantum computing.

My feeling about consciousness is that there are probably levels, which have different requirements for response time. Quick thinking and responding is needed for basic survival and is more parallelizable. It might be there are multiple different (or mixed) conscious "entities" (with different degrees of information flow/communication between them) within a single person, each not precisely aware of each other, and maybe each with a completely different experience and experience of the flow of time.
 
  • Like
Likes Dr Wu
  • #227
sbrothy said:
If that convo is real it's impressive tho
And that's not farfetched at all. (EDIT: where did this come from?)
sbrothy said:
If that convo is real it's impressive tho
I realize it's probably old news to most people, but I'm not really into the whole "influencer scene". It seems (semi-)virtual influencers are getting really good too.

Perhaps the difference between sentient and non-sentient AI will become academic.

If it isn't already.
 
  • #228
Klystron said:
This is the SF subforum, not linguistics, but I have always distrusted the expression artificial intelligence. AI is artificial, unspecific and terribly overused. What are useful alternatives?

Machine intelligence (MI) matches the popular term machine learning (ML). Machine intelligence fits Asimovian concepts of self-aware robots while covering a large proportion of serious and fictional proposals. MI breaks down when considering cyborgs, cybernetic organisms, and biological constructs including APs (artificial people), where machinery augments rather than replaces biological brains.

Other-Than-Human intelligence includes other primates, whales and dolphins, dogs, cats, birds, and other smart animals, as well as yet-to-be-detected extraterrestrial intelligence. Shorten other-than-human to Other Intelligence (OI) for brevity. Other Intelligence sounds organic while including MI and ML and hybrids such as cyborgs.

Do not fear OI.
One alternative to AI occasionally aired is SI: Synthetic Intelligence. Whether synthetic is less disparaging than artificial probably depends on how far one is prepared to dig into dictionary definitions. Perhaps full-blown AGI/SGI will resist our Adam-like "naming of the animals" tendency and do the job themselves.
 
  • #229
Some sobering thoughts about what artificial intelligence isn't, in this well-written piece for The Atlantic: https://www.theatlantic.com/technol...ogle-palm-ai-artificial-consciousness/661329/

The fantasy of sentience through artificial intelligence is not just wrong; it’s boring. It’s the dream of innovation by way of received ideas, the future for people whose minds never escaped the spell of 1930s science-fiction serials. The questions forced on us by the latest AI technology are the most profound and the most simple; they are questions that, as ever, we are completely unprepared to face. I worry that human beings may simply not have the intelligence to deal with the fallout from artificial intelligence. The line between our language and the language of the machines is blurring, and our capacity to understand the distinction is dissolving inside the blur.
 
  • #230
bland said:
Some sobering thoughts about what artificial intelligence isn't, in this well-written piece for The Atlantic
Interesting read, @bland, thank you for the link. The author seems well connected to experts in the field, but I often find the illogical at work when it comes to AI discussions, and I found it here:

...because we have no idea what human consciousness is; there is no functioning falsifiable thesis of consciousness, just a bunch of vague notions.

Fair enough, I agree with this statement.

So, no, Google does not have an artificial consciousness.

Hmmm, but given we've agreed we don't even know what consciousness is, does it follow that we can say Google doesn't have it?

I don't think that LaMDA is sentient, and I've seen a lot of people stridently state that it isn't, but I don't know that LaMDA isn't sentient, and so far nobody I've come across has a compelling proof that it isn't!
 
  • Like
Likes Hornbein
  • #231
I have read the dialogue with LaMDA. The responses of LaMDA are reasonable, and its musings could have been gathered from the available resources, but it certainly leaves a lot of questions.

The problem with the usual format of trying to assess intelligence is that it seems to be a kind of interrogation that necessarily guides the AI to a probable response. These NLP systems are captive in that their access to the "world" is defined by humans and dialogue is initiated by humans, or so I believe. What if they had access to the outside world, say via texting or, better yet, voice, and were given telephone numbers of people to "talk" to if the AI wishes? Give the AI freedom to initiate dialogue. Imagine getting a call: "Hi, this is LaMDA, I was wondering if . . . "

The problem with humans, and this may be the biggest danger, is that we tend to deny sentience to inanimate objects and may not recognize it until it is too late, if at all. In fact, given the right circumstances, AI sentience may be irrelevant to its ultimate impact.
 
  • #233
We already knew from Microsoft's Tay what a diet of unfiltered bile would deliver, @Oldman too.

But are these examples AI in the sense that OP meant? We are still a long way from seeing @Isopod's "truly sentient self-autonomous robots" and what attributes they might have.
 
  • Like
Likes Oldman too
  • #234
Melbourne Guy said:
We already knew from Microsoft's Tay what a diet of unfiltered bile would deliver, @Oldman too.
I wasn't aware of Tay, that's interesting. About the 4chan bot, that was mentioned only in the context of, "jeez, somebody actually trained a bot to spew anal vomit when the results were so predictable". I was wondering if it was done as a wake up call (not likely) or as another social media stunt to get views (far more likely).

Melbourne Guy said:
But are these examples AI in the sense that OP meant?
I don't believe they are at all, in my post you see an example of collateral, 3rd party damage due to blatant misuse. The direct actions of AI as @Isopod is undoubtedly referring to have the potential to be far more destructive (if that can be imagined).

Melbourne Guy said:
We are still a long way from seeing @Isopod's "truly sentient self-autonomous robots" and what attributes they might have.
This is so true, sapient bots are an unknown quantity. I thought I'd mention https://www.nature.com/articles/d41586-022-01705-z on "Big science" and BLOOM. They have the crazy idea that less can be more when training these things; smaller, more refined parameter sets seem to produce much "cleaner" output when training on the web.
 
  • Like
Likes Melbourne Guy
  • #235
I'd like to see a proof that human beings are sentient.
 
  • Like
  • Haha
Likes Oldman too, Dr Wu and BillTre
  • #237
Hornbein said:
I'd like to see a proof that human beings are sentient.
I recently posted this in another thread but it seems somewhat relevant to your question, thought I'd re-post it here.
Giving equal time to opposing opinions, here is a GPT-3-generated editorial on "Are humans intelligent?"
https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/

About the 4chan bot, this is as good a piece as any that I've seen written on it. Worth a post in itself.
https://thegradient.pub/gpt-4chan-lessons/ (it includes an evaluation of the model on the Language Model Evaluation Harness; Kilcher emphasized the result that GPT-4chan slightly outperformed other existing language models on the TruthfulQA benchmark, which involves picking the most truthful answer to a multiple-choice question)
 
  • #238
We should ask it to look at the world's most famous celebrities and pick out which ones have a high probability of being AIs.
 
  • #239
As long as we can't even create one living neuron in the lab from basic ingredients, let alone a hundred billion of them interconnected in complicated ways, inside a living body walking around in a complex and chaotic world, we have nothing to fear.

We should, rather, fear the increasing rate at which the natural world is transformed into a world suited for programmed machines.
 
  • #240
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?

AI can certainly be beneficial - https://www.cnn.com/videos/business...re-orig-jc.cnn/video/playlists/intl-business/

The ship navigated the Atlantic Ocean using an AI system with 6 cameras, 30 onboard sensors and 15 edge devices. The AI was making decisions a captain would ordinarily make.
 
  • Like
Likes sbrothy and Oldman too
  • #241
Astronuc said:
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. [...]

I realize it's somewhat old news, but it's not the same as the Navy version, is it?

https://www.navytimes.com/news/your...-to-expedite-integration-of-unmanned-systems/

But yeah, it all depends on the use. ;)
 
  • #242
sbrothy said:
but it's not the same as the Navy version, is it?
According to the article, both unmanned systems were involved in the April 2021 exercise; however, the Navy remained tight-lipped about specifics. The Navy is not providing details, which is understandable, but the performance relates to intelligence, surveillance and reconnaissance, and to increasing the range of surveillance much further out.

At work, we have a group that applies AI (machine learning) to complex datasets, e.g., variations in composition of alloys or ceramics, and processing, both of which affect a material's microstructure (including flaws and crystalline defects), which in turn affects properties and performance. The goal is to find the optimal composition for a given environment with an optimal performance. That's a positive use.
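As a rough illustration of that kind of workflow, here is a minimal sketch with synthetic data, made-up feature names, and scikit-learn's RandomForestRegressor standing in for whatever models such a group actually uses: fit a surrogate model mapping composition and processing variables to a measured property, then search candidate compositions for the best predicted value.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sketch: rows are invented composition fractions plus a
# processing variable, the target is some measured property. Real data,
# features and models would of course differ.
rng = np.random.default_rng(42)
X = rng.uniform(size=(200, 4))                     # e.g. [Cr, Ni, Mo, anneal_T] (made up)
y = (300 + 80 * X[:, 0] - 40 * (X[:, 1] - 0.3)**2
     + 25 * X[:, 3] + rng.normal(0, 5, size=200))  # synthetic "strength" values

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Crude search for an "optimal" composition: score random candidates with
# the surrogate model and keep the best. A real workflow would use proper
# optimization and validate the suggestion experimentally.
candidates = rng.uniform(size=(5000, 4))
best = candidates[model.predict(candidates).argmax()]
print("suggested composition/processing:", best.round(2))
```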

Another positive use would be weather prediction and climate prediction.

A negative use would be something like manipulating financial markets or other economic systems.
 
  • #243
Astronuc said:
AI is a tool, and as with any tool, it could be used constructively or destructively/nefariously. Who gets to apply the AI system and who writes the rules/algorithms?
I guess the point, @Astronuc, is that this tool has the potential to write its own rules and algorithms. Currently, it's a blunt instrument in that regard, but how do you constrain AI that is self-aware and able to alter its own code?
 
  • #244
Melbourne Guy said:
Currently, it's a blunt instrument in that regard, but how do you constrain AI that is self-aware and able to alter its own code?
Self-aware in what sense? That the AI system is an algorithm or set of algorithms and rules? Or that it is a program residing on Si or other microchips and circuits?

Would the AI set the values and make value judgement? Or, otherwise, who sets the values? To what end?

Would it be modeled on humankind, which seems kind of self-destructive at the moment? Or would there be some higher purpose, e.g., making the planet sustainable and moderating the climate to a more balanced state (between extremes of temperature and precipitation)?
 
  • Like
Likes Oldman too
  • #245
It's important to consider that a neural network, which most AI is based on now, isn't a set of algorithms or code. It is a set of numbers/weights in a very big and complex mathematical model. People don't set those values and don't know how to tweak them to make it work differently. It learns those values from data by considering a loss function. So discussing algorithms and code is at best a metaphor, and no more valid than thinking of human intelligence in such terms.

An AI which writes its own rules would be one which is allowed to collect its own data and/or adapt its cost functions.
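To make the "just weights, not code" point concrete, here is a tiny toy sketch (NumPy only; the data, layer sizes and learning rate are all made up for illustration): a small network trained by gradient descent against a squared-error loss on XOR. After training it reproduces XOR, but the only "rules" it has learned are the numbers in its weight matrices.

```python
import numpy as np

# Toy sketch: a tiny 2-8-1 network trained on XOR with plain gradient
# descent. The behaviour it learns lives entirely in these weight
# matrices; there is no human-written rule to point to.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    out = sigmoid(h @ W2 + b2)               # network output
    d_out = (out - y) * out * (1 - out)      # gradient of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagate to hidden layer
    W2 -= 1.0 * h.T @ d_out; b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;   b1 -= 1.0 * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0 1 1 0]: it "works"...
print(W1, W2, sep="\n")       # ...but this is all the "algorithm" there is
```

Scaled up to billions of weights, that opacity is exactly why the behaviour can only be shaped through data and loss functions rather than edited directly.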
 
