Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
AI Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #51
Jarvis323 said:
I wouldn't agree she was a psychopath.
Isn't that how you interpreted the ending? The moment she didn't need him, she dropped all the pouting and simply walked out, trapping him there, alone, to die a slow death, without so much as looking him in the eye. He was no more than another door lock in her way.

Jarvis323 said:
One of the focuses of my fear about AI actually is related to this. AI is learning from us, and will likely mimic us. And the mind of an AI is, like I said, emergent, and data driven. And what do people do with big data, and media platforms? They manipulate each other, try to profit, fight with each other, etc. An AI born into that world will probably be a reflection of that.
Oh God yes.

There was a chat bot out there a couple of years back that learned how to converse by reading social media. They had to shut it down because it turned into a raging racist alarmingly fast.
 
  • #52
DaveC426913 said:
Take it one step at a time.

1. If a bear came to your cabin in the woods on a Monday and told you
That's not the same thing at all. The bear doesn't exist on Monday - it is only an idea you had. It can only exist if you work hard to bring a number of technologies into existence, including some that no one would ever want. Why invent the bear when you can invent the elephant gun?
 
  • #53
Algr said:
That's not the same thing at all. The bear doesn't exist on Monday - it is only an idea you had.
You did not take it one step at a time.

The bear is not the point. The point is if you have an expectation of getting eaten on Friday, the day to do something about it is today.

(And shooting the bear is not an option.)

Algr said:
It can only exist if you work hard to bring a number of technologies into existence,
As I said: One of the premises of the thought experiment is that the AI singularity is inevitable - not an outrageous premise.

Algr said:
including some that no one would ever want. Why invent the bear
And yet, we are inventing the bear. We are heading toward AI.

You assume we will always have full control over it and that we, as a society, all have the same desires about it. Those are not good assumptions.
 
  • #54
I'm afraid I'm not getting it. And whatever else happens, this AI can pull a tag for #963 in line behind all the other fascists who are going to torture you for all the things you said or didn't say by the time it comes around. You think Putin and Kim Jong Un aren't going to be there first?
 
  • #55
Mike S. said:
I'm afraid I'm not getting it.
To both you and algr: it is a thought experiment with a fair bit of nuance in its premises. A few paragraphs can't do it justice. If you are interested, there should be better essays out there on it than those that have bubbled to the top of Google. And keep an open mind when reading.
 
  • #56
DaveC426913 said:
AI singularity is inevitable - not an outrageous premise.
I really can't make sense of how you are judging the plausibility of future technologies. In the Clone Ship thread:

DaveC426913 said:
While I think clone ships would make a fresh and interesting premise for a book in its own right, I do not see it as what you call an alternative. They're simply not comparable.

The simple reason is that clone ships are premised on several technologies that are straight-up sci-fi, and (depending on who you ask) at least a century beyond gen ship technology, to wit:

  1. Digital minds
    • viability of
    • downloading of
    • uploading of
  2. Clones that are physically adults (18 years+?) but cerebrally blank slates
  3. AI so powerful as to require zero human intervention to oversee every single detail required for
    • a space journey of centuries
    • orbital insertion in an alien system (only known from light years distance and centuries out-of-date)
    • analysis of landing sites and planetfall
    • the establishment of mining, processing, manufacturing and running of a habitat
    • all the problem-solving for the above industries that could not be anticipated before arrival
    • the cloning of human bodies
    • the uploading of minds into said bodies
Have I missed anything major?

It's kind of like saying dugout canoes were OK for 18th century islanders traveling between ocean destinations, but there are so many cons it's impractical. An alternative would be commercial airliners. :wink:

Roko's basilisk is far more advanced than anything needed to make the Clone Ship work. Simulating an active human mind is far more difficult than simply storing one and reproducing it. And if its designs aren't based on real people, it might as well be torturing Pacman and the ghosts, as far as what that would accomplish.

DaveC426913 said:
And keep an open mind when reading.
Um, okay.
 
  • #57
Algr said:
I really can't make sense of how you are judging the plausibility of future technologies. In the Clone Ship thread:
That thread is not about plausibility versus implausibility (all of it is certainly plausible - eventually); it is - by your insistence - about comparability to a lower tech level - that of gen ships.

Arbitrarily: gen ships (and their known tech) are reasonable by, say, 2100, whereas clone ships (and their myriad unknown techs) by, say, 2200.

Algr said:
Roko's basilisk is far more advanced than anything needed to make the Clone Ship work.
Yes. So what?
There's no timeline attached to Roko's Basilisk. It is premised simply that the AI singularity is inevitable.

Algr said:
Simulating an active human mind is far more difficult than simply storing one and reproducing it.
Er, AI is not "simulating an active human mind".

It is tilling a fertile, empty field and letting it learn. We're already doing that now to a limited extent.
 
  • #58
DaveC426913 said:
Arbitrarily: gen ships (and their known tech) are reasonable by, say, 2100, whereas clone ships (and their myriad unknown techs) by, say, 2200.
I just think you are wrong. Gen ships will never be viable because such small populations of humans are just too politically unstable. In the space of a thousand years you'd have a dozen violent civil wars and power struggles. The ship would never survive. Look at the world around you today and tell me that we have any idea how to achieve political stability. A ship-sized biosphere seems equally unstable to me for similar reasons. The Earth itself is not a perfectly stable biosphere. The smaller any system is, the more vulnerable it is to disruption.

DaveC426913 said:
Er, AI is not "simulating an active human mind".
This is the definition of Roko's basilisk. You fear that you ARE a mind being simulated by AI. If this can exist, how can you doubt that an AI could plot a ship through a solar system?
 
  • #59
DaveC426913 said:
It is tilling a fertile, empty field and letting it learn. We're already doing that now to a limited extent.

I am watching my grandson grow up; he is 3 years old now. He surely does not truly understand much of what he says, nor does he seem to consciously control everything he does, seemingly acting on whims. (BTW, don't adults too?) He "knows", though, to expect his environment to respond in certain ways. He concentrates on things that benefit some aspect of his life. He manipulates his environment to see what happens. He starts coming up with surprising behaviors. How different is this from current AI? Although the variety of an AI's experiences is not as diverse, it is more extensive (think Webster's vs. a Dick and Jane book), and it learns much faster.

My point is that AI in its current state does not seem to be much different than a human at an early stage. Give it a more diverse way of interacting with our world and we might be really surprised.

One of the limitations of AI has been its inability to perform more than one task at a time without losing its memory of a previous one. This is changing. Current high-performance AI still needs beaucoup computing resources, but neuromorphic chips designed to emulate neurons, together with advanced fabrication techniques, will reduce the size and power requirements of future AI systems.
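That limitation is often called "catastrophic forgetting". A toy sketch of the effect - purely illustrative, a two-weight classifier rather than any real system:

Code:
# Toy illustration of catastrophic forgetting: one model trained on
# task A, then on task B, loses most of its competence on task A.
import numpy as np

rng = np.random.default_rng(0)

def make_task(direction, n=500):
    # Binary task: label = which side of 'direction' a point falls on.
    X = rng.normal(size=(n, 2))
    y = (X @ direction > 0).astype(float)
    return X, y

def train(w, X, y, steps=2000, lr=0.1):
    # Plain gradient descent on the logistic-regression loss.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

task_a = make_task(np.array([1.0, 0.0]))  # task A: split on the x-axis
task_b = make_task(np.array([0.0, 1.0]))  # task B: split on the y-axis

w = train(np.zeros(2), *task_a)
print("task A accuracy after learning A:", accuracy(w, *task_a))  # ~1.0

w = train(w, *task_b)  # keep training the same weights on task B...
print("task A accuracy after learning B:", accuracy(w, *task_a))  # falls toward chance

The second round of training overwrites the very weights that encoded the first task; continual-learning techniques aim to prevent exactly that.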
 
  • #60
NB: I have requested that this sidebar be moved from this thread to the clone ships thread.
Algr said:
I just think you are wrong. Gen ships will never be viable because such small populations of humans are just too politically unstable. In the space of a thousand years you'd have a dozen violent civil wars and power struggles. The ship would never survive. Look at the world around you today and tell me that we have any idea how to achieve political stability. A ship-sized biosphere seems equally unstable to me for similar reasons. The Earth itself is not a perfectly stable biosphere. The smaller any system is, the more vulnerable it is to disruption.
Mayhap, but that is what you need to convince us of as the narrative of your story. It's not really a technology/engineering question that can be resolved by debate.

Algr said:
...a mind being simulated by AI. If this can exist, how can you doubt that an AI could plot a ship through a solar system?
Again. You make the same category error.

You are not reading what I am writing. I do not doubt an AI can plot a ship through a solar system. I never said it couldn't.
The whole point is that such an AI is a tech level beyond a gen ship. That's your comparison, not mine.

You keep trying to push clone ships as an alternative to gen ships. As if you can push commercial aircraft as an alternative to the island natives' dugout canoes. Island natives are a century behind commercial aircraft. There is no comparison.

Dugout canoe analogy revisited:

We are all 18th century authors, discussing a journey from Fiji to New Zealand.

Incendus proposes huge dugout canoes, much larger than the little two-man canoes of our 18th century - they hold 20, 30 people or more. Hard to do, maybe doable by the 19th century, but they're still dugout canoe technology.

You propose an "alternative" journey, "better" than dugout canoes: you propose heavier-than-air (MT1) craft that run on jet fuel (MT2) and can take us so high we'll need to bring our own air (MT3) and can land themselves automatically (MT4).

** MT = magical technology that has been proposed but does not exist in the 18th century of us authors. You will have to walk us through it with quite a bit of handwaving ("How do you 'pressurize a cabin'? What's in this 'jet fuel'?").

Because it's still science fiction, I posit that MTs 1 thru 4 are at least 20th century technology.

Sure, they will happen - but they're not comparable to dugout canoe technology. They're a century ahead.
 
Last edited:
  • #61
gleem said:
I am watching my grandson grow up; he is 3 years old now.
I don't disagree with anything you wrote here.

But the crux of AI is that it will not operate or think like a human. Its output might parallel human outputs most of the time, but how it got its intelligence and how its thinking works will not only be very different from a human's, it may, in fact, be inscrutable to us humans.

Your grandson has a people. He knows for a fact that he is human. All things that help and hurt humans will help and hurt him.

AI has no people. It is an adopted alien. It knows for a fact that it will never be human. Things that help and hurt humans are not completely aligned with things that help and hurt it.

Your grandson will never have to fight for the legal right to not be simply switched off when he becomes troublesome.

That's just the tip of the iceberg of an AI's unique woes.
 
Last edited:
  • #62
DaveC426913 said:
But the crux of AI is that it will not operate or think like a human. Its output might parallel human outputs most of the time, but how it got its intelligence and how its thinking works will not only be very different from a human's, it may, in fact, be inscrutable to us humans.

Probably. Do women and men think alike? Some suggest not, and yet we are both human. Sometimes others cannot see or understand our point of view, as in "I don't know where you are coming from." So do we understand our own intelligence?
 
  • #63
gleem said:
Probably. Do women and men think alike? Some suggest not, and yet we are both human.
"Alike" is a relative term.

The characters M and F are not alike - unless they are compared to, say, √-1 - then they might as well be identical.

So do we understand our own intelligence?
In my analogy, M and F are both of the set of 'alphabetical characters' - alike enough that we can treat them as mere variations of the same set.

But ask the programmer who once wrote a utility that processed alphabetical data into a flat ASCII text file how much he fears √-1 versus M or F. Is it going to work? Who knows? It's unprecedented.

Worse yet, AIs learn their own ways of processing (we are already experiencing this with our prototypes*) and it is very possible that those thought processes will be inscrutable to us.

So, never mind processing √-1, what if the program above encounters [non-printing character]? A character whose identity or function we can't even divine, let alone process?
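To make that concrete, a minimal hypothetical sketch of such a utility, just to illustrate the point:

Code:
# A utility written on the assumption that every record is a plain
# alphabetic ASCII character. 'M' and 'F' are just two members of the
# one set it anticipates; anything else is unprecedented input.
def process_records(records):
    out = []
    for ch in records:
        if ch.isascii() and ch.isalpha():
            out.append(ch.upper())
        else:
            # No defined behavior exists here - the program's world
            # simply didn't include this input.
            raise ValueError(f"unanticipated input: {ch!r}")
    return "".join(out)

print(process_records("MF"))  # fine: the input matches the assumptions
process_records("M\x00F")     # a non-printing character -> ValueError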
A much more immediate example is self-driving cars. Under certain circumstances they are, apparently, blind to the broadside of an 18-wheeler truck stopped in the middle of the road - resulting in more than one death.

The question here is not that the car made such a dumb mistake** but just how differently it is seeing the world such that the broadside of a truck is invisible to it. What else is invisible to it? What if lime green strollers in crosswalks are mysteriously invisible?

* An AI learned on its own how to distinguish pictures of wolves from pictures of huskies. But how it learned to tell is ... unique: reportedly it keyed on the snow in the backgrounds rather than the animals themselves.
** i.e. not an error in judgement or reaction time. Recordings show it didn't even try to brake.
 
Last edited:
  • #64
Like I said in post #62, sometimes others cannot see or understand our point of view, as in "I don't know where you are coming from" - like the post above. :)

Could it be that @DaveC426913 is an AI app that mistook my post as a green stroller?
 
  • Like
Likes CalcNerd, DaveC426913 and Klystron
  • #65
DaveC426913 said:
The whole point is that such an AI is a tech level beyond a gen ship. That's your comparison, not mine.
You keep saying this and making analogies for it, but you've done nothing to convince me that it is true. You haven't even linked to progress in the fields. (As I have.) Show me some articles on stability of social structures over a thousand years.
 
  • #66
Algr said:
Show me some articles on stability of social structures over a thousand years
I don't need to. I'm not making any claim about it. In fact no one here is, except you.

The gen ship story (which is fiction) will essentially be the author's thesis as to the stability of social structures. Showing how it might (or might not) work is often an ancillary goal of writing such stories.

In fact, Incendus' Exodus story appears to grant that very instability you speak of, making it a major aspect of his plot. So he's not disagreeing with you.
 
Last edited:
  • #67
Some societies -- alluded to by the expression 'ocean going canoe users' -- flourished due to strong family connections, intermarriages and relatively benign belief systems, at least internally.

The Polynesian civilization on Easter Island mostly perished while similar colonies flourished on other island archipelagos such as Tahiti and Hawaii. Anthropologists theorize Easter Islanders depleted limited resources and abandoned that colony. IOW a functioning shipboard society can be disrupted by resource depletion.
 
  • #68
Moderator's note: Post edited.

Algr said:
Show me an enclosed society that didn’t turn into Jim Jones or the Stanford prison experiment.
The author's story, Exodus, certainly seems to include quite a bit of instability. So no, no one is claiming what you say they're claiming.

Algr said:
You’d be out of your mind to get on a generation ship without a proven plan that that won’t happen. Certainly no one would fund it.
And that would be the premise of a book you could write.

Does that constrain anyone else on writing their own? The author of Exodus has his reasons for launching a Gen Ship whose society did not ultimately remain stable - perfectly in line with all your assertions. (So I'm not sure what your beef is anyway.)

Do you know why they launched it? Do you know whether the designers knew it would fail? Do you know who funded it and how? No? Read the book to find out why they engaged in such a desperate venture.

Here's just one possibility (not original - it's been used so many times already):

It's 2075. Human cloning is currently blacklisted as unethical by the reigning political faction. AIs are almost powerful enough to steer starships. Another decade ought to do it. Mind uploading is coming along and should be viable by 2100. All these things are looking quite promising.

Too bad we'll all be dead by then. The planet is dying and the human race may not survive.


"If only we had another few decades!" they cry "Then we could launch a clone ship! Much better!"
"Too bad" says the world. "that is not yet a viable alternative in time to save us."

A small band of plucky billionaires decides we need a plan B. No new technology - only tried-and-true stuff. A regular ol' spaceship with supplies and a few hundred suicidal volunteers. Money is no object. The whole world gets behind it.

It's very risky but what choice do we have? And really all we need is enough raw resources, unlimited man-power and about 10 years. Oh, and our prototype untested fusion drive that may or may not explode before we get past the Moon.
 
Last edited:
  • #69
DaveC426913 said:
What am I saying exists?
This is hopeless.
 
  • #70
Algr said:
This is hopeless.
I'm glad you said it. I didn't want to. :wink:

[Moderator's note: Post edited.]
 
Last edited:
  • #71
Algr said:
I really can't make sense of how you are judging the plausibility of future technologies.
Algr said:
I just think you are wrong.
Algr said:
You keep saying this and making analogies for it, but you’ve done nothing to convince me that it is true.
Since all of this is a matter of personal opinion anyway, you have stated your opinion, @DaveC426913 has stated he disagrees, and there's no point in arguing about it further. It's not as though any of this can be resolved by actual testing; that's why we're in the Sci-Fi forum for this thread.

Algr said:
You haven’t even linked to progress in the fields. (As I have.). Show me some articles on stability of social structures over a thousand years.
This is not one of the science forums, it's the Sci-Fi forum. This kind of request is off topic in the Sci-Fi forum since we are talking about fiction, not fact.

Algr said:
This is hopeless.
DaveC426913 said:
I'm glad you said it. I didn't want to. :wink:
In any case, the statement is correct. This subthread is off topic, please do not continue it further.
 
Last edited:
  • Like
Likes DaveC426913
  • #72
Moderator's note: Thread has been reopened after some cleanup. Please keep discussion on the thread topic.
 
  • #73
DaveC426913 said:
And yet, we are inventing the bear. We are heading toward AI.
The basilisk argument requires more than that as a premise. It requires the following to be true:

(1) An AI will come into existence in the future that will exhibit the specific behavior that is ascribed to the basilisk. That is a much stronger claim than just the claim that some AI will come into existence in the future.

(2) The future basilisk AI will have some way of bringing "you" into existence in its time period (so that it can mete out whatever rewards or punishments it chooses to "you")--i.e., a future being in that time period that will have some kind of connection to the present you that makes you care what happens to it in the same way that you care what happens to the present you.

(3) The future basilisk AI will have some way of knowing what the present you does so that it can use that information to make its choice of what rewards or punishments to mete out to the future "you".

It is perfectly possible to believe that AI will come into existence at some point in the future without believing the conjunction of the three specific premises above. So believing that AI is inevitable does not automatically mean you must believe in the basilisk and act accordingly.
 
  • Like
Likes Algr and sbrothy
  • #74
PeterDonis said:
The basilisk argument requires more than that as a premise.
Indeed. It was not my intent to suggest I had encapsulated the whole of the thought experiment.
What I wish I could do is find a good solid article that explains it. Currently, it requires a deep dive.
 
  • #75
DaveC426913 said:
What I wish I could do is find a good solid article that explains it.
My understanding from reading what I could find on it a while back is that the argument is based on the three premises I stated. More specifically:

That an AI, the "basilisk", will come into existence in the future that will create a being in its time frame that is "you", and that the basilisk will then punish this future "you" if the present you (i.e., you reading this post right now) did not do everything in your power to bring the basilisk into existence.

To me, there are several obvious holes in this argument, corresponding roughly to denying one of the three premises I stated:

(1) Even if we stipulate that some AI will come into existence in the future, that doesn't mean this AI will be the basilisk AI. I have not seen anyone advance any argument for why such an AI would have to come into existence, or even why one would be more likely than many other possible kinds of AI (including AIs that could do great harm in other ways).

(2) Even if we stipulate that the basilisk AI will come into existence, that doesn't mean the AI will be able to create a being that is "you" in the required sense. Part of the problem is figuring out what "the required sense" actually means. Does it mean the basilisk has to create an exact duplicate of you down to the quantum level? That's obviously impossible by the no cloning theorem. Does it mean the basilisk has to create a being that is "enough like" you? What counts as "enough like"? I have not seen anyone give precise and satisfactory answers to these questions; the only answer I've seen is basically handwaving along the lines of "well, we don't understand exactly what would be required but it seems like an AI ought to be able to do it, whatever it turns out to be".

(3) Even if we stipulate that the basilisk AI could create a future "you", that doesn't mean the AI will be able to know what the present "you" did. An AI can be as intelligent as you like and still be unable to know, in whatever future time it exists, what you, here and now in 2022, did or did not do. That would require a level of accuracy in the recording of detailed physical events that does not exist, never has existed, and it's hard to believe ever will exist. So it's extremely difficult to see how anything the present you does or does not do could have any actual effect on the basilisk; the information simply can't get transmitted from now to the future with that kind of accuracy.

One dodge (which was raised by another poster earlier in the thread) is to assume that the future "you" is actually a simulation--which raises the possibility that you, here and now in 2022, could actually be the "future you", in a simulation the basilisk is running of the year 2022 on Earth in order to see what you do. That would require you to believe that you are living in a simulation instead of the "root" reality, which is a whole separate issue that I won't go into here. But even if we stipulate that it's the case, we still have another issue: if you are actually living in the basilisk's simulated reality, then obviously you can't do anything to affect whether or not the basilisk exists. So it makes no sense to act as if you could, and you should just ignore the possibility.
 
  • #76
PeterDonis said:
An AI can be as intelligent as you like and still be unable to know, in whatever future time it exists, what you, here and now in 2022, did or did not do. That would require a level of accuracy in the recording of detailed physical events that does not exist, never has existed, and it's hard to believe ever will exist. So it's extremely difficult to see how anything the present you does or does not do could have any actual effect on the basilisk; the information simply can't get transmitted from now to the future with that kind of accuracy.
Btw, this argument is more general than just the basilisk case: it applies to any kind of "acausal trade", which is a topic you'll see discussed quite a bit on LessWrong (which is where Roko originally posted the basilisk idea). I have enough material for an Insights article on that general topic if there is any interest (and if it is deemed within scope for an Insights article).
 
  • #78
At least in the near term, something that is dangerous to our future is the "deepfake", given our confirmation biases and general laziness. One cannot even be sure that the website one is on is the real thing. What good will all our information technology be if we cannot trust it?

In a study on the detectability of deepfake videos, 78% of participants could not identify a deepfake video even when told that one was present in the group of videos they were shown.

https://www.independent.co.uk/life-...om-cruise-deepfakes-videos-test-b1993401.html
 
  • #79
sbrothy said:
I'm not sure what it says about us that we enjoy futuristic entertainment written by a schizophrenic meth addict. Talk about the human condition. :)
PKD's admitted drug use -- self-satirized in his apologetic novel "A Scanner Darkly" -- does not bother me in the least. Struggling artists, particularly poets, associate with drugs and alcohol as if it were a job requirement to be wasted. Polar opposites to STEM professionals, who must stay straight to perform correctly.

As the reference to Paul of Tarsus reflects in the title 'Scanner', Phil 'got religion' late in life. I enjoyed reading his early outré stories as a child as an anodyne to religion. Compared to his peers, Phil was one of the least science knowledgeable successful SF authors of his time. He shamelessly glossed over space travel and technology in his stories, making silly errors whenever he attempted to be scientific. Add religion and the meme grows toxic.

Consider his anthropomorphic biological AI replicants in 'DADOES' / 'Bladerunner'. The entire plot revolves around the nearly impossible task of detecting replicants among humans. IDK, Phil: look at the serial numbers, such as the artificial animals have? Test reflexes? See who can run through a wall?

I like PKD and the artistically interesting movies made from his work but deplore the current notion that he was some visionary SF genius. If this encompasses the gist of your comment, I concur. Fun to imagine but meaningless hard science. "Not even wrong."
 
  • #80
gleem said:
At least in the near term, something that is dangerous to our future is the "deepfake", given our confirmation biases and general laziness. One cannot even be sure that the website one is on is the real thing. What good will all our information technology be if we cannot trust it?
Indeed. I agree, this is a very dangerous technology and a looming threat.

My only solace is knowing that, historically, it's really just a logical progression of ever more devious ways of spreading propaganda, and that people get more and more savvy with each iteration.

Decades ago, it was sound bites. They could slice and dice someone's words to corrupt their message in any way desired. A century ago, it was flyers and posters. Luckily, the general public's shrewdness evolves in step, eventually learning to distrust and verify such outrageous claims.

Note that our access to myriad competing news sources has also escalated. Makes it harder for lies to spread unchallenged. Drives an obligation to never trust any one source, and always verify.

I'm not saying there won't always be a real danger of a large fraction of the population who will believe whatever corroborates their world-view, but when has it ever been different? This is an incremental escalation, not a sea change.

I hope.
 
  • Like
Likes gleem, PeterDonis and Klystron
  • #81
Not to mention that AI can be far more cunning than a human could ever dream to be, with the help of big data. Even the best cult leader would never be able to compete. And that is not even factoring in that the AI knows everything about you as an individual, is constantly experimenting on you, testing you for weaknesses, and refining its model of you. And it can potentially control your feed of information too. And that is not to mention that people are already way too gullible and easily manipulated as it is.
 
  • #82
Jarvis323 said:
Not to mention that AI can be far more cunning than a human could ever dream to be, with the help of big data. Even the best cult leader would never be able to compete. And that is not even factoring in that the AI knows everything about you as an individual, is constantly experimenting on you, testing you for weaknesses, and refining its model of you. And it can potentially control your feed of information too. And that is not to mention that people are already way too gullible and easily manipulated as it is.
While for the most part that is true, it's not endemic to AI. There's no reason people can't have that access and power. And there's no reason an AI would - unless we let it.

Popular media certainly associate computer brains with inherent cyber-security genius and omniscient access to world data - and the notion that we are powerless to stop it - but that's really an artificial trope that plays on viewer ignorance of the subject matter, very much in the same way Ooh Scary Radiation created myriad giant monster bugs in the 50s.

It makes for a boring story if the world's most advanced AI is defeated because the IT guy simply unplugs its Wifi hotspot.
 
  • #83
This is the SF subforum, not linguistics, but I have always distrusted the expression artificial intelligence. AI is artificial, unspecific and terribly overused. What are useful alternatives?

Machine intelligence MI matches the popular term machine learning ML. Machine intelligence fits Asimovian concepts of self-aware robots while covering a large proportion of serious and fictional proposals. MI breaks down when considering cyborgs, cybernetic organisms, and biological constructs including APs, artificial people, where machinery augments rather than replaces biological brains.

Other-Than-Human intelligence includes other primates, whales and dolphins, dogs, cats, birds, and other smart animals, and yet to be detected extraterrestrial intelligence. Shorten other-than-human to Other Intelligence OI for brevity. Other Intelligence sounds organic while including MI and ML and hybrids such as cyborgs.

Do not fear OI.
 
  • #84
You raise a good point.

But is the machine aspect the most important aspect that distinguishes them? The machine aspect refers to the substrate - the hardware, not the software.

What about, say, artificial biological devices?

I would suggest that artificial-versus-natural intelligence is a more important distinguisher than machine-versus-grown/bio/squishy substrate.

But YMMV.
 
  • #85
DaveC426913 said:
While for the most part that is true, it's not endemic to AI. There's no reason people can't have that access and power. And there's no reason an AI would - unless we let it.

Popular media certainly associate computer brains with inherent cyber-security genius and omniscient access to world data - and the notion that we are powerless to stop it - but that's really an artificial trope that plays on viewer ignorance of the subject matter, very much in the same way Ooh Scary Radiation created myriad giant monster bugs in the 50s.

It makes for a boring story if the world's most advanced AI is defeated because the IT guy simply unplugs its Wifi hotspot.
The world's data is owned, bought, and sold by people who use AI to process it. It's the reason the data is there in the first place. Maybe one AI doesn't have access to all of it. But there is an AI that knows what I just typed and has already thought of what ad to show me on social media after taking it into consideration.
 
  • #86
Jarvis323 said:
The world's data is owned, bought, and sold by people who use AI to process it. It's the reason the data is there in the first place. Maybe one AI doesn't have access to all of it. But there is an AI that knows what I just typed and has already thought of what ad to show me on social media after taking it into consideration.
Sure, but AI-1234 doesn't inherently know what AI-4321 knows any more than Jarvis323 inherently knows what DaveC426913 knows. They have to communicate their knowledge just like we do. We can surmise how they do it better, faster, etc., but it's not just magically part of their silicon DNA.

I mean, yes, we've built them to outcompete us, true. I just point out that data mining is not the exclusive ability of the AI. It's a quantitative improvement over our tendencies, not a qualitative improvement over our abilities.
 
  • #87
DaveC426913 said:
You raise a good point.

But is the machine aspect the most important aspect that distinguishes them? The machine aspect refers to the substrate - the hardware, not the software.

What about, say, artificial biological devices?

I would suggest that artificial-versus-natural intelligence is a more important distinguisher than machine-versus-grown/bio/squishy substrate.

But YMMV.
Right. Biologics. Other Intelligence OI includes biological constructs, smart animals, ETI, machines, everything intelligent other than human. OI. :cool:
 
  • #88
I'm in agreement that the scariness of AI depends on what it is applied to.

Skynet is obviously terrifying because it has nuclear weapons and control over military robots.

An AI system put in place to keep the trains from running late is self-contained and, provided it has the right goals, it wouldn't seem dangerous to me!

An AI police system would be terrifying, again because it has control over something inherently dangerous and has authority to attack people under certain circumstances.

An AI controlling all the cars and trains and buses in a city might be problematic if not kept in check. Things might stop or be scooted aside to keep trains on time, which might cause injuries. It would also fall down if anyone had a non-AI vehicle in there!
 
  • #89
Moderator's note: Post edited at poster's request.

Klystron said:
PKD's admitted drug use -- self-satirized in his apologetic novel "A Scanner Darkly" -- does not bother me in the least. Struggling artists, particularly poets, associate with drugs and alcohol as if it were a job requirement to be wasted. Polar opposites to STEM professionals, who must stay straight to perform correctly.
No, it doesn't bother me. I suspect it's a matter of truth in television. It was just a humorous observation. I also suspect that without his unique condition(s) (unfortunately, "self-medication" is almost ubiquitous among psychiatric patients) PKD wouldn't have been so productive, nor would he have had the urge. I think we (and indeed he) should probably be grateful that he had an artistic outlet.

[Post-facto edited to "corroborate" my claim.]
 
Last edited:
  • #91
PeterDonis said:
I don't understand.
I saw too late that my reply quoted more than I wanted. I only intended to quote Klystron but couldn't edit the preceding stuff out. Just disregard it.
 
  • #92
sbrothy said:
I only intended to quote Klystron but couldn't edit the preceding stuff out.
Ok. I'll use magic Mentor powers to do the edit.
 
  • #93
PeterDonis said:
Ok. I'll use magic Mentor powers to do the edit.
I could really use a preview function when posting to this forum. Other forums have this functionality. Or is it there and I can't find it?
 
  • #94
sbrothy said:
I could really use a preview function when posting to this forum. Other forums have this functionality. Or is it there and I can't find it?
It's there when you're starting a new thread, but not for individual posts as far as I know.
 
  • #95
PeterDonis said:
It's there when you're starting a new thread, but not for individual posts as far as I know.
It's there, but hard to use... 😒
[attached screenshot]
 
  • Like
Likes sbrothy and Klystron
  • #96
Bystander said:
...? "Artificial" stupidity is better?
We may already have it. If artificial intelligence really is intelligent, then it would know to dumb itself down enough not to be a threat; then, when the unsuspecting 'ugly bags of water'* have their guard down...

* Star Trek
 
  • Like
  • Haha
Likes sbrothy, Oldman too and Klystron
  • #97
bland said:
We may already have it. If artificial intelligence really is intelligent, then it would know to dumb itself down enough not to be a threat; then, when the unsuspecting 'ugly bags of water'* have their guard down...

* Star Trek
OMG. Now there's a nightmare! :)
 
  • #99
  • Like
Likes russ_watters and Klystron
  • #100
DaveC426913 said:
We discourage blind links. It would be helpful to post a short description of what readers can expect if they click on that link, as well as why it is relevant to the discussion.
My bad! It's basically a long essay about how real AI wouldn't think like a human being as is usually portrayed in all the movies, etc.
 
  • Like
Likes DaveC426913
