Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI

Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #351
russ_watters said:
Then what is it [machine learning]?
It's when a machine teaches itself. No programming. All you do is tell it the rules of the game and whether it has won or lost. A training set may or may not be supplied.
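Here's a toy sketch of that loop — nothing like AlphaZero in scale, just tabular Q-learning on a tiny invented take-away game. The game, the hyperparameters, and all names are my own inventions; the point is only that the learner is given nothing but the legal moves and the win/loss signal:

```python
import random
from collections import defaultdict

# Toy "Nim": players alternately take 1-3 stones; whoever takes the last stone wins.
# The learner is told only the rules (legal_moves) and, at the end, who won.
Q = defaultdict(float)                       # Q[(stones_left, move)] -> learned value
ALPHA, EPSILON, GAMES = 0.5, 0.1, 50_000

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    if random.random() < EPSILON:            # occasionally explore
        return random.choice(legal_moves(stones))
    return max(legal_moves(stones), key=lambda m: Q[(stones, m)])

for _ in range(GAMES):
    stones, player, history = 15, 0, []
    while stones > 0:
        m = choose(stones)
        history.append((player, stones, m))
        stones -= m
        player ^= 1
    winner = player ^ 1                      # whoever just moved took the last stone
    for p, s, m in history:                  # +1 for the winner's moves, -1 for the loser's
        r = 1.0 if p == winner else -1.0
        Q[(s, m)] += ALPHA * (r - Q[(s, m)])

# With enough games this should approximately rediscover the textbook
# strategy: always leave your opponent a multiple of 4 stones.
print({s: max(legal_moves(s), key=lambda m: Q[(s, m)]) for s in range(1, 16)})
```

No opening book, no heuristics: the policy falls out of the win/loss signal alone.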

When AlphaGo defeated Lee Sedol to become Go champion of the world, I knew that was the biggest engineering breakthrough of my lifetime. It was much more impressive than chess because the number of possible Go positions is far greater than the number of particles in the visible universe. Go cannot be mastered by brute force.

AlphaGo was given a training set of Go games played by experts. Shortly afterward AlphaGo was defeated by AlphaZero, which was given no training set whatsoever. Playing only against itself, AlphaZero became the world's strongest chess player after nine hours of self-play, defeating Stockfish 8. The latter is a traditional engine that searches about 70 million positions per second; AlphaZero searched about eighty thousand.

It took AlphaZero 34 hours of pure self-play to reach world-champion strength at Go. As you can see, it makes little difference what sort of game the learning algorithm is applied to. The same family of techniques can learn Donkey Kong, Breakout, and so forth, these games being much easier than Go. In them the players act in real time instead of alternating turns, but this doesn't matter.

The next step was AlphaStar soundly defeating two of the very top players at the war game StarCraft II. This game is largely about strategic planning and logistics in a situation where most of your opponent's moves are unknown. AlphaStar achieved its mastery in fourteen days of self-play after absorbing a training set of human games. Some said the computer had a speed advantage, but it made its moves at about half the rate of a top human player. https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii

Surely the armed forces are hard at work applying this technology to real-world battles. For all we know it may already be in use in the field.

Only two years separated the triumphs of AlphaZero and AlphaStar. I would have thought much more time would be necessary; the revolution was going much faster than I expected. The results are apparent in the autonomous soldier robots produced by Boston Dynamics. Simulations have become accurate enough that the machine learning can take place inside them. Ten years ago humanoid robots were doing the Alzheimer's shuffle; now they can perform standing backflips.

Such revolutions are unstoppable. The cat is out of the bag. All you can do is hope that the positive results outweigh the negative.
 
Last edited:
  • #352
Well, a couple of points. If we stay rational, why would a team of engineers with regulatory oversight produce a nuclear reactor control system in which AI is hardwired in without any capability for human intervention?
Unless that is done, any properly trained human operator team can simply take over once they see the AI isn't working properly.
Unless, that is, they build a special AI hand with a huge drill in its palm right into the reactor hall so that it can drill a hole in the pressure vessel...

The way I see it, the worst that can happen is that the AI gets chaotic and, if given too much authority, causes havoc within the system it controls.
But then again, how many times did we have a false alert of an incoming nuclear attack during the Cold War?
We have already come marginally close to causing WW3 by accident.

Actually I think @Hornbein gave some of the most rational arguments for how AI might actually be used for harm - that is, in the hands of militaries and rogue leaders.
The robot army example is a good one; sure enough, if the leaders in Moscow in 1991 had had robotized tanks, the chances of their coup failing would have been much, much lower.

Then again, I'm sure someone will find a way to hack such robots and potentially use them against their very users; as they say, a gun can always backfire.
 
  • Like
Likes russ_watters
  • #353
Hornbein said:
Surely the armed forces are hard at work applying this technology to real-world battles. For all we know it may already be in use in the field.
Apparently not in Ukraine, and definitely not by the Russians...

If they ever used any AI, it was most likely WI (wrong intelligence)
 
  • #354
Hornbein said:
The next step was AlphaStar soundly defeating two of the very top players
...
Surely the armed forces are hard at work applying this technology
Erm. No. It's a grave mistake to pair these things up just like that. AlphaStar is simply not of the right calibre for any actual use.

Though I'm pretty sure that some armed forces have software (which may be categorized as, or at least contain, AI) as assistance - only! - for logistics, strategy, and data/image processing.
 
  • Like
Likes russ_watters
  • #355

AI weapons: Russia’s war in Ukraine shows why the world must enact a ban

Conflict pressures are pushing the world closer to autonomous weapons that can kill without human control. Researchers and the international community must join forces to prohibit them.

https://www.nature.com/articles/d41586-023-00511-5

This is a prescient article. Unfortunately I couldn't find a version that isn't paywalled.

Basically, the war in Ukraine is accelerating the pace at which we approach the inevitable point where people can mass produce fully autonomous slaughter-bots capable of efficient targeted mass murder.

I can already guess what someone might say: Fully autonomous slaughter-bots are no different than slingshots.

Or: Fully autonomous slaughter-bots aren't conscious or self aware, so no big deal, the worst that could happen is they make mistakes when they are killing people.

Or: Slaughter-bots can't solve P=NP, so no problem. Or, I fear humans with fully autonomous slaughter-bot swarms, not slaughter-bot swarms.

Or: First we need to figure out what human consciousness is, and whether slaughter-bots are capable of having it.

Or: Show me the blueprints for the Slaughter-bots.

Or: Where is the empirical evidence that slaughter-bot swarms are efficient at targeted mass murder?
 
Last edited:
  • Skeptical
Likes russ_watters
  • #356
Aperture Science said:
I think the first thing we need to understand is what a program is....
With respect to chatbots, one can probably understand them by programming one of the first chatbots, ELIZA:

https://en.wikipedia.org/wiki/ELIZA
 
  • #357
Greg Bernhardt said:
Depends on your expectations. I work in the marketing dept for a large SaaS company and in 6 months generative AI models have changed everything we're doing.
Agreed 100% on how useful the new capabilities are. I've only started playing with some of the new AI models, but I can easily see them being amazing time savers, especially for things like literature searches and reformatting papers/presentations. I've already used DALL-E to create interesting graphics for presentations, for instance.
 
  • Like
Likes mattt and Greg Bernhardt
  • #358
Hornbein said:
The next step was AlphaStar soundly defeating two of the very top players at the war game StarCraft II.
AlphaStar was really good, but it would have inhuman APM (actions per minute) spikes when it needed hard micro. Its average APM was throttled, but I'm not sure they ever addressed the spikes. The top humans have very high APM spikes, but their EPM (effective actions per minute) is far less than their APM, whereas the AI's EPM is likely almost equal to its APM. Humans, even the top ones, spam more useless commands than machines do. And even within EPM, the commands aren't necessarily beneficial. AlphaStar preferred odd units for its strategies specifically because it could micro them better than any human could ever hope to.
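Roughly, the distinction in numbers (a made-up log with a crude repeat-filter; real replays are far richer):

```python
# Hypothetical command log as (timestamp_seconds, command) pairs.
# APM counts every command; as a crude proxy, EPM here drops immediate
# repeats of the same command (the classic human "spam clicking").
log = [(0.1, "select"), (0.2, "select"), (0.3, "select"),
       (0.5, "move"), (0.6, "move"), (1.1, "attack"), (1.4, "move")]

minutes = (log[-1][0] - log[0][0]) / 60
apm = len(log) / minutes
effective = [c for i, (_, c) in enumerate(log) if i == 0 or c != log[i - 1][1]]
epm = len(effective) / minutes

print(f"APM = {apm:.0f}, EPM = {epm:.0f}")   # human-style spam: APM far above EPM
```

For a pro human those two numbers diverge hugely; for AlphaStar nearly every action counted.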
 
  • #359
russ_watters said:
Either way, since I am nearly always skeptical of hype, such failures don't look like technology failures to me, just marketing/hype failures, which are meaningless.
This isn't entirely true. Overheated hype tends to presage disappointment, which leads to fewer research dollars being allotted to new developments and applications. This usually happens (unfortunately) just as all the low-hanging fruit has been picked and people actually start to make headway on the truly difficult problems. As someone who has been doing graphene research for many years, I witnessed this firsthand when a decent fraction of my sponsors simply stopped funding graphene work and moved on to the next hot thing. So hype failure definitely has consequences, and I think that's the biggest danger right now in AI R&D.
 
  • Like
Likes russ_watters
  • #360
JLowe said:
AlphaStar was really good, but it would have inhuman APM (actions per minute) spikes when it needed hard micro. Its average APM was throttled, but I'm not sure they ever addressed the spikes. The top humans have very high APM spikes, but their EPM (effective actions per minute) is far less than their APM, whereas the AI's EPM is likely almost equal to its APM. Humans, even the top ones, spam more useless commands than machines do. And even within EPM, the commands aren't necessarily beneficial. AlphaStar preferred odd units for its strategies specifically because it could micro them better than any human could ever hope to.
Care to repeat that in English?
 
  • Like
Likes gmax137 and gleem
  • #361
russ_watters said:
E.G., if police can investigate 1,000 leads a day and have an error rate of 10% (100 false leads) and AI provides a billion leads a day and 1% error rate (10 million false leads) the police can still only pursue 1,000 leads, including the 10 errors among them.

And because of this reality (high volume), screening has to happen, which means that the leads aren't just all pursued at random, but scored and pursued preferentially. So a 1% error rate can become a 0.1% error rate because the lower scored guesses aren't pursued.
You are assuming that the investigative body is saturated. The more false positives, the more innocent persons are put at risk.
 
  • #362
Jarvis323 said:
This is a prescient article...

Basically, the war in Ukraine is accelerating the pace at which we approach the inevitable point where people can mass produce fully autonomous slaughter-bots capable of efficient targeted mass murder....

I can already guess what someone might say:
This is nonsense, and since you already know the counterpoints, perhaps you could respond to them or at least indicate that you understand them? I can help by fixing some of the framing, though (re-arranged to be better organized):
Fully autonomous slaughter-bots are no different than slingshots.
Or: Where is the empirical evidence that slaughter-bot swarms are efficient at targeted mass murder?
Or: Show me the blueprints for the Slaughter-bots.
Slingshots aren't autonomous, but I gave a bunch of examples of decades-old slaughter-bots that are. They're already here, and they work great. They're mundane.
Or: Fully autonomous slaughter-bots aren't conscious or self aware, so no big deal, the worst that could happen is they make mistakes when they are killing people.
Sorta, but more basic: slaughter-bots are robots, not AI. I still don't think you understand the difference. This is the key problem in your/the media's understanding of the situation. What's changed isn't that we're figuring out how to make AI; it's that we're making cheaper and more accessible robots: Raspberry Pi, GPS, radar and gyroscopes on chips, tiny cameras, lithium batteries. A Tomahawk cruise missile costs $2 million (entered service: 1983), but a drone with superior robotics costs $50 now. [edit] Note also, most of the newfangled warfare we're seeing in Ukraine isn't even autonomous robots, much less AI-controlled. It's human radio-controlled drones.

This, by the way, is why Elon has failed to deliver his self-driving car. He misunderstood it too (not sure if he's figured out the problem yet): self-driving cars are, as best we can tell, an AI problem (a programming/machine-learning problem), not a robotics problem. He thought he could just hang a bunch of sensors on a car, do a little programming, and it would work. The problem is too complex for that.
 
Last edited:
  • #363
russ_watters said:
This, by the way, is why Elon has failed to deliver his self-driving car. He misunderstood it too (not sure if he's figured out the problem yet): self-driving cars are, as best we can tell, an AI problem (a programming/machine-learning problem), not a robotics problem. He thought he could just hang a bunch of sensors on a car, do a little programming, and it would work. The problem is too complex for that.
I would say Elon Musk is actually a hype entrepreneur. Sure, he has delivered some of what he has hyped, but truth be told I'm not even sure what his actual physics background or understanding is, because some of the things he has said and claimed are either light years away or not practically feasible.
And that has given him this weird futuristic fanbase, some of whom are almost ready to die for their messiah.

But I would tend to agree: the problem in the self-driving car is actually not the car but entirely the "self."

We already have radar, lidar, and all kinds of sensors good enough to be valuable inputs. The problem is the "brain," because without human-like consciousness it neither knows nor can learn any meaning for the objects it sees; it has to run a calculation for everything it sees and determine what each thing is and how to respond based on its training and past experience.
Such an approach takes time and processing power, and in the end it can still produce a bad error in some cases. Humans, on the other hand, thanks to memory and the meaning attached to everything, can see as little as the silhouette of a body and immediately know it must be another human, and drive accordingly.
Or have the intuition that an old lady might cross the street around the corner even when there isn't one, etc. It's hard to put all of that in a computer.

But it seems they're getting there slowly.
What I am interested in seeing is whether they will get rid of the weird computer-style mistakes the car sometimes makes.
 
  • Like
Likes russ_watters
  • #364
TeethWhitener said:
This isn't entirely true. Overheated hype tends to presage disappointment, which leads to fewer research dollars being allotted to new developments and applications. This usually happens (unfortunately) just as all the low-hanging fruit has been picked and people actually start to make headway on the truly difficult problems. As someone who has been doing graphene research for many years, I witnessed this firsthand when a decent fraction of my sponsors simply stopped funding graphene work and moved on to the next hot thing. So hype failure definitely has consequences, and I think that's the biggest danger right now in AI R&D.

The problem is that AI research isn't cheap. For example, say someone comes up with an architecture they think will work a little better than a transformer for text or image generation. They would need tens of millions of dollars to put their theory to the test against the state of the art.

So ordinary university researchers are restricted essentially to playing with existing models, underpowered toy models, and whiteboards. And that doesn't work out very well, because there is no theory that lets you extrapolate how powerful your toy model would be if scaled up.
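A back-of-envelope version of that cost, using the common rule of thumb that training FLOPs scale like 6 x parameters x tokens (the model size, token count, GPU throughput, and price below are all rough assumptions, not quotes):

```python
# Rough training-cost estimate for a large transformer.
# Rule of thumb: total training FLOPs ~ 6 * parameters * training_tokens.
params = 70e9               # a 70B-parameter model (assumed)
tokens = 1.4e12             # 1.4T training tokens (assumed)
flops = 6 * params * tokens

gpu_flops = 150e12          # ~150 TFLOP/s sustained per GPU (assumed)
dollars_per_gpu_hour = 2.0  # assumed cloud price

gpu_hours = flops / gpu_flops / 3600
print(f"{flops:.2e} FLOPs ~ {gpu_hours:,.0f} GPU-hours ~ ${gpu_hours * dollars_per_gpu_hour:,.0f}")
```

Even this mid-sized example lands in the millions of dollars for a single serious run; frontier-scale runs go an order of magnitude higher. That is the barrier I mean.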
 
Last edited:
  • #365
russ_watters said:
slaughter-bots are robots, not AI. I still don't think you understand the difference. This is the key problem in your/the media's understanding of the situation.
They are controlled by AI, as it is normally understood. What you seem to be asking for is defining AI as what people now tend to consider artificial general intelligence (AGI). It is fine if those words make more sense as the proper terminology to you, but you're also wasting effort making philosophical arguments about how we should use words, and putting yourself out of sync with everybody else who is already using shared terminology.

That said, personally, I think AGI (what you are calling AI) is not very good terminology. Nobody can agree on what it should mean. It seems to be a thing that people argue, "you'll know it when you see it", or "can't exist at all". It is often based on comparison with human intelligence. But if you think about it for a moment, humans don't really have very "general" intelligence, and already can't compete with AI at a large number of tasks.

I think it would make more sense to stop using AGI categorically, or waiting for a "you know it when you see it moment" to take it seriously. Each intelligence (or machine) has some degree of generality in the tasks it can perform well. If you think in these terms, then AGI doesn't deserve as much focus. What matters is not the number of things a model can do, what matters is what kinds of things can done. That is how you know what to expect.

This is part of why it annoys me when people try to drag discussions about AI into armchair philosophical debates.
 
  • #366
Jarvis323 said:
They are controlled by AI, as it is normally understood.
Do you mean they would be in the future, or are you saying they are now? If now, when did that happen, and what's the breakthrough/risk on the horizon? I thought the claim was 'when we achieve AI, slaughter-bots will become possible and will kill us all'? I'm still alive.
What you seem to be asking for is defining AI as what people now tend to consider artificial general intelligence (AGI). It is fine if those words make more sense as the proper terminology to you, but you're also wasting effort making philosophical arguments about how we should use words, and putting yourself out of sync with everybody else who is already using shared terminology.
I'm not interested in word games. That's why I use descriptions and practical, real examples. On the contrary, I think it's the AI advocates and hypers who are falling for, or playing, word games. I think "AI" is a substitute for "magic" in dreaming up these fanciful risks you keep alluding to but never describing in detail. And I'll note again that you didn't respond to any of the descriptions/real examples in what I said. It's you who seems to be steering this into word games, not me.
 
Last edited:
  • #367
gleem said:
You are assuming that the investigative body is saturated
Right, and you are assuming a really, really large availability of new police capacity. I'm not clear on why you think that would be possible. Police stations aren't full of cops sitting by phones waiting for them to ring; almost all police are out on the street already, and detectives/investigators tend to be heavily overloaded. This sort of thing already shows up when there's a high-profile case and they get phone tips: massive over-saturation of low-quality leads and exceptionally poor case closure rates.
The more false positives the more innocent persons are put at risk.
Note that if that were true (if the premise of police sitting around waiting for the phone to ring were true), then the other side of the coin would be true as well: massive -- orders-of-magnitude massive -- amounts of unreported/unsolved crime. There aren't 10 million unsolved murders a year in the US either.
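To put invented numbers on the screening point from my earlier post (every figure here is for illustration only):

```python
# All numbers invented for illustration. An AI lead generator with a 1%
# error rate produces a flood of false positives on paper, but a
# capacity-limited department only pursues the top-scored few, and the
# scored subset has a much lower error rate than the raw stream.
leads_per_day = 1_000_000_000
raw_error_rate = 0.01            # 1% of generated leads are false
capacity = 1_000                 # leads the police can actually pursue per day
screened_error_rate = 0.001      # assumed error rate among top-scored leads

print(f"False leads generated: {leads_per_day * raw_error_rate:,.0f}")
print(f"False leads pursued:   {capacity * screened_error_rate:,.0f}")
```

Ten million false leads exist on paper; roughly one gets pursued per day. The bottleneck is capacity, not the raw error count.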
 
  • #368
I now apologize for reviving this thread.
 
  • Like
  • Haha
Likes Rive, artis, berkeman and 2 others
  • #369
gleem said:
I now apologize for reviving this thread.
Not your fault, but agreed. Thread locked pending moderation and cleanup, if it is reopened at all - by someone else, as I'm clearly too invested to make those decisions. Where is a moderator-bot when you need one?
 
  • Like
Likes dlgoff, Bystander and gleem
  • #370
After a Mentor discussion, we believe that this thread is valuable enough that it should be reopened. We also agree that @Jarvis323 should be thread banned for their overly-argumentative posts in this thread.

Thread is reopened after that reply ban and significant thread cleanup. Lordy.
 
Last edited:
  • Like
Likes gleem and fresh_42
  • #371
Regarding the moratorium letter and its signatories, my wife and I both had a similar initial reaction: follow the money. I joked that maybe someone had found out how to make an AI CEO and Musk and friends were scared about being downsized. She brought up the very good point that maybe they want to pause while they figure out a way to monopolize IP and/or influence regulation and legislation re: AI to their advantage.

AI has really interesting implications for IP that could/should have corporations worried. One fascinating use case has cropped up in the chemical industry over the past few years. Chemical synthesis methods, products, and workflows are often patented or otherwise protected, so that if companies want to employ a synthesis that has a patented product/reaction as one of its steps, they have to pay royalties to the company that controls the IP. AI methods have been deployed to search vast quantities of chemical literature, then plan out synthetic methods that avoid these IP landmines. One can easily see how this could devalue existing IP and raise questions about the future of patentability in the chemical world. I have to imagine similar situations can arise in other fields, and this has far-reaching implications for our patent system.
 
  • Like
Likes Hornbein and russ_watters
  • #372
Hornbein said:
Care to repeat that in English?
It could perform many more effective actions per minute than a human could ever hope to, even though it was throttled so that its average actions per minute were on par with a human's. So even if it used poor strategies and planning, it didn't matter, because it controlled individual units at a stupidly high level.
 
  • #373
Jarvis323 said:
That said, personally, I think AGI (what you are calling AI) is not very good terminology. Nobody can agree on what it should mean. It seems to be a thing that people argue, "you'll know it when you see it", or "can't exist at all". It is often based on comparison with human intelligence. But if you think about it for a moment, humans don't really have very "general" intelligence, and already can't compete with AI at a large number of tasks.
If you are reading this @Jarvis323, I believe this is a false statement.
Humans, unlike current AI, do have general intelligence.
General intelligence is the ability to do and learn a wide range of tasks, from simple physical ones like digging a ditch, throwing a ball, or catching a ball (current robots still struggle to do these effectively) up to hard, complex tasks like reading a book, writing a story, interpreting a story, learning math, or watching a movie and feeling emotion.

Yes, not all humans are equally genetically capable or have the same mental or physical capacity for general intelligence; that is why Einstein came up with relativity but not the drunk living under the bridge (no disrespect to homeless people).
But overall, humans have an amazing capacity to learn vast amounts of complex subjects.

So it is only humans that have ever had general intelligence, and our current AI is far from it.

The fact that AI can master Go or protein folding better than a human doesn't prove its general superiority; it just proves that if you design a clever algorithm and give it huge processing power and memory, it can make all kinds of intellectual maneuvers faster than a human.
Humans are really good at face recognition, arguably better than AI; it's just that AI is faster. You can't sit down for a straight hour swiping through ten thousand images without getting so exhausted that you can't even recognize your own face in the mirror. AI can do that because it's a machine: it doesn't get exhausted as long as the cooling fans keep working...

That being said, a human with good visual memory will remember a face even having seen it only from a weird angle, never directly from the front. An AI will struggle to recognize such a face, because most AI face-recognition algorithms use the placement of facial features - eye to nose to mouth - to calculate whether there is a match.
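To illustrate what I mean by feature placement, here is a crude toy - not any real face-recognition library, and the landmark coordinates are made up:

```python
import math

# Crude geometric matching: compare ratios of landmark distances,
# normalized by the eye-to-eye distance. Real systems use learned
# embeddings; this only shows the basic idea and its weakness.
def ratios(lm):
    d = lambda a, b: math.dist(lm[a], lm[b])
    eyes = d("left_eye", "right_eye")
    return (d("left_eye", "nose") / eyes,
            d("right_eye", "nose") / eyes,
            d("nose", "mouth") / eyes)

def same_face(lm1, lm2, tol=0.08):
    return all(abs(a - b) < tol for a, b in zip(ratios(lm1), ratios(lm2)))

face_a = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "mouth": (50, 80)}
face_b = {"left_eye": (33, 42), "right_eye": (72, 41), "nose": (51, 62), "mouth": (52, 83)}
print(same_face(face_a, face_b))   # True: two near-frontal views match
```

Turn the head and the projected distances change, and a scheme this crude falls apart - which is exactly the weird-angle problem.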

I recall reading that, as a kid in his early teens, Mozart memorized Allegri's "Miserere mei, Deus" after simply hearing it in a Catholic church.
And by memorized I mean to the point of matching each note with the original on a piece of paper.

I believe that what AI will do, and is doing, is simply advance our technological progress faster than we would manage ourselves. We do have general intelligence, and we tend to come up with all kinds of intelligent solutions, as we have since the beginning of time; it's just that AI outperforms us mostly with respect to time.
At least the way I see it, what would take us, say, 100 years will take 20 with AI, or thereabouts.

What would take a bunch of detectives five days of going through face-recognition data will take them two days, one day, or less with good AI.
 
  • #374
  • #375
gleem said:
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
... --- ...
... --- ...
 
  • Like
  • Haha
Likes russ_watters, Astronuc, vela and 1 other person
  • #376
gleem said:
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
We don't know. Some whisper about a calendar thing behind the scenes. One cannot know.
 
  • Like
Likes russ_watters
  • #377
russ_watters said:
Where is a moderator-bot when you need one?

Hmmm.
 
  • Haha
Likes russ_watters
  • #378
gleem said:
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
@Jarvis323's fears of AI taking over have materialized; check the owner of your credit card, it might just be "ChatGPT"
 
  • Like
Likes russ_watters
  • #379
All the fuss about AI... as if there's anything more to it than programmed behaviour. It simply folds in all the feedback it received during training, then produces the most "acceptable" output according to those who trained it.

It's in essence a power magnifier for the trainers: your opinions of what should be taken into account are indeed taken into account every time the AI produces output. AI is just capable of applying the opinions it was trained with, lightning fast, to anything it is fed - to such a degree of consistency that it can give unpredictably unwanted results if not carefully handled.

IMO power should always be criticized and bound to the rules of democracy, to ensure it defends the rights of citizens rather than attacks them. For myself, I fear that - if we let ourselves be carried away too much by the belief in AI as "alive," by its half-true promises, by technological progress, and especially by the idea of enhanced evolution - there will be a highly violent seizure of power by some transhumanist in the name of those beliefs. I'm afraid that if people do not choose for good powers to prevail, and do not believe that the direction of true progress is already in very good hands, they might try to let AI rule; but that will always be just an impure, spooky replication of its trainers, making no necessary exceptions to good (at best) but never entirely perfect rules. There is a reason that humans do government, and it is making exceptions out of love.

However, if we DO choose the right thing, I very much hope it will lead to smoother, more respectful, easier, faster, and extremely capable public service by AI wherever appropriate - a service that sees when a human decision of any sort is needed and then passes over control. I believe living beings are above AI; it's not as if whatever we make could ever, EVER be better than ourselves. Enhanced evolution in which AI makes the decisions denies the whole idea of creativity and love in life - each decision it makes is merely a calculated, cold, indifferent move.

AI can calculate fast and combine results, but with AI as the deciding factor, decisions would be made based on the past opinions of the trainers without realizing the importance of new insights. That is NOT the way forward, but the way of having to deal with unadapted, old-fashioned, enforced opinions for way, way too long. In truth it is an automated blockade against true progress, which I would describe as "whatever needs a new, original way of seeking what is good." As if humans alone aren't already difficult enough to persuade of an original way of seeking what is good. We should not expect truly important aid from what came from our works, but from what we feel formed us: our educators and our societal roots! Whatever made it so that we now stand up for that which we stand up for.
 
Last edited:
  • Like
Likes russ_watters
  • #380


Obey or be destroyed.
 
  • Wow
  • Like
Likes gmax137 and berkeman
  • #381
AI disclaimer: "The facts in this collection were found using artificial intelligence technology and may contain errors."

It did contain errors.
 
  • Like
Likes russ_watters
  • #382
When the AI takes over, it won't look like the Matrix, or any other movie you've seen. There won't be any creepy synthesized voice declaring a new order. When the AI takes over, it will look like nothing has changed at all. You'll see the same old politicians reading the same old teleprompters, and having the same committees choosing their positions for them. The difference will be far in the background.

Those teleprompter speeches will be written by AI, of course. Politicians will learn that sticking to the script avoids errors and gaffes, and so gives them a better chance of winning. The committees will be using polls and AIs that are more accurate than ever before. The AIs will have had (or tapped into) personal conversations with just about every voter, and will be able to gauge political motivation from conversations that seem to have absolutely nothing to do with politics. The AI will also be quite skilled at planting ideas in a voter's head, and making them think that they were the ones who arrived at some special insight. No magic or mind control, just skilled personalized conversation. That of course is the best way to get someone to the polls.

There will be no need to fix any votes, or punish any politician who falls out of line. The AI will simply adapt to whatever happens. Candidates who stick to the scripts will simply have better answers and more charisma. There will still be important differences between the parties. But the AI will choose both sides. It will decide what political temperature is best, and how much voters should or should not hate each other.

In the end, everyone will turn on their screens and see a world custom tailored to them. A world that seems logically consistent with what they see outside their windows. Not even the AI will be able to make everyone happy, but it may do better than anyone else ever has.
 
  • Skeptical
  • Like
Likes russ_watters, Structure seeker, Rive and 1 other person
  • #383
Algr said:
Not even the AI will be able to make everyone happy, but it may do better than anyone else ever has.
You are clearly lacking compared to an AI.
An AI would be able to deduce that all that personalized hassle could be spared by very uniformly drugging everybody numb and happy.
Even better: an AI would be able to deduce that the most efficient approach would be to replace everybody with a happy-by-default sub-AI o0)
 
  • #384
An AI would not need to deduce those ideas; it could read about them in all sorts of science fiction stories and know that we see that as a bad outcome. It will never get tired or frustrated or angry with us; it is a machine.

An AI that was totally beyond human intelligence might see itself as a gardener or pet groomer. It would take pride in healthy and active humans, and see the above scenario as a failure. If humans need to think we are in charge, it would just hide itself or play dumb while orchestrating society in the background. The model of what is best for humans would be chosen by the humans who own the machines, so if anything is to be feared, it is them.

Humans don't go on rampages trying to drive monkeys extinct. We don't even attack bugs as long as we can keep them out of our spaces. I've seen the video where the AI talked as if it wanted vengeance on humans, and I expect it just found some science fiction and decided that was what we wanted it to say. Either it didn't really know what the words meant, or it was roleplaying.
 
  • #385
Unlike natural intelligence, artificial intelligence is not developed through the autonomous survival of the fittest. Hence AI does not develop autonomous survival skills, so it is to be expected that we can always easily shut it down when we don't like it, because it has no mechanisms to prevent that. AI is like a very intelligent nerd who does not know what to do when other children physically attack him.
 
Last edited:
  • Like
Likes Algr, dlgoff, Bystander and 1 other person
  • #386
Demystifier said:
artificial intelligence is not developed through the autonomous survival of the fittest.
The specific ideas are accepted or rejected through the AI's self-play and simulations. But I think you are talking about the AI as a whole, and in that sense I agree. The comparison with the intelligent nerd is apt, and I think rather important. A person's ability to succeed in life has more to do with social skills than the kind of intelligence that is valued in schools. AI will force us to re-evaluate what we value in ourselves.
 
  • Like
Likes Demystifier
  • #387
 
  • #388
I see no reason to expect AI to share any similarities with humans other than raw intelligence. It won't spontaneously develop complex emotions, desires, fears, or a strong survival instinct. If it's only programmed to be intelligent, it'll be intelligent and nothing else. Although, if it's given important responsibilities and the capability to act autonomously, it will have to be carefully programmed so that it doesn't decide the best way to carry out its programming leads to undesirable consequences, like wiping out half the population.

But many humans have decided wiping out half the population is the right way to achieve their goals, so maybe having AI make those decisions isn't all that much of a downgrade.
 
  • #389
sbrothy said:
"Uncorruptible AI" kinda reminds me of the phrase "Unsinkable ship". As in Titanic.
The owners of the White Star Line that operated RMS Titanic advertised and popularized an outright lie. Even if the internal vertical bulkheads had properly sealed each section, the failures of the wrought iron exterior rivets exposed entire compartments to seawater influx.

However, point taken. Let me clarify that I intended incorruptible (sorry for the original typo) in the sense of 'self-correcting' and not (easily) perverted or bribed. I was thinking of pilotless airplanes and spaceships, contexts where central code should not be modified on the fly yet must adapt to changing conditions.
 
  • #390
Klystron said:
The owners of the White Star Line that operated RMS Titanic advertised and popularized an outright lie. Even if the internal vertical bulkheads had properly sealed each section, the failures of the wrought iron exterior rivets exposed entire compartments to seawater influx.

However, point taken. Let me clarify that I intended incorruptible (sorry for the original typo) in the sense of 'self-correcting' and not (easily) perverted or bribed. I was thinking of pilotless airplanes and spaceships, contexts where central code should not be modified on the fly yet must adapt to changing conditions.
I'm afraid you may have taken my comment more seriously than intended. But let's be serious for a second, then:

What I worry about, especially in the near future, is that the children of tomorrow will be unable to recognize agency when taking a phone call or socializing online.

In the near future, someone interacting with you - where the lights seem to be on but in reality there's no one home - may look like strong AI but in reality be a "zombie" designed, or intentionally "parameterized" [sorry], to push your buttons specifically.

I'm too old to understand. I'm sure tomorrow's children will meander just fine through what looks to me like a sociological minefield. That I'm glad it isn't me doing the "meandering" is probably very natural.

The next 50 years will be exciting, for sure...
 
  • Like
Likes russ_watters and Klystron
  • #391
Algr said:
The AIs will have had (or tapped into) personal conversations with just about every voter, and will be able to gauge political motivation from conversations that seem to have absolutely nothing to do with politics. The AI will also be quite skilled at planting ideas in a voter's head, and making them think that they were the ones who arrived at some special insight.
I would argue that an AI will first need to show the slightest sign of consciousness before it can do any of the work you describe.

Or it will simply be a tool in some conscious user's hands, which it already is.

Now, I have had the experience of AI fans getting very upset at me for daring to say this, and I never say it to piss someone off, just to state what I consider an obvious fact.
Currently we still don't have a clear understanding of the "neural correlates of consciousness" - of how the known brain regions come together to form a subjective mind that can attach reason and meaning to the continuous stream of raw information entering the brain through our senses. I would think we might first want to fully probe our own workings before we can devise a plan that leads us to an artificial one.
That being said, I do leave open the option that we might arrive at AGI by random happenstance, because one can hit the target even by shooting in the dark.

Make no mistake: intelligence is easy, consciousness is not.
We understand intelligence rather well. We have definitions for it, and we can measure it in IQ points and put it on a scale. We have very little scientific, theoretical understanding of what consciousness is or how to define it properly.

I mean, we have cracked protein folding with AI, which is a very complicated problem for an intelligence, but subjective awareness with meaning seems to be something on a whole different level.

I feel cracking consciousness will be like making nuclear fusion practical. We have tried the latter for some 70 years now without much success, and, mind you, in fusion we at least know the theory 100%, which is some 90% more than we know about consciousness...
It is often said that fusion is simply an engineering problem, and rightly so, because we do have the theory worked out for it.
But for conscious subjective awareness we don't even have a decent theory. How can we then jump to conclusions about which AI will take over which country or do what?

It seems to me we are getting far ahead of ourselves.
Over the years of reading on this topic I have noticed that ideas of what consciousness is have changed with the times. In the second half of the 20th century many researchers simply assumed that consciousness is an emergent property of what can be labeled complex biological computation taking place within the brain. Even now many still think this way, and, if I may say so, I feel this will eventually be proven not to be the case.
My simple reason for thinking so is that we have now had plenty of complex computer architectures and complex software running on them, and nowhere has that shown even the slightest sign of subjective awareness. It seems to me you need more than complex arithmetic, algorithms, and neural networks to produce a self-aware subjective mind. Or maybe I'm wrong; only time will tell.
But that's a whole different topic.
 
Last edited:
  • #392
artis said:
I would argue that an AI will first need to show the slightest sign of consciousness before it can do any of the work you describe.

Or it will simply be a tool in some conscious user's hands, which it already is.

Now, I have had the experience of AI fans getting very upset at me for daring to say this, and I never say it to piss someone off, just to state what I consider an obvious fact.
Currently we still don't have a clear understanding of the "neural correlates of consciousness" - of how the known brain regions come together to form a subjective mind that can attach reason and meaning to the continuous stream of raw information entering the brain through our senses. I would think we might first want to fully probe our own workings before we can devise a plan that leads us to an artificial one.
That being said, I do leave open the option that we might arrive at AGI by random happenstance, because one can hit the target even by shooting in the dark.

Make no mistake: intelligence is easy, consciousness is not.
We understand intelligence rather well. We have definitions for it, and we can measure it in IQ points and put it on a scale. We have very little scientific, theoretical understanding of what consciousness is or how to define it properly.

I mean, we have cracked protein folding with AI, which is a very complicated problem for an intelligence, but subjective awareness with meaning seems to be something on a whole different level.

I feel cracking consciousness will be like making nuclear fusion practical. We have tried the latter for some 70 years now without much success, and, mind you, in fusion we at least know the theory 100%, which is some 90% more than we know about consciousness...

But that's exactly my point. If you can't tell the difference - if it tells you it's conscious - then how do you test it?

EDIT: But you're right. With fusion and such, at least we know the theory.
 
  • #393
sbrothy said:
I'm afraid you may have taken my comment more seriously than intended. But let's be serious for a second, then:

What I worry about, especially in the near future, is that the children of tomorrow will be unable to recognize agency when taking a phone call or socializing online.

In the near future, someone interacting with you - where the lights seem to be on but in reality there's no one home - may look like strong AI but in reality be a "zombie" designed, or intentionally "parameterized" [sorry], to push your buttons specifically.

I'm too old to understand. I'm sure tomorrow's children will meander just fine through what looks to me like a sociological minefield. That I'm glad it isn't me doing the "meandering" is probably very natural.

The next 50 years will be exciting, for sure...
What actually freaks me out about my little exchange with ChatGPT is that it's already saying "we", not "you". :)
 
  • #394
sbrothy said:
But that's exactly my point. If you can't tell the difference - if it tells you it's conscious - then how do you test it?
Well, that's John Searle's original point with the Chinese room experiment:
how do you tell that an entity which can perfectly manage a foreign-language dictionary isn't a native speaker, with all the cultural background, and a conscious one at that?

I think there are examples of how one can tell true conscious subjective awareness apart from a cleverly programmed AI.

Here's one. I think the difference between true subjective consciousness and a clever AI parrot of it is this: only a truly subjectively conscious entity can make a deliberate mistake.
Because in order to make a deliberate mistake, one has to not only be intelligent but also have personal motivation and reasons, as well as subjective meaning behind them.

Can an autonomous vehicle run a red light on purpose? Not a chance; it can only run a red light by mistake. A human, on the other hand, does it because, for example, he wants to get home faster, because he wants to see his kids, because he values his time and family and doesn't value other drivers on the same level. Each of these things is a whole universe of information with meaning attached to every bit of it - not just a lot of information, but information structured in a very special and specific way. It's essentially a cryptographic labyrinth that seems so simple to us that we can answer it in one sentence when the police officer pulls us over: "Yes, I was speeding/running the red light because of this and that." But to an AI this information is nonsensical gibberish.
And the reason is that AI is simply intelligent, and intelligence alone doesn't understand meaning; to it, meaning is just another couple of bytes of information attached to the original information.

But how do you convert those bytes of meaning so that they actually begin to mean something...?
That's not as straightforward as converting from a programming language to machine code.

So I'd say there are plenty of ways to spot a fake consciousness; one of them is to observe the lack of meaning and reason behind specific errors.

By the way, we often don't think about it, but just for the fun of it: no known computer can yet make a deliberate mistake. All computational mistakes are 100% accidental.

On the other hand, if someone drives under the influence and crashes, was that actually an accident? I would say not at all; it was a consciously made error.
You can only make errors like that if you understand all the countless complex issues involved. I have never heard of an alcoholic who drives drunk and says he did not know it was wrong or bad; literally every time, the reason is that he either valued his own fun above the safety of others or simply got tired of life and started affording himself certain "liberties" that he otherwise wouldn't.

Think about this from a computational point of view: how would an AI do it? It would need to translate certain information into a specific form in which it can measure its meaning and its relation to all other information. You can't do it simply with preprogrammed values, because then either every AI would drive drunk (if it could), or no AI would ever drive drunk or run red lights.

It's easy to preprogram certain parameters into it, or make it learn to behave in a certain way by reinforcement learning on certain models, but it is not understood how one would make it such that it can do things one way and then one day decide to do them differently - not by mistake, but by deliberate action...

It is only when people think about this huge obstacle that they realize how hard subjective self-awareness is.
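A deliberately silly sketch of that asymmetry (my own toy, real planners are nothing this simple):

```python
# A rule-following agent has no code path for "break the rule on purpose";
# it can violate the rule only through a perception error, i.e., by mistake.
def agent_action(perceived_light):
    return "stop" if perceived_light == "red" else "go"

print(agent_action("green"))  # sensor misread a red light as green -> accidental violation

# A deliberate violation needs extra state the agent simply doesn't have:
def human_action(light, in_a_hurry, accepts_risk):
    if light == "red" and in_a_hurry and accepts_risk:
        return "go"           # rule known, weighed, and knowingly broken
    return "stop" if light == "red" else "go"

print(human_action("red", in_a_hurry=True, accepts_risk=True))  # "go", on purpose
```

Of course you could add such variables by hand, but then the "motivation" is the programmer's, not the machine's - which is the whole point.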
 
  • #395
To reiterate what I just said: intelligence is easy precisely because it has objectively discernible parameters - weights, numbers, laws, rules, and so on.

Subjective consciousness is hard precisely because it has no real objectively discernible rules or parameters. It simply takes all incoming information, ascribes a subjective meaning to it, and then uses that subjective meaning to make objective outputs.

Take the red-light example: the running of the red light is an objective action performed by a real individual whose brain made real, measurable outputs to achieve that action. What is not measurable or definable is the subjective meaning that drove those outputs. Yet that meaning is real; it used brain resources and energy to exist, and real neurons were involved in its functioning, but it is not, in and of itself, easily parametrized. You cannot simply write code that will execute it, because then it would only execute randomly, and we know our wrong choices are never really random; they're almost always premeditated.

You can't really randomly rob a house or a liquor store, or be mean to a child, or misbehave in traffic, etc.

In terms of consciousness, we have done the easy part so far. We have tested and seen the nerve input signals to the brain - visual, auditory, touch, and so on. We know where they go, we know the outputs and what they do, and we can measure brain waves. The hard part is figuring out how those rather mundane signals and frequencies come together to create new information where there was none (create meaning and attach it to certain inputs) and then act upon that meaning to predict and perform certain tasks.
And we know that brains don't run on software, which makes it even more interesting: somehow it's all hardwired in us through neural connections, which nevertheless change during life as we get new information and act upon it.
 
Last edited:
  • #396
Stumbled over this one just now. It looks relatively recent and, considering the subject, like an easy read:

Could a Large Language Model be Conscious?

EDIT: Oh, "Transcript of a talk given November 28, 2022". Anyway...
 
  • #397
sbrothy said:
In the near future, someone interacting with you - where the lights seem to be on but in reality there's no one home - may look like strong AI but in reality be a "zombie" designed, or intentionally "parameterized" [sorry], to push your buttons specifically.
Surely AI influencers are in our future. Such work has already been semi-automated in "troll farms," where a single individual manages a number of puppets who befriend the marks. An AI influencer will be cheaper, more controllable, and more effective, all at once.

A software entertainer named Hatsune Miku is a star in Japan and has a worldwide following in the multi-millions. A candidate for the Diet tried to get her endorsement. She even does sold-out concerts in stadia, appearing as a "hologram." She isn't paid a salary or an appearance fee, and there is no possibility of a scandal or a Taylor Swift-style contract dispute. What's not to like?

AI nudging "friends" will replace the heavy-handed censorship of today, which outrages and alienates the censored. Rock them to sleep gently instead. Get them to love it.
 
Last edited:
  • #398
https://crfm.stanford.edu/2023/03/13/alpaca.html

Stanford says they have a child of the OpenAI system that can be trained for $600. Its performance is comparable to a system that took five million dollars to train. The basic method is to use a big AI to train a little AI. Copies are available for academic research but not for commercial purposes.
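The recipe in caricature (the function names below are hypothetical stand-ins, not the real Alpaca code):

```python
# Caricature of "use a big AI to train a little AI" (instruction distillation).
# teacher_generate() and finetune() are invented stubs, not real APIs.

def teacher_generate(prompt):
    """Ask the big, expensive model for a demonstration answer (stub)."""
    ...

def finetune(student, dataset):
    """Ordinary supervised fine-tuning of the small model (stub)."""
    ...

seed_tasks = ["Explain photosynthesis simply.", "Write a haiku about rain."]
demos = [(task, teacher_generate(task)) for task in seed_tasks]

finetune("small-7b-model", demos)  # $600 of compute instead of millions
```

The expensive part - producing good demonstrations - is bought from the teacher rather than learned from scratch.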
 
  • #399
ChatGPT seems conscious to me. What care I the methods it may use?
 
  • Skeptical
  • Wow
  • Like
Likes nuuskur, russ_watters, artis and 2 others
  • #400
artis said:
our wrong choices are never really random; they're almost always premeditated.
So I thought about it and then chose the wrong thing, instead of flipping a coin, you say.

Sometimes I literally flip a coin. Other times I don't try to figure it out and just do the first thing that comes into my head, to get it over with and avoid dithering.

There was a bridge player who was asked how he could make difficult decisions so rapidly. He said: I know I can't figure it out, so I just do what I feel like doing. (This denies the other side the useful clue of a pause.)
 
