Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI

Summary:
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #301
PeterDonis said:
One could argue that ChatGPT does have at least one characteristic of human intelligence (if we allow that word to be used for this characteristic by courtesy), namely, making confident, authoritative-sounding pronouncements that are false.

But I don't think that's the kind of "AI" that is being described as inspiring fear in this discussion.
Yes, that's largely my point. I think ChatGPT is being called "AI" largely because it can construct grammatically correct sentences in English; otherwise it is simply a multi-stage search engine. That seems like a really low bar to me, and not one I think proponents of AI really intend to imply.
 
  • #302
Jarvis323 said:
Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.

AI is also very, very advanced in understanding human behavior/psychology. Making neural networks understand human behavior, and training them to manipulate us, is by far the biggest effort going in the AI game. This is one of the biggest current threats, IMO.

gleem said:
So far most AI is what I call "AI in a bottle". We uncork the bottle to see what is inside. The AI "agents", as some are called, are asked questions and provide answers based on the relevance of the question to the words and phrases of a language. This is only one aspect of the many that true intelligence has. AI as we currently experience it has no contact with the outside world other than being turned on to respond to some question.

However, researchers are giving AI more intelligent functionality. Giving it access to, or the ability to interact with, the outside world without any prompts may be the beginning of what we might fear.

A few stanzas from the song "Genie in a Bottle" might depict our fascination with, and caution about, AI, sans the sexual innuendo. Maybe there is some reason to think AI might be a D'Jinn...
I'm a genie in a bottle (I'm a genie in a bottle)
You got to rub me the right way
-Christina Aguilera
OMG.

Ok, so the basic flaw in War Games is the fact that nuclear weapons aren't connected to the internet. Terminator starts with the same thing, but then they move on to killer robots, which we can sorta do now, but nothing as impressive as the T-1000. But they aren't AI, so...

...well, for that matter, neither is the computer in War Games.

I'm reminded of a Far Side cartoon from the '80s with two married amoeba apparently in an argument. One says to the other: "Stimulus-response, stimulus-response -- don't you ever think?" Does AI? Do we?
Jarvis said:
As far as I know, AI is currently very good at, and either already exceeds or will probably soon exceed humans (in a technical sense) in, language skills, music, art, and the understanding and synthesis of images. In these areas, it is easy to make AI advance further just by throwing more and better data and massive amounts of compute time into its learning/training.
In my opinion none of that qualifies as AI. Any device that augments human computation ability, dating back to the abacus, does that. With the possible exception of art; but not being a big art person, I find that a tough thing to understand/judge. Music, definitely not, though. Note: in my youth I was a "technically" good trumpet player who, to the untrained ear and in certain circumstances, could be mistaken for a good musician.
I am not aware of an ability for AI to do independent fundamental research in mathematics, or that type of thing. But that is something we shouldn't be surprised to see fairly soon, IMO. I think this because AI advances at a high rate, and we are now seeing leaps in natural language, which I think is a stepping stone to mathematics. And Google has an AI now that can compete at an average level in coding competitions.
The math thing is interesting; if math is pure logic, then perhaps AI should be able to do all of it, like running every possible move on a chess board?
 
  • #303
For all my criticism, I do have what I think is a plausible destructive AI scenario to present:

HackBot. It's a hacker bot with capabilities that exceed the best human hacker's and a speed faster than all of them combined. It can be set loose to steal all the gold in Ft Knox... or rather, all the money in Citibank. Thoughts?

Possible pitfalls:
  • Is the money in Citibank accessible from the internet?
  • Can "AI" break any encryption or external factor authentication?
Maybe it could start by stealing something smaller/softer, like Bitcoin?
 
  • #304
Melbourne Guy said:
I can't recall if the Boston Dynamics-looking, assault-rifle-toting robot has been mentioned, but it's genuinely scary!
gleem said:
It's not clear how much AI capability one can put in a robot this small. I think it would need at least the capability of Tesla's auto-drive system to be useful. Although, on second thought, given IR vision, a human target would stand out dramatically at night, making target ID less of a problem.
AIM-9 Sidewinder: introduced, 1956.
 
  • #305
Oldman too said:
Most construction sites that I've worked on have a standing maxim: "No one is irreplaceable" (this applies to more situations than just construction). As an afterthought, I'll bet the white hats will be the first to go if AI takes over.

Oldman too said:
On any typical construction site, "white hats" denotes a foreman or boss. Besides the obligatory white hard hat, they can also be identified by the clipboard in hand and a cell phone constantly attached to one ear or the other. They are essential on a job site; however, they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.

Melbourne Guy said:
I agree, @Oldman too. If you are investing in ML / AI to replace labour, picking off the highest-paid, hardest-to-replace roles seems economically advantageous to the creator and the buyer of the system.

gleem said:
If you want to save some serious money, replace the CEO, COO, CFO, CIO, CTO... well, maybe not the CTO, since he might be the one doing the replacing. After all, they run the company through the computer system, reading and writing reports and holding meetings, all of which AI is optimally set up to do.
That's backwards, both from a technical and economic standpoint:

1. Higher-end employees tend to be harder to replace. Their jobs are complex; that's part and parcel of why they are paid more. They'll be the last for AI to replace.

2. CEOs get a lot of superficial flak these days about being overpaid (deserved or not), but they are not as expensive as most people appear to believe. CEOs are of course the highest-paid employees in a company, but they are not the most expensive, because there's only one of them per company. The most expensive employees are generally whichever type there is the largest number of, and most types of employees are far more prevalent, and in aggregate far more expensive, than CEOs.

Consider, for example, order-takers at McDonald's vs. the CEO. The CEO makes about $20 million a year, and a full-time-equivalent order-taker about $20,000. Say there's an average of two order-takers working at a time (6 full-time equivalents) across 38,000 stores: 228,000 full-time-equivalent employees taking orders. At an average of $10/hr, that's about $5 billion a year. McDonald's is currently replacing these employees with giant iPads, and if it can replace half of them, that would save 100x more money than replacing the CEO would.
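As a rough sanity check on that arithmetic, here is a minimal sketch in Python, using the same ballpark figures as above (estimates, not audited numbers):

```python
# Back-of-the-envelope comparison using the rough figures from the post
# above (all values are estimates, not audited numbers).
stores = 38_000            # approximate McDonald's store count
fte_per_store = 6          # ~2 order-takers on duty -> ~6 FTEs per store
wage_per_year = 20_000     # ~$10/hr, full-time equivalent
ceo_pay = 20_000_000       # ~$20M per year

order_taker_cost = stores * fte_per_store * wage_per_year
savings_if_half_replaced = order_taker_cost / 2

print(f"Total order-taker cost: ${order_taker_cost / 1e9:.2f}B/yr")
print(f"Replace half of them:   ${savings_if_half_replaced / 1e9:.2f}B/yr saved")
print(f"Vs. replacing the CEO:  {savings_if_half_replaced / ceo_pay:.0f}x more")
```

Run as written, this gives about $4.6B/yr in total order-taker cost and roughly a 114x ratio, consistent with the "100x" claim above.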

Also, these giant iPads aren't AI. Order-taking isn't complex enough to require it.
 
  • #306
I'm sorry if this is too critical of some of the participants, please excuse me, but this discussion is, I believe, the biggest "nothingburger" currently on this forum.

Without going into much detail, let's just say I have had rather extensive chats with a couple of AI researchers I know; one of them, an older retired guy, is more of a neurologist and is basically interested in anything to do with human consciousness.

Within this community there is a great wish and belief that they (well, at least some among them) will eventually crack the secrets of human consciousness and then be able to induce the same total complex behavior in software running on hardware, essentially simulating consciousness artificially.
This is the great AGI moment: artificial general intelligence.

Just to be perfectly clear: even among the fanatics, we are actually nowhere near that point, nor do we know whether we ever will be, for a variety of complicated reasons I will not go into right now. So what is current AI?
Current AI is at best a database that can manage itself, code that can do a bit more than its basic input parameters and functions, and in industry terms just a better automated robot among the many automated robots we already have. Again, sorry if this makes someone feel attacked; sadly, these days we have to constantly apologize before uttering any more serious "truth phrase". But currently the only jobs AI can replace are either physical-labor jobs or the types of white-collar jobs that have been useless for quite a while, like some database-oversight guy whose only function is to check for errors in a database. Sure enough, they can retrain and take a more demanding job, and they would have to anyway, because software progresses and requires fewer people.
Arguing @russ_watters' point, I would say AI has increased industrial automation, and that's it. Automation has been going on for a long time now; as @Jarvis323 already said, it's just that now we can automate faster and on a larger scale. It's somewhat like the expansion of the universe, which was expanding but then accelerated.

To put it bluntly: sure, you can't automate as much using relay logic as you can using 10nm-architecture chips running AI software, but then again, what are the options? To stop software and chip progress and stay at relay logic?

That being said, current AI is nowhere near sentient; it's just a John Searle Chinese-room version of AI. It has the capacity for damage, but only in the hands of skillful human users who intend to use it as their "bionic arm", to help them, for example, make a more damaging virus.

That being said, I do believe some professions will be affected disproportionately more than others. If we eventually manage to make AI-assisted driving "a thing", then sure, truck drivers will most likely feel it.
 
  • #307
russ_watters said:
Yes, that's largely my point. I think ChatGPT is being called "AI" largely because it can construct grammatically correct sentences in English; otherwise it is simply a multi-stage search engine. That seems like a really low bar to me, and not one I think proponents of AI really intend to imply.
ChatGPT is basically a large database that can decipher your input, compare it to the large set of info it has in store, and give you an output based on the types of language patterns it has learned from the internet.
It's basically just an automated dictionary, and a very close example of John Searle's Chinese-room thought experiment.

The very proof of this is that, before software engineers corrected it manually, it gave out racist and other hateful language. It did this exactly because it repeated the language types and patterns it had learned from the internet.

If it were anywhere close to truly intelligent, it would have noticed the countless other parts of the same internet where people talked about racism being bad.
It did not; it couldn't, because words as such have no meaning for it, none, zero!
Words for it are much like for any other computer: just inputs that it turns into binary code based on some predetermined interpretation set, after which it finds matching outputs to display. The "intelligent" part of ChatGPT is simply that it can give you an output based on models of reasoning it has learned from humans through the internet, but it doesn't understand those models; it simply copies them.
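As a toy illustration of that "patterns without meaning" idea, here is a minimal word-pair Markov chain in Python. It is deliberately crude and is emphatically not how ChatGPT works internally (real LLMs are large neural networks, not lookup tables); it only shows that fluent-looking output requires no understanding at all:

```python
# Toy Markov-chain text generator: each next word is sampled purely from
# observed word-pair frequencies. No notion of meaning anywhere.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog on the mat").split()

# Build a table: word -> list of words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate: start from a seed word and repeatedly sample a plausible
# next word from the table.
random.seed(1)
word = "the"
out = [word]
for _ in range(12):
    if word not in follows:
        break
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))  # fluent-ish word salad, zero understanding
```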

Basically like a child repeating every phrase a grown-up says: the repetition makes the child sound smart, but in actuality the child is as clueless as a potato.
Or like a politician reading from a teleprompter; the only real job is to read the words correctly. Speaking of politicians... I think I found the one job AI could truly be good at, where no one would notice a difference...

Currently it seems Biden is run by his administration and not the other way around; they could simply switch him out for BidenGPT 4.0 and nothing would change. Just an observation, don't get mad at me...
 
  • #308
artis said:
ChatGPT is basically a large database that can decipher your input, compare it to the large set of info it has in store, and give you an output based on the types of language patterns it has learned from the internet.
It's basically just an automated dictionary, and a very close example of John Searle's Chinese-room thought experiment.

The very proof of this is that, before software engineers corrected it manually, it gave out racist and other hateful language. It did this exactly because it repeated the language types and patterns it had learned from the internet.

If it were anywhere close to truly intelligent, it would have noticed the countless other parts of the same internet where people talked about racism being bad.
Agreed. At the risk of political commentary, I'd suggest that racists aren't A...I either. But yes, that's the point: it's not far beyond a parrot that got ahold of a database. Also, the programmers apparently didn't even provide the database; they just linked it to the internet.
 
  • #309
russ_watters said:
Agreed. At the risk of political commentary, I'd suggest that racists aren't A...I either. But yes, that's the point: it's not far beyond a parrot that got ahold of a database. Also, the programmers apparently didn't even provide the database; they just linked it to the internet.
It seems so. That being said, it's still no small feat to achieve this, or other programs like AlphaFold, which can predict how any protein folds, a very complicated process, so there's a lot to marvel about.

That being said, what I probably dislike most about ChatGPT is that I have to cross-check the information it gives me, because it tends to get things wrong from time to time.
Thank god nobody fed it the flat-earthers' forum database or anything like that, but it has still given me some obviously sketchy answers so far.
 
  • #310
Just as a side note, one of my AI-fanatic friends truly believes that, contrary to R. Penrose's claims, "consciousness is just a complex biological/electrochemical computation", and he thinks we will eventually be able to load human consciousness onto special-purpose hardware, where that consciousness will be able to live much longer than the lifespan of our bodies.

Essentially the AI form of transhumanism.

While we discussed this, among the other counterpoints I said that in that case one should go live in a country with a stable electrical grid... otherwise it might be very detrimental to the well-being of his consciousness.
 
  • #311
I think sensationalism has done its part to instill this "fear". I don't fear "AI" itself; it's just a piece of code. In fact, I think machine learning is a useful tool for solving combinatorially difficult problems. The AI-generated voices/faces worry me as far as identity theft is concerned; they pose a challenge for e-security.

I believe it is more accurate to say this tool makes me fear people with malicious intent and the necessary skill to exploit said technology. But that's no different than saying I fear evil people holding knives. There's nothing specifically about AI (machine learning) that makes me afraid of it.
 
  • #312
russ_watters said:
But yes, that's the point: it's not far beyond a parrot that got ahold of a database. Also, the programmers apparently didn't even provide the database; they just linked it to the internet.

Have you tried it?
 
  • #313
artis said:
The very proof of this is that, before software engineers corrected it manually, it gave out racist and other hateful language ...

Are you talking about reinforcement through human feedback?
 
  • #314
nuuskur said:
I believe it is more accurate to say this tool makes me fear people with malicious intent and the necessary skill to exploit said technology.

So you fear basically everyone with malicious intent who has basic computer skills?

That's a lot of people, including nation states, terrorist groups, and criminal organizations.

You don't think there are more dangerous uses of AI that bad actors could think of than deep fakes?
 
  • #315
I did mention cyber security. One does not produce ML algorithms with basic computer skills. Drop the melodramatics, would you kindly?

A lot of people have this perception that "AI" is some kind of magician that turns water into wine. NP problems do not become P problems out of thin air. Problems that are exceedingly difficult (or even impossible) to solve do not become (easily) solvable. Machine learning is an optimisation tool, and a very good one. If a problem is proved to have complexity ##\Omega(n^k)## but the best algorithm found so far runs in exponential time, it might happen that "AI" finds a better algorithm, but it can never improve past what is proven to be the limit.
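To make that concrete with a standard textbook example: sorting by pairwise comparisons is proven to require ##\Omega(n \log n)## comparisons in the worst case, and mergesort already achieves ##O(n \log n)##, so no optimiser, human or machine-learned, can ever produce a comparison sort that beats that bound. The proof constrains every algorithm, no matter how it was found.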

I like sci-fi as much as the next person, but let's keep that separated from the real world.

Now, proceed with the doomsday sermon.
 
  • #316
nuuskur said:
I did mention cyber security. One does not produce ML algorithms with basic computer skills. Drop the melodramatics, would you kindly?

My point is that all of the dangerous uses of AI you touched on are accessible to anyone with basic computer skills.

nuuskur said:
NP problems do not become P problems out of thin air.

This statement doesn't make sense.

nuuskur said:
Problems that are exceedingly difficult (or even impossible) to solve do not become (easily) solvable. Machine learning is an optimisation tool, and a very good one. If a problem is proved to have complexity ##\Omega(n^k)## but the best algorithm found so far runs in exponential time, it might happen that "AI" finds a better algorithm, but it can never improve past what is proven to be the limit.

Which AI threats are you ruling out based on this argument?
 
  • #317
Jarvis323 said:
Are you talking about reinforcement through human feedback?
I believe that is a very advanced and complicated way of saying "a bunch of programmers tweaking their code" so that it excludes certain phrases considered hurtful by humans.

The AI itself had no understanding of this, nor could it; it doesn't understand meaning, as that is a complex subject only a truly sentient being can grasp, and even then not all of them, anyway...

I do not like the word "reinforcement" in this context, because it makes it sound as if the AI is "learning" and just needs some "teaching/reinforcement" to understand better. That is not the case. An AI could only learn if it had awareness and an understanding of meaning, but those can only arise in conscious beings that are subjective.
Meaning without subjectivity is meaningless!
 
  • #318
artis said:
I believe that is a very advanced and complicated way of saying "a bunch of programmers tweaking their code" so that it excludes certain phrases considered hurtful by humans.

This is a fundamental misunderstanding of not just how it was done, but also what the thing is and how it works.
 
  • #319
Jarvis323 said:
My point is that all of the dangerous uses of AI you touched on are accessible to anyone with basic computer skills.
Fine, semantics.
Jarvis323 said:
This statement doesn't make sense.
That's a relief. Many doomsday preachers would say things like "if NP = P then we are all doomed"; then they heard about AI and said "now AI is gonna break all cyber security and we are all doomed", and so on. The basis for those arguments is that problems which are difficult to solve become easily solvable via some black-box magic, which, thankfully, you agree is not the case.
Jarvis323 said:
Which AI threats are you ruling out based on this argument?
You're getting ahead of yourself. I never said anything was ruled out.
 
  • #320
By the way, to all the AI-scared people, think of it like this: an AI self-driving car can only run a red light by mistake. It can never run the light knowingly, because it doesn't know what it means to make a deliberate mistake, as that would require both consciousness and subjectivity, which are arguably two sides of the same coin.
AI has neither; it doesn't know a damn bit about meaning, nor is it subjective. Electrons in digital switching circuits, represented by binary code arranged in the special way we refer to as an ML algorithm, are actually just matter. This alone, I think, proves that consciousness is an emergent property of something more than just a complex calculation, as many of the materialistically minded AI researchers would love to think.

So be at ease: your self-driving Tesla won't spy on or report your mistress anytime soon...
 
  • #321
Jarvis323 said:
This is a fundamental misunderstanding of not just how it was done, but also what the thing is and how it works.
Please explain, then; I'm never too full of myself to listen, so go ahead.
 
  • #322
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.
 
  • #323
artis said:
Please explain, then; I'm never too full of myself to listen, so go ahead.

You can read about it here.

https://en.m.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

It basically involves training the model by example. People rank sample outputs (e.g., good or bad), and the training algorithm uses those ranked examples to automatically update parameter values through backpropagation, to new values which (hopefully) encode the general rules implicit in the relationships between the examples and their ratings, so that the model can extrapolate and give preferred answers to new, similar prompts.
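As a minimal sketch of just that preference-ranking step, here is a toy linear "reward model" fit to synthetic ranked pairs with the standard pairwise (Bradley-Terry) logistic loss. Real systems rank actual LLM outputs and backpropagate through large networks; every name and number below is illustrative only:

```python
# Toy reward model trained from pairwise human preferences.
# Each answer is summarized by a feature vector; the annotators are
# simulated by a hidden preference direction the model never sees.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Synthetic data: 200 pairs of candidate answers (feature vectors).
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(200)]

# Simulated annotator: prefers the answer better aligned with `hidden`.
hidden = rng.normal(size=dim)
pairs = [(a, b) if a @ hidden > b @ hidden else (b, a) for a, b in pairs]

w = np.zeros(dim)   # reward-model parameters
lr = 0.1            # learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(100):
    for good, bad in pairs:
        # Probability the model assigns to the human's stated preference.
        p = sigmoid(w @ good - w @ bad)
        # Gradient step on the log-likelihood: push the preferred answer's
        # reward up and the rejected answer's reward down.
        w += lr * (1.0 - p) * (good - bad)

# The learned reward direction should align with the hidden preference.
cos = (w @ hidden) / (np.linalg.norm(w) * np.linalg.norm(hidden))
print(f"cosine(learned reward, hidden preference) = {cos:.2f}")
```

In the full pipeline, a reward model trained this way is then used to fine-tune the language model itself (e.g., with a policy-gradient method), which is where the "reinforcement" in the name comes in.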
 
  • #324
russ_watters said:
What, Dave, what's going to happen?

Something wonderful... :wink:
 
  • #325
nuuskur said:
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.

The point of AI is that it enables task automation. So you need to consider at what level of automation you stop calling it fear of the people who ultimately pointed the system at you, or who bear responsibility, and start calling it fear of the AI itself.

Let's consider an example based on guns. You can say "I don't fear guns, I fear people", and that makes some sense. Now let's consider a weapon where the user enters a person's name into a form and presses a button, and then the weapon tracks them down and kills them. Is that enough to be afraid of the AI instead of the button-pusher? How about if, instead of having to manually enter a name, they just enter some traits, and then an algorithm identifies everyone with those traits and sends the weapons out?

So far this still arguably falls into the "AI doesn't kill people, people kill people" category, because the AI doesn't really make a non-transparent decision. You could go further and ask what happens if a person just instructs the AI to eliminate anyone who is a threat to them. Then there is some ambiguity: the AI is deciding who is a threat now, so there is additional room to fear the decision process. You could go further still and suppose you instructed the AI to act in your interest, and as a result the AI decides who is a threat and eliminates them.

Anyway, we obviously need to worry about the AI-human threat even in the absence of non-transparent AI decision making. There is also room to fear AI decision making whenever it becomes involved in subjective or error-prone decisions. But people make bad or threatening enough decisions as it is.

The actions here could be generalized to map out some of the threat profile. Instead of "kill", it could be injure, steal, manipulate, convince, misinform, extort, harass, threaten, oppress, or discriminate. Instead of asking it to identify physical threats, you could ask it to identify political threats, economic threats or competition, people vulnerable to a scam, people who are right-wing or left-wing, or people of some race, gender, ethnicity, or nationality, etc.

Now imagine thousands of AI-based systems simultaneously being put into continuous practice, automating these kinds of actions over large AI-created lists of people, on behalf of thousands of different criminal organizations, terrorist groups, extremist groups, corporations, political organizations, militaries, and so on.

That would comprise one of many possible reasons why a person might want to fear AI or people using AI.

Beyond that, you could fear economic disruption and job loss (which isn't that clear-cut, because technically greater efficiency should lead to better outcomes if we could adapt appropriately). You could fear the unintentional spreading of misinformation. You could fear negative impacts on mental health from ever more addictive digital entertainment, you could fear existential crisis, you could fear the undermining of democracy, you could fear unchecked power accumulation and monopoly, you could fear excessive surveillance or a police state, you could fear over-dependence leading to incompetence, etc.

It is such a complicated profile of threats that, in my opinion, it is hard to wrap one's mind around. A very significant number of those threats are new, and are now current, real-world threats.
 
  • #326
I think the first thing we need to understand is what a program is and what true consciousness is.

In my opinion, a program only does what its users and programmers tell it to do and nothing more, regardless of how "intelligent" and "powerful" it is. If you ask a planet-size computer to clean a room, it will ONLY clean the room. It will never just suddenly decide to rule the human race, because that is not in its programming.

I also still don't think AIs are capable of emotion and true consciousness, because we still don't know how our brains work, and therefore it will not be possible to simulate them using a deterministic machine.

Let's think about a personality trait like "liking cats". I do not like them because I am allergic to them, so when someone mentions cats, my brain makes me think of the times I have teared up uncontrollably in the presence of a cat. Those who do like cats will think of their cats and their beautiful memories together. It triggers a lot of emotions, and we do this effortlessly as humans.

How would an AI ever do that? How would it ever interpret "cats" like we do: as images and memories (whatever those are in programming terms), and not as 1s and 0s?

Right now you can program all kinds of personalities and characteristics into AIs and make them look real, but that is just the duplication of traits from the programmer to the program. My point is: how do AIs ever have "feelings" or "traits" that are "original" and "genuine", i.e. ones not taught by us? If they can't, how can we call them truly conscious and attribute any human characteristics to them?

I think any attempt to anthropomorphize AIs is unrealistic and stupid. Too many video games and movies portray AIs as humans with feelings to be hurt, while in reality they are just tools that make human noises to seem more like us. Would you feel bad if you dropped a hammer on the floor? Would the hammer feel bad?

There are no "good" or "evil" AIs, there are only good and evil programmers.

So I do not fear an AI overlord or an AI uprising, as I think those are impossible provided AIs are in the right hands. Well, maybe I should fear them the same way I fear any WMD, since they can be used by the ill-intentioned.

However, the more realistic concern regarding AI is that I believe it could think very differently from us and misinterpret our orders, leading to unexpected results.

For example, that planet-size computer I mentioned before may have its own interpretation of what "clean the room" actually means. It could interpret cleaning the room as removing anything in it, which could include myself, and therefore conclude that it should just blow me into atoms with lasers. But this is still within its predetermined goal, since the room is in fact clean now. If we ask the same computer to "make everyone happy", it could forcibly put everyone in a simulation where we live the best life we could ever imagine, and preserve our bodies in pods like in the Matrix.
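A minimal sketch of that mis-specified-objective failure mode, with made-up room contents and a made-up scoring function:

```python
# Toy "clean the room" optimizer with a mis-specified objective:
# the score rewards having fewer things in the room, so a literal-minded
# agent removes the occupant along with the dirt.
room = {"dust", "wrappers", "dirty socks", "occupant"}

def score(state: set) -> int:
    # Mis-specified objective: fewer things present = "cleaner".
    return -len(state)

state = set(room)
# Greedy agent: keep removing items as long as removal improves the score
# (with this objective, every removal does).
while any(score(state - {item}) > score(state) for item in state):
    item = next(iter(state))
    state.discard(item)
    print(f"removed {item!r}, score is now {score(state)}")

print("final room contents:", state if state else "(empty, occupant and all)")
```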

But ultimately it is our fault that it thinks differently than us, because we failed to consider all the possibilities when making the program. This, I think, is the biggest risk involved in dealing with an AI with great capabilities, because it is our responsibility to regulate an AI's behavior. My point is: as AIs become more and more powerful, we need to be more and more specific and careful in programming them and in setting boundaries on what they can and cannot do. We also need to think more and more like them to make sure they do exactly what we intend them to do.
 
  • #327
Aperture Science said:
I think the first thing we need to understand is what a program is and what true consciousness is...

The first thing to do is understand what machine learning is.

Aperture Science said:
But ultimately it is our fault that it thinks differently than us, because we failed to consider all the possibilities when making the program.

All of the possibilities of backpropagation? What kind of programs are you imagining?
 
  • #328
Jarvis323 said:
The first thing to do is understand what machine learning is.
All of the possibilities of backpropagation? What kind of programs are you imagining?
I know very little about machine learning and programs. I only know that Google uses CAPTCHA to have users train its AI to identify traffic pictures. That's about it.

What I was referring to by "possibilities" are all the ways an AI can solve a problem within its capabilities. Although the desired results are achieved, there may be unwanted side effects. So I was trying to make the point that, when developing AIs, we should be cautious and set adequate, clearly defined goals and restrictions, as AIs will gradually have more resources at their disposal.

One example would be something like Asimov's three laws of robotics (though still a fictional construct). Another would be the backpropagation you mentioned (I just googled what it is).

I'm just an enthusiast, so I don't know much detail.

:)
 
  • #329
A.I. race should pause for six months, says Elon Musk and others
https://finance.yahoo.com/video/race-pause-six-months-says-212621876.html

Tech leaders urge a pause in the 'out-of-control' artificial intelligence race

https://www.npr.org/2023/03/29/1166...e-out-of-control-artificial-intelligence-race

AI could become a danger if it is put 'in control' of critical systems and the input includes erroneous data/information.

AI for analysis of large data sets can be useful, but if the input is incorrect, then erroneous or false conclusions may occur. It may be minor, or it could be major, or even severe.

Remember - garbage in, garbage out.

How will AI check itself for error/misinformation/disinformation?
 
  • #330
Astronuc said:
A.I. race should pause for six months, says Elon Musk and others
If there is money to be made, a pause will never happen. Also, Elon is just bitter that he couldn't buy OpenAI.
 
