Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
AI Thread Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #251
Astronuc said:
This is a rather critical aspect. How will AI connect with the human world? Controlling power grids? Controlling water supply? Controlling transportation systems, e.g., air traffic control? Highway traffic control?
It is, and all of those systems are commonly cited as examples of where AI can provide better outcomes (usually lower cost and fewer errors) than people do. Certainly, clever pattern matching algorithms can reduce cost and error rates in those domains, but they are not 'intelligent' in the sense humans generally mean by the term, and it is not clear to me how or why their 'intelligence' would grow such that they became a threat (or even a help) beyond the specific parameters set by their original model.

But a "4th or 8th grader" in charge of a large real-world network / system, could cause havoc "just because", and that's really likely, even if it is via a programming bug rather than self-aware mischief making.
 
  • #252
The danger of AI resides in how much capability we will not recognize that it has and how much control we will give it. Like nuclear energy, AI will be developed by anybody. Like nuclear energy, there was an initial barrier to its widespread development and implementation, but with time those barriers became lower; the same is happening with AI. Initially, AI development was limited by time and the need for massive computing resources.

Recently, Cerebras, a computer chip design company, produced the largest computer chip ever, obviating the need for the thousands of GPUs normally needed to develop advanced AI. They claim that a computer incorporating this chip will be able to handle 100 times more parameters than current AI models such as GPT-3. A computer with this chip will reduce the cost of development by making programming much easier, reducing the power requirements, and reducing the training time for neural networks from months to minutes.
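To get a feel for the scale being claimed, here is a back-of-envelope calculation (my own arithmetic; the 100x figure is Cerebras's claim, and GPT-3's roughly 175 billion parameters is the commonly cited number):

```python
# Back-of-envelope scale check (my own arithmetic, not Cerebras's figures).
# GPT-3 is usually cited at ~175 billion parameters; "100 times more" would
# mean roughly 17.5 trillion, and just holding those weights in 16-bit
# precision is tens of terabytes before any optimizer state is counted.
gpt3_params = 175e9
scale_factor = 100
bytes_per_param_fp16 = 2

params = gpt3_params * scale_factor
weight_bytes = params * bytes_per_param_fp16

print(f"parameters: {params:.2e}")                          # ~1.75e+13
print(f"fp16 weights alone: {weight_bytes / 1e12:.0f} TB")  # ~35 TB
```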

Should we be concerned? Will this development be like fire in the hands of a child? Will common sense prevail?
 
  • #254
gleem said:
Should we be concerned?
Yes.

gleem said:
Will this development be like fire in the hands of a child?
Probably.

gleem said:
Will common sense prevail?
No!
 
  • #255
https://www.nature.com/articles/d41586-022-01921-7#ref-CR1

"Inspired by research into how infants learn, computer scientists have created a program that can learn simple physical rules about the behaviour of objects — and express surprise when they seem to violate those rules. The results were published on 11 July in Nature Human Behaviour1."
 
  • Wow
Likes Melbourne Guy
  • #256
So far, most AI is what I call "AI in a bottle". We uncork the bottle to see what is inside. The AI "agents", as some are called, are asked questions and provide answers based on the relevance of the question to words and phrases of a language. This is only one of the many aspects that true intelligence has. AI as we currently experience it has no contact with the outside world other than being turned on to respond to some question.
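As a deliberately crude illustration of "answers based on the relevance of the question to words and phrases" (my own toy example, not how any particular system actually works), a scorer like this just counts word overlap; nothing in it understands anything:

```python
# Toy relevance scorer: return the stored answer whose words overlap the
# question the most. Crude word matching only; no understanding anywhere.
def best_answer(question, answers):
    q_words = set(question.lower().split())
    return max(answers, key=lambda a: len(q_words & set(a.lower().split())))

answers = [
    "The power grid converts and distributes electrical energy.",
    "A genie grants wishes when released from its bottle.",
]
print(best_answer("how does the power grid distribute energy", answers))
```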

However, researchers are giving AI more intelligent functionality. Giving it access to, or the ability to interact with, the outside world without any prompts may be the beginning of what we might fear.

A few stanzas from the song "Genie in a Bottle" might depict our fascination and caution
with AI, sans the sexual innuendo. Maybe there is some reason to think AI might be a djinn.

I feel like I've been locked up tight​
For a century of lonely nights​
Waiting for someone to release me​
You're lickin' your lips​
And blowing kisses my way​
But that don't mean I'm going to give it away​
If you want to be with me​
Baby, there's a price to pay​
I'm a genie in a bottle (I'm a genie in a bottle)​
You got to rub me the right way​
If you want to be with me (oh)​
I can make your wish come true (your wish come true oh)​
Just come and set me free, baby​
And I'll be with you​

I'm a genie in a bottle, baby​
Come come, come on and let me out​
 
  • #258
It's not clear how much AI capability one can put in this small a robot. I think it would need at least the capability of the auto-drive system that Tesla has to make it useful. Although, on second thought, if you gave it IR vision, a human target would stand out dramatically at night, making target IDs less of a problem.
 
  • #259
Melbourne Guy said:
I can't recall if the Boston Dynamics-looking assault rifle toting robot has been mentioned, but it's genuinely scary!
Scary? Yes, but that's a knock-off bot; check out the Russian theme. Spot's potential was made pretty clear by MSCHF. Personally, I think the video in the TT article was a bit of sensationalism on the part of a particular country. Scary? Very, but it gets better/worse. This is what Spot's creators are showing off lately.
https://www.bostondynamics.com/atlas

It's also a pretty good bet that the DARPA dog in the video could handle the auto-fire recoil a lot better than the knock-off.
 
  • #260
gleem said:
It's not clear how much AI capability one can put in this small a robot. I think it would need at least the capability of the auto-drive system that Tesla has to make it useful. Although, on second thought, if you gave it IR vision, a human target would stand out dramatically at night, making target IDs less of a problem.
The AIs in my novels are often distributed swarm minds; it's a pretty common theme in sci-fi. These small units could be peripherals of a larger set, communicating by RF. You'd think that would be easy to interfere with, but spread-spectrum radios can be resistant to jamming!
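A minimal sketch of why that is (my own toy frequency-hopping example, not any real radio protocol): both ends derive the same pseudo-random channel sequence from a shared seed, so a jammer that doesn't know the seed has to flood the whole band at once.

```python
# Toy frequency-hopping schedule: a transmitter and receiver sharing a seed
# derive the same pseudo-random channel sequence, so a narrowband jammer
# parked on one channel only disrupts a small fraction of hops.
import random

def hop_sequence(shared_seed, n_hops, n_channels=64):
    rng = random.Random(shared_seed)        # deterministic for a given seed
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx = hop_sequence(shared_seed=1234, n_hops=20)
rx = hop_sequence(shared_seed=1234, n_hops=20)
assert tx == rx                             # both ends stay in sync

jammed_channel = 17
hit_rate = sum(ch == jammed_channel for ch in tx) / len(tx)
print(f"hops lost to a single-channel jammer: {hit_rate:.0%}")
```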

Oldman too said:
Scary? Yes, but that's a knock-off bot; check out the Russian theme.
Hadn't seen Spot; I wonder if those were used in that recent War of the Worlds TV series? But that aside, it's straightforward to imagine a hostile AI either taking over bots like this or crafting its own versions. That's all in the future, of course; at the moment, we have to design and build the tools of our own downfall :nb)
 
  • #261
I've seen worse. Drone with a flamethrower.
 
  • Sad
Likes Melbourne Guy
  • #262
profbuxton said:
[...] Even [Asimov's] famed three laws of robotics didn't always stop harm occurring in his tales. [...]

I was under the impression that his three laws of robotics were a plot device invented to drive his stories, with the flaws presumably intended to provide drama. As insurance against out-of-control AI, they sound way too catchy. :)
 
  • Like
Likes russ_watters
  • #263
sbrothy said:
I was under the impression that his three laws of robotics were a plot device invented to drive his stories, with the flaws presumably intended to provide drama. As insurance against out-of-control AI, they sound way too catchy. :)
I mean, when something related to AI references itself as his "laws" do, you can be sure it's going to be "exciting". :)
 
  • #264
Former Google CEO Eric Schmidt gives a warning about the world's lack of preparedness to deal with AI.

 
  • #265
I've been listening to a presentation by a company that is developing autonomous machines. One application is construction equipment or heavy machinery, with the objective of replacing human operators with AI systems that monitor a variety of sensors so the AI controller can be 'aware of the environment'. So, as with autonomous cars, trucks, trains, ships, and planes, (human) heavy equipment operators can be replaced by a computer system.

I reflect on locomotives, which have hundreds of sensors to monitor the condition of the prime mover, power conversion system, and traction system. Apparently, any piece of equipment can be modified to replace a human operator, and the control is much smoother, so less wear and tear on the equipment.

A number of tech companies are sponsoring the research.
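A made-up sketch of the 'aware of the environment' pattern described above (not any particular vendor's system): poll the sensors, refuse to move when any reading is out of bounds, and ramp toward the commanded setpoint instead of jumping to it, which is where the smoother control and reduced wear come from.

```python
# Toy supervisory loop for an autonomous machine (illustrative only):
# read sensors, hold position if any limit is violated, otherwise ramp
# the actuator toward its target instead of slamming to it.
SENSOR_LIMITS = {"hydraulic_temp_c": 90.0, "load_kg": 5000.0, "tilt_deg": 15.0}

def within_limits(readings):
    """True only if every monitored reading is at or below its limit."""
    return all(readings[name] <= limit for name, limit in SENSOR_LIMITS.items())

def step_toward(current, target, max_step=0.05):
    """Move only a little per control tick -> smoother actuation, less wear."""
    delta = target - current
    return current + max(-max_step, min(max_step, delta))

position, target = 0.0, 0.3
readings = {"hydraulic_temp_c": 71.0, "load_kg": 3200.0, "tilt_deg": 4.0}

for _ in range(10):                  # ten control ticks
    if within_limits(readings):
        position = step_toward(position, target)

print(round(position, 2))            # reaches the 0.3 setpoint gradually
```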
 
  • #266
Astronuc said:
So, as with autonomous cars, trucks, trains, ships, and planes, (human) heavy equipment operators can be replaced by a computer system.
Most construction sites that I've worked on have a standing caveat: "No one is irreplaceable" (this applies to more situations than just construction). As an afterthought, I'll bet the white hats will be the first to go if AI takes over.
 
  • #267
Oldman too said:
As an afterthought, I'll bet the white hats will be the first to go if AI takes over.
What's a white hat?
 
  • #268
On any typical construction site, a "white hat" denotes a foreman or boss. Besides the obligatory white hard hat, they can also be identified by carrying a clipboard and having a cell phone constantly attached to one ear or the other. They are essential on a job site; however, they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
 
  • Skeptical
  • Like
Likes russ_watters and Melbourne Guy
  • #269
Oldman too said:
they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
Do you have any idea what will replace them?
 
  • Like
  • Informative
Likes russ_watters and Oldman too
  • #270
gleem said:
Do you have any idea what will replace them?
Not in the least, but it will probably involve artificially intelligent algorithms.
 
  • #271
gleem said:
Do you have any idea what will replace them?
"Chimps on rollerskates?"
 
  • Like
  • Informative
  • Love
Likes russ_watters, Oldman too and gleem
  • #272
Oldman too said:
They are essential on a job site; however, they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.
I agree, @Oldman too. If you are investing in ML / AI to replace labour, picking off the highest-paid, hardest-to-replace roles seems economically advantageous to the creator and the buyer of the system.
 
  • Skeptical
  • Like
Likes russ_watters and Oldman too
  • #273
If you want to save some serious money, replace the CEO, COO, CFO, CIO, and CTO. Well, maybe not the CTO, since he might be the one doing the replacement. After all, they run the company through the computer system, reading and writing reports and holding meetings, all of which AI is optimally suited to do.
 
  • Skeptical
  • Like
Likes russ_watters and Oldman too
  • #274
gleem said:
If you want to save some serious money, replace the CEO, COO, CFO, CIO, and CTO
Well, hopefully the C-suite is providing strategy, inspiration, leadership, and capital-raising activities that are so far hard for AI to replicate, but yeah, eventually...
 
  • #275
Melbourne Guy said:
eventually...
That's what scares me.
 
  • Haha
Likes Melbourne Guy
  • #276
Oldman too said:
That's what scares me.
That's why I'm lowest man on the totem pole, @Oldman too. By the time Colossus comes for me, I'll be well retired :biggrin:
 
  • #277
Melbourne Guy said:
That's why I'm lowest man on the totem pole, @Oldman too. By the time Colossus comes for me, I'll be well retired :biggrin:
A good plan!
 
  • #278
Considering the advances in AI in the last 6 months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?

An update of GPT-4, GPT-4.5, is expected around October of this year.
 
  • #279
gleem said:
Considering the advances in AI in the last 6 months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?
Nothing; still no fear. I see a few comments from last year above that I disagree with too, and the succinct reply is: I think people watch too many movies.
 
Last edited:
  • #280
gleem said:
Considering the advances in AI in the last 6 months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?

An update of GPT-4, GPT-4.5, is expected around October of this year.

There was an open letter released today calling for a pause.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/
 
  • #281
Jarvis323 said:
There was an open letter released today calling for a pause.
Yes, to establish rules for the implementation and safeguards of AI. Additionally, Goldman Sachs has issued a report that projects 300M jobs worldwide could be replaced, with the US, Israel, Sweden, Hong Kong, and Japan most likely to be affected. If you are in HS or college, just entering the workforce, you have to make some decisions. Are there enough jobs in the near term to replace those that have been eliminated?

But if you're sitting in front of a computer for work, you may have something to worry about. Office and administrative support jobs are at the highest risk at 46%. Legal work follows at 44%, with architecture and engineering at 37%.
https://www.msn.com/en-us/news/othe...n&cvid=9c1a7e825ef341ad8a6ae4dd148361a0&ei=25
 
  • #282
gleem said:
Are there enough jobs in the near term to replace those that have been eliminated?

It's difficult to answer, because if making sure people have jobs is the goal, then what we have to do is find new jobs for displaced workers as quickly as AI displaces workers. But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyway.

So there are two problems. The first is how we can figure out a way of life where it's OK if nobody has a job. And the second is how we can safely get through the transition from this way of life to the other. And it's especially hard because we aren't really (seemingly) in control, and don't know for sure how fast it will happen or exactly what lies ahead along the way.
 
  • #283
Jarvis323 said:
It's difficult to answer, because if making sure people have jobs is the goal, then what we have to do is find new jobs for displaced workers as quickly as AI displaces workers. But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyway.

So there are two problems. The first is how we can figure out a way of life where it's OK if nobody has a job. And the second is how we can safely get through the transition from this way of life to the other. And it's especially hard because we aren't really (seemingly) in control, and don't know for sure how fast it will happen or exactly what lies ahead along the way.
To me, these are good reasons to fear AI.
 
  • #284
Jarvis323 said:
It's difficult to answer, because if making sure people have jobs is the goal, then what we have to do is find new jobs for displaced workers as quickly as AI displaces workers. But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyway.

So there are two problems. The first is how we can figure out a way of life where it's OK if nobody has a job. And the second is how we can safely get through the transition from this way of life to the other. And it's especially hard because we aren't really (seemingly) in control, and don't know for sure how fast it will happen or exactly what lies ahead along the way.
People think they hate their jobs, but sitting around all day doing nothing is even worse. In a world where AI takes over nearly every job, people will have to find meaningful ways to occupy their minds, and surely not everyone will be able to.

A tool created to solve all our problems might create another one that is unsolvable.
 
  • #285
JLowe said:
People think they hate their jobs, but sitting around all day doing nothing is even worse. In a world where AI takes over nearly every job, people will have to find meaningful ways to occupy their minds, and surely not everyone will be able to.

A tool created to solve all our problems might create another one that is unsolvable.

Not all jobs are better than pure freedom. Say, for example, spending 8 hours a day flipping burgers. What is so great about that? Maybe someone likes that job, maybe they enjoy the social aspect of being at work.

But if it wasn't necessary anymore, and you really think it is important that they don't get a free ride, or have too much freedom, you could force them to do something unnecessary for a meal ticket. How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people, otherwise no cake.
 
  • #286
dlgoff said:
To me, these are good reasons to fear AI.
Jarvis323 said:
But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyway.
It's nonsense, at least until we achieve a science-fantasy world that isn't on any predictable time horizon*.

First and foremost, you are confusing physical automation with AI. AI is not the human-replacement robots you see in the movies; even in the most ambitious interpretations, it's human-replacement intelligence in computers. Physical automation has been happening for 200 years, is independent of AI, and does not end in the elimination of all physical, much less mental, work.

And second, you are assuming these sentient robots and computers are accepted by humans as human replacements. Do you honestly think that people will accept robot baseball players, actors, etc. or that governments will accept fully AI-created engineering drawings? [edit] ...er...and the citizens will accept AI government? Again, this is beyond just about every science-fantasy I've seen except maybe The Matrix.

*Nor have I ever even seen this speculated about in science-fantasy media. Wall-E, maybe?
 
Last edited:
  • Like
Likes artis and PeterDonis
  • #287
Companies probably will not replace jobs with AI without first verifying that using AI is worth it. That may not take that long; weeks or months. This will be stressful for many who feel that they may be replaced even if they aren't. I suspect companies will carry out this verification surreptitiously.

The letter referred to above said that efforts should be made to not let AI take jobs that are fulfilling. Can a capitalistic economy be prevented from using AI for any job it sees fit?
JLowe said:
People think they hate their jobs, but sitting around all day doing nothing is even worse. In a world where AI takes over nearly every job, people will have to find meaningful ways to occupy their minds, and surely not everyone will be able to.

A tool created to solve all our problems might create another one that is unsolvable.
Another fear is that AI will further divide the population economically, putting more wealth into fewer hands.
And what happens when you have a lot of young men who are idle?

Jarvis323 said:
How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people otherwise no cake.
What has social media told us about conversations? Be careful what you talk about. The Civilian Conservation Corps (CCC) fits into this solution, as does the Works Progress Administration (WPA), which maintained or replaced our infrastructure during the Great Depression. These are useful and can be fulfilling too.
 
  • #288
Jarvis323 said:
AI displaces workers
Which AI is going to displace which workers? Can you give some specific examples?
 
  • #289
russ_watters said:
Nonsense, at least until we achieve a science-fantasy world that isn't on any predictable time horizon*.

First and foremost, you are confusing physical automation with AI. AI is not the human-replacement robots you see in the movies; even in the most ambitious interpretations, it's human-replacement intelligence in computers. Physical automation has been happening for 200 years, is independent of AI, and does not end in the elimination of all physical, much less mental, work.

And second, you are assuming these sentient robots and computers are accepted by humans as human replacements. Do you honestly think that people will accept robot baseball players, actors, etc. or that governments will accept fully AI-created engineering drawings? [edit] ...er...and the citizens will accept AI government? Again, this is beyond just about every science-fantasy I've seen except maybe The Matrix.

*Nor have I ever even seen this speculated about in science-fantasy media.

You're right that there will still be non-essential jobs that we may choose to do or to have humans do. But figuring out how that will work out seems essentially the same as figuring out how it would work if nobody had a job.
 
  • Skeptical
Likes russ_watters
  • #290
gleem said:
Additionally, Goldman Sachs has issued a report that projects 300M jobs worldwide could be replaced, with the US, Israel, Sweden, Hong Kong, and Japan most likely to be affected. If you are in HS or college, just entering the workforce, you have to make some decisions. Are there enough jobs in the near term to replace those that have been eliminated?
Only 300M? Over what timeframe?
[additional quote from the article] "Of U.S. workers expected to be affected, for instance, 25% to 50% of their workload can be replaced," the report says.
Again, that's it? Under that criterion, I would have assumed that over the course of a 40-year career, every white-collar job is rendered at least 25% obsolete, and that this has been true for a hundred years or more. I took a paper-and-pencil mechanical drafting course in high school in like 1993. I've never met a current mechanical drafter, and my company doesn't have any employees with the title "Drafter" (now CAD), and hasn't since I joined 15 years ago.
 
  • #291
gleem said:
Companies probably will not replace jobs with AI without first verifying that using AI is worth it. That may not take that long; weeks or months. This will be stressful for many who feel that they may be replaced even if they aren't. I suspect companies will carry out this verification surreptitiously.
Was that a response to my question about timeframe? What's with this cloak-and-dagger stuff? When has that ever been a thing? These things are never a secret nor are they ever announced ahead of time (nor do they need to be). They just happen.

Since I'm using movies for examples, here's a non-fiction example of how it works in real life: "Hidden Figures". In the 1960s, a "Computer" was a person with a calculator, and the "Computers" were a room full of them. When NASA installed a "digital computer," it was not a secret/mystery to the "Computers" what was to become of them.
 
  • Like
Likes Borg and dlgoff
  • #292
PeterDonis said:
Which AI is going to displace which workers? Can you give some specific examples?
[sigh] What really annoys me about this thread/these discussions here and in the public is that they aren't even fantasy, they are pre-fantasy. Speculation about fantasy without actually developing the fantasy. What, Dave, what's going to happen?
 
  • Like
Likes OCR and PeterDonis
  • #293
Jarvis323 said:
Not all jobs are better than pure freedom. Say, for example, spending 8 hours a day flipping burgers. What is so great about that? Maybe someone likes that job, maybe they enjoy the social aspect of being at work.

But if it wasn't necessary anymore, and you really think it is important that they don't get a free ride, or have too much freedom, you could force them to do something unnecessary for a meal ticket. How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people, otherwise no cake.
It's not about forcing someone to do something unnecessary. It's about what will happen to people when they are no longer needed for anything and are being spoon-fed by the machines.

And there's no reason to assume a horde of useless, bored 20-year-olds wouldn't get drunk and tear the whole thing down anyway.
 
  • #294
Jarvis323 said:
You're right that there will still be non-essential jobs that we may choose to do or to have humans do. But figuring out how that will work out seems essentially the same as figuring out how it would work if nobody had a job.
You didn't actually respond to either point I made.
 
  • #295
russ_watters said:
First and foremost, you are confusing physical automation with AI.

How so?

russ_watters said:
AI is not the human-replacement robots you see in the movies; even in the most ambitious interpretations, it's human-replacement intelligence in computers.

This is just semantics.

russ_watters said:
Physical automation has been happening for 200 years, is independent of AI, and does not end in the elimination of all physical, much less mental, work.

I am having trouble getting your point. Are you saying that since we've gone 200 years with incremental advancements in automation without reaching a point that it can replace all of our jobs, if it could happen, it should have happened by now?

russ_watters said:
And second, you are assuming these sentient robots and computers are accepted by humans as human replacements. Do you honestly think that people will accept robot baseball players, actors, etc.

You're right, but again, are these sorts of jobs enough by themselves? Not everyone can be a celebrity for a living.

russ_watters said:
or that governments will accept fully AI-created engineering drawings?

Yes, I think it should be on the easier side for AI to do this.

russ_watters said:
[edit] ...er...and the citizens will accept AI government? Again, this is beyond just about every science-fantasy I've seen except maybe The Matrix.

Replaceable by AI, yes. More efficient than humans, yes. Accepted by humans? I don't know. Will we always have a choice? I don't know.
 
Last edited:
  • #296
Jarvis323 said:
Not all jobs are better than pure freedom. Say, for example, spending 8 hours a day flipping burgers. What is so great about that? Maybe someone likes that job, maybe they enjoy the social aspect of being at work.

But if it wasn't necessary anymore, and you really think it is important that they don't get a free ride, or have too much freedom, you could force them to do something unnecessary for a meal ticket. How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people, otherwise no cake.
Hmm. They could develop elaborate rapidly changing social codes and punish those who fail to keep up. They could sue each other for absurd reasons. They could build machines capable of destroying all life on Earth. They could amass extensive collections of PEZ dispensers.

It's a good thing we have jobs so that people don't do such things.
 
  • #297
Here is the Goldman Sachs report. I have not yet read it.

https://www.key4biz.it/wp-content/u...ligence-on-Economic-Growth-Briggs_Kodnani.pdf

An earlier (2018) report forecast 400M jobs replaced by 2030. This just popped up in a Time editorial by Eliezer Yudkowsky: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky

I do not recall any other expert in AI trying to arouse such a sense of urgency about stopping AI development. He is not worried about the economic or social impact of AI, but states that we do not know what we are playing with, and that alone should prevent us from proceeding until we do know.

https://www.msn.com/en-us/money/oth...n&cvid=f30c803c396b4c20a073016b01627d6e&ei=63
 
  • Like
Likes russ_watters
  • #298
Thanks for being responsive...
Jarvis323 said:
How so?

This is just semantics.

I am having trouble getting your point.
?? Are you saying you don't see the difference between a physical job and a mental one? Engineering vs. basketball? If we code a piece of software that can replace an engineer, that doesn't mean we'll be able to build a robot to play basketball. Or vice versa: a basketball-playing robot wouldn't necessarily qualify as AI. So the idea that AI could replace all jobs is wrong, first, because AI can't replace most physical jobs at all; they are completely separate things. In other words, your belief that AI can replace all jobs, including physical ones, must mean you are wrongly conflating physical and mental jobs -- robots and AI.
Jarvis323 said:
Are you saying that since we've gone 200 years with incremental advancements in automation without reaching a point that it can replace all of our jobs, if it could happen, it should have happened by now?
Not exactly. I'm saying most physical and mental jobs have already been replaced. But new jobs are always created. Thus there is no reason to believe there will be a point where we can't think of a job for humans to do.
Jarvis323 said:
You're right, but again, are these sorts of jobs enough by themselves? Not everyone can be a celebrity for a living.
To be frank, I think you lack imagination on this issue (which is ironic, because I also think you are using that lack of imagination as the inspiration for your fear -- like fear of the dark). Humans are exceptionally good at thinking of things they'd be willing to pay someone else to do. So much so that there has rarely been a time when the jobs available were wildly out of alignment with the job-seekers, even during the various phases of the industrial revolution (except at a local level). There are a ton of jobs that, even if we could automate them, we will choose not to, because being human matters in those jobs. Performance jobs are just one of a legion of examples that will be difficult if not impossible to replace. Any job where human emotion matters (psychologists/counselors), human judgement matters (government, charity work), or human interaction matters (teachers, police) has to be done by humans. This can't change until/unless we can no longer tell androids from humans, which is to say, likely never.
Jarvis323 said:
Yes, I think it should be on the easier side for AI to do this.
There's no chance. I don't know what you are thinking/why, but governments are an authority and engineers who submit drawings for permit are recognized and tested experts with liability for mistakes. You can't replace either side of that with an AI unless we reach a point far off in the future where sentient android robots are accepted as fully equal to humans (like Data from Star Trek). Can you explain your understanding/thought process? It feels very superficial - like, 'an AI is intellectually capable of reviewing a drawing, so it will happen'.
Jarvis323 said:
Replaceable by AI, yes. More efficient than humans, yes. Accepted by humans? I don't know. Will we always have a choice? I don't know.
That really is the stuff of far-off fantasy, with little grounding in the reality of what we have and know today, either for what AI is capable of or for what humans are needed for.
 
  • #299
Point of order, here:
gleem said:
Considering the advances in AI in the last 6 months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?

An update of GPT-4, GPT-4.5, is expected around October of this year.
Are you saying ChatGPT qualifies as AI? I know that's what the developers claim. Do you agree? I don't. I don't even think it qualifies under the more common weaker definitions of AI, much less the stronger ones...unless my thermostat is also AI. I see a massive disconnect between the fantasy-land fears being described in this thread and the actual status of claimed AI.
 
  • #300
russ_watters said:
Are you saying ChatGPT qualifies as AI? I know that's what the developers claim. Do you agree? I don't.
One could argue that ChatGPT does have at least one characteristic of human intelligence (if we allow that word to be used for this characteristic by courtesy), namely, making confident, authoritative-sounding pronouncements that are false.

(One could even argue that this kind of "AI" could indeed displace some human jobs, since there are at least some human jobs where that is basically the job description.)

But I don't think that's the kind of "AI" that is being described as inspiring fear in this discussion.
 
  • Like
Likes OCR and russ_watters
