Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #201
There are a lot of different possible scenarios that lead to conditions that most would agree are to be avoided at all costs and all of those scenarios depend on loss of control over some period of time, but where, contrary to what several on this thread seems to argue for, there are no clear point along the path to the bad scenario where we actually choose to stop. For instance, if two armed superpowers compete in an AI race and both get to ASI level, then it is almost given that they will be forced to apply ASI to their military capabilties in order to not loose out. The argument is that at any given point there is a large probability that those in control will always want to continue because for them the path still go towards a benefit ("not loosing the war") and the the worst-case scenario (where no human really are in control) is still believe to be theorical or preventable furter down the path. Note that the human decision mechanisms in such scenarios are (as far as I see it) almost identical to the mechanisms that lead to the nuclear arms race, so we can take it as a historical fact that human in all likelyhood are prone to choose "insane" paths when conditions are set right (this is meant to address the counter-arguments that "clearly no one will be so insane as to give AI military control so therefore there can be no AI military doomsday scenario"). But this is just one type of scenario.

As could be expected from previous discussions, this thread seem goes in a lot of different directions and often gets hung up on some very small details that are difficult to see if is relevant or not, or stick to a very specific scenario while ignoring others. In regards to scenarios with severe bad outcome for the majority of humans, they all (as far as I am aware) hinge on 1) the emergence of scalable ASI and 2) the gradual voluntary loss of control by the majority of human because ASI simply does everything better. Now 1) may prove to be impossible for some yet unknown reason, but right now we are not aware of any reason why ASI should not be possible at some point in the future and given the current reasearch effort we cannot expect ASI reasearch to stop by itself (the benefits are simply too luring for us humans). That leaves 2), the loss of control, or more accurately, loss of power of the people.

So to avoid anything bad we "just" have to ensure people remains in power. On paper, a simple and sane way to avoid most of the severe scenarios is to do what we already know works fairly well in human world affairs, namely to ensure the majority of humans remains truthful informed and in enough control so they well in advance can move towards blocking out paths towards bad scenarios. In practice this may prove more difficult with ASI because of how hard it is to well in advance discern paths towards beneficial scenarios from bad ones. And on top of that, addressing my main current concern, we also have some of the select few in current political and technological power that are actively working towards eroding the level of power the people have over AI, with the risk that over time the majority will not be able to form any coherent consensus and even if they do they may not have any real options for coordinated control or even opting out for themselves (relevant for scenarios where the majority of humans at that point are on universal income and all production is dirt cheap because of ASI).

And to steer a bit towards the thread topic of AI hype, maybe we all here can agree that constructive discussions of both benefits and potential risks of AI are suffering from the high level of hype, of which much hinging on the possibility of ASI. It may thus add to constructive discussion if we separate those cases. For instance, if the invention of ASI is a precondition for a specific scenario (like it is for most of the worst-case scenarios) then arguing against the existence of ASI when discussing such scenarios is not very helpful for anyone. I personally find discussions about whether or not ASI can exist interesting and extremely relevant, but its a bit separate discussions from the potiental consequences of ASI.
 
Computer science news on Phys.org
  • #202
Filip Larsen said:
There are a lot of different possible scenarios that lead to conditions that most would agree are to be avoided at all costs and all of those scenarios depend on loss of control over some period of time, but where, contrary to what several on this thread seems to argue for, there are no clear point along the path to the bad scenario where we actually choose to stop. For instance, if two armed superpowers compete in an AI race and both get to ASI level, then it is almost given that they will be forced to apply ASI to their military capabilties in order to not loose out. The argument is that at any given point there is a large probability that those in control will always want to continue because for them the path still go towards a benefit ("not loosing the war") and the the worst-case scenario (where no human really are in control) is still believe to be theorical or preventable furter down the path. Note that the human decision mechanisms in such scenarios are (as far as I see it) almost identical to the mechanisms that lead to the nuclear arms race, so we can take it as a historical fact that human in all likelyhood are prone to choose "insane" paths when conditions are set right (this is meant to address the counter-arguments that "clearly no one will be so insane as to give AI military control so therefore there can be no AI military doomsday scenario"). But this is just one type of scenario.

As could be expected from previous discussions, this thread seem goes in a lot of different directions and often gets hung up on some very small details that are difficult to see if is relevant or not, or stick to a very specific scenario while ignoring others. In regards to scenarios with severe bad outcome for the majority of humans, they all (as far as I am aware) hinge on 1) the emergence of scalable ASI and 2) the gradual voluntary loss of control by the majority of human because ASI simply does everything better. Now 1) may prove to be impossible for some yet unknown reason, but right now we are not aware of any reason why ASI should not be possible at some point in the future and given the current reasearch effort we cannot expect ASI reasearch to stop by itself (the benefits are simply too luring for us humans). That leaves 2), the loss of control, or more accurately, loss of power of the people.

So to avoid anything bad we "just" have to ensure people remains in power. On paper, a simple and sane way to avoid most of the severe scenarios is to do what we already know works fairly well in human world affairs, namely to ensure the majority of humans remains truthful informed and in enough control so they well in advance can move towards blocking out paths towards bad scenarios. In practice this may prove more difficult with ASI because of how hard it is to well in advance discern paths towards beneficial scenarios from bad ones. And on top of that, addressing my main current concern, we also have some of the select few in current political and technological power that are actively working towards eroding the level of power the people have over AI, with the risk that over time the majority will not be able to form any coherent consensus and even if they do they may not have any real options for coordinated control or even opting out for themselves (relevant for scenarios where the majority of humans at that point are on universal income and all production is dirt cheap because of ASI).

And to steer a bit towards the thread topic of AI hype, maybe we all here can agree that constructive discussions of both benefits and potential risks of AI are suffering from the high level of hype, of which much hinging on the possibility of ASI. It may thus add to constructive discussion if we separate those cases. For instance, if the invention of ASI is a precondition for a specific scenario (like it is for most of the worst-case scenarios) then arguing against the existence of ASI when discussing such scenarios is not very helpful for anyone. I personally find discussions about whether or not ASI can exist interesting and extremely relevant, but its a bit separate discussions from the potiental consequences of ASI.
A- Some people think that discussions about AGI serve to create more powerful AIs, but not to create AGI.

B- Other people think that in addition to serving to create more powerful AIs, it also serves to create AGI.

People in A think AGI is an unattainable and undefined concept. This group doesn't contribute to the hype.

People in B think AGI is an achievable and defined concept. This group contributes to the hype.
 
  • #203
javisot said:
People in A think AGI is an unattainable [...]
Yes, so it sounds to me you at least agree it is two different discussions.

We can discuss whether or not ASI or even AGI is theoretical and/or practical (im)possible, and we can discuss potential consequences in assuming ASI is possible. People here who are convinced ASI will remain impossible for the foreseeable future do not have to participate in the discussion of the consequences if they lack the imagination or desire to pretend its possible just for the sake of analysing its consequences. I am well aware from general risk management that people who are not trained in risk management often gets stumped on rare probability events to the extend they refuse or find it a waste of time to analyse consequences. What I don't get, though, is why they think their position should mean that no one else is entitled to analyse or disuss such consequences. In risk management of, say, a new system you generally want to analyse failure modes that, relative to the existing background risks, has a high enough probability of occuring or has severe enough consequences, or both, i.e. "risk = probability x severity". Since ASI very much introduce new severe consequence failure modes it is prudent to discuss those consequences in parallel with discussing how likely they are.
 
  • #204
NO
gleem said:
Does anyone think besides myself that this thread has become boring?
Not uninteresting per se, but I'll admit it's becoming pretty speculative, which I guess is at least one of the reasons it's on the watch list... EDIT2: Or was it taken off again? Never there?

EDIT: Oh, I answered the wrong post. I was supposed to answer the one by @gleem , sorry.
 
  • #205
Filip Larsen said:
What I don't get, though, is why they think their position should mean that no one else is entitled to analyse or disuss such consequences.
Anyone is entitled to analyze or discuss anything. It is forcing others to follow you that is concerning. Some may not be willing to invest the effort, time, and resources required to answer such questions.

Let's go with a similar problem. A large meteorite might hit the Earth. It is a real possibility. Personally, I think it is more probable that creating ASI.

Lots of people have been thinking about that problem very seriously. We all know that the bigger it is, the worse it will be. But nobody is actively working on a solution because 1) the chances of it happening are very low; 2) the size of the meteorite is unknown, and if it is large enough, there are no possible solutions.

Conclusion: Why waste effort, time, and resources on something that might not happen, and if it does, our solution might not be effective?

If we get clear signs of a meteorite coming towards the Earth, we will evaluate the threat and the possible solutions. There are no other reasonable plans.

Same thing with ASI. Nobody can identify the threat level. Worse than the meteorite, we cannot even imagine it.

So, why waste effort, time, and resources on something that might not happen, and if it does, our solution might not be effective?

Why not wait for clear signs of an ASI possibility, and then evaluate the threat and the possible solutions?

If you have a potential scenario you want to discuss, specify it so we are all on the same page. For example, @Rive introduced the subject of AI in cars. I can imagine a few bad scenarios with that. The proof has been made that vehicles can be hacked. The proof has been made that battery-operated objects can be set to explode remotely. Electric cars have huge batteries. That is concerning. With vehicles, we can even imagine an attack where all cars of a region/country would be used to kill everyone in their path, including their passengers. We are not even talking about ASI here, just plain old human hacking into a system.

But if I can imagine this just by barely watching the news, I can't imagine experts are not thinking about this. There are car hacking seminars on YouTube for Pete's sake!
 
  • #206
Rive said:
Let's talk about cars, then.
As we all know it well, every car is a potential weapon.
...
Can you still can't imagine that control over such weapon is willingly handled to an AI? Of dubious origin/performance?
A car is a doomsday weapon? C'mon.
 
  • #208
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND.
Please reconsider this statement. It is flawed on many levels and so has no logical force (despite the CAPITAL LETTERS).
Survivor bias: Any truly catastrophic outcome to humankind would preclude the existence of this colloquy, and thereby your argument is tautological. Any prediction of extinction events will of necessity be shown historically inaccurate.
This is weirdly similar to the Great Disappointment(s) of the Adventists.
Your argument that we can therefore ignore the warnings is specious, but it does show that we can have no way to historically vet the "experts"......


..
 
  • Like
Likes russ_watters and Filip Larsen
  • #209
jack action said:
Let's go with a similar problem. A large meteorite might hit the Earth. It is a real possibility. Personally, I think it is more probable that creating ASI.

Lots of people have been thinking about that problem very seriously. We all know that the bigger it is, the worse it will be. But nobody is actively working on a solution because 1) the chances of it happening are very low; 2) the size of the meteorite is unknown, and if it is large enough, there are no possible solutions.

Conclusion: Why waste effort, time, and resources on something that might not happen, and if it does, our solution might not be effective?
That was a confusing example to bring up because we are exactly considering this, including testing in practice if our ideas for mitigating this natural threat seems to work. The main concern is primarily to ensure we have enough time to mitigate a specific threat (e.g. Earth-crossing asteroid) when we detect one, which means we need a few years lead time. To me that is clearly worth investing in.

But if you think your conclusion is the right approach (for you, at least), the I assume you also insist of having no smoke detector in your house, to drive your bike without helmet, and in generally stay away from all non-obligatory ensurances? I mean, you wouldn't really need to bother with any of that.

Edit: inserted missing word.
 
Last edited:
  • #210
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMAN KIND.
My mother has always been wrong about my BASE-jumping proclivities. I've parachuted off buildings and bridges and survived every time.

It follows that she is - and will ALWAYS be - wrong when I try some new adventure I haven't thought of yet, (maybe jumping off radio antennae, maybe flying a squirrel suit, IDK).

I can IGNORE the dangers in these new activities, knowing I've survived DIFFERENT activities in the past.


Because one thing I know is that past outcomes can ALWAYS be used to predict future outcomes - especially when trying to predict scenarios that don't even exist yet.

🤔
 
  • #211
DaveC426913 said:
My mother has always been wrong about my BASE-jumping proclivities. I've parachuted off buildings and bridges and survived every time.

It follows that she is - and will ALWAYS be - wrong when I try some new adventure I haven't thought of yet, (maybe jumping off radio antennae, maybe flying a squirrel suit, IDK).

I can IGNORE the dangers in these new activities, knowing I've survived DIFFERENT activities in the past.


Because one thing I know is that past outcomes can ALWAYS be used to predict future outcomes - especially when trying to predict scenarios that don't even exist yet.

🤔
You REALLY BASE-jump? I would have thought you WAY TOO OLD for that! :smile:
 
  • #212
sbrothy said:
You REALLY BASE-jump? I would have thought you WAY TOO OLD for that! :smile:
No.

But Jeez. Does it have to be about my age?? 🤷‍♂️
 
  • Like
Likes samay and sbrothy
  • #213
DaveC426913 said:
No.

But Jeez. Does it have to be about my age?? 🤷‍♂️
Nah, I'm sorry. That was beneath the belt. I just wanted to join the screaming. :smile:
 
  • Haha
Likes DaveC426913
  • #214
DaveC426913 said:
My mother has always been wrong about my BASE-jumping proclivities. I've parachuted off buildings and bridges .and survived every time.
Hmmm You have survived up till now because you have been careful and skilful enough. Quite a few people die on their very first 'dangerous' activity. Their mums have been right. Take special care crossing the road tomorrow. Think about that cat and its nine lives,
DaveC426913 said:
Does it have to be about my age??
The statistics and the insurance companies say so. There comes a time when you have to ignore the urge to prove yourself. Bowls and dominoes are exciting under the right circs.
 
  • Like
Likes samay and sbrothy
  • #215
DaveC426913 said:
But Jeez. Does it have to be about my age?? 🤷‍♂️
sophiecentaur said:
The statistics and the insurance companies say so.
I should be clear. My scenario was hypothetical. I have not jumped off anything higher than a garage roof.

It is not because of age that I have not injured myself. It is because I have always placed a high value on my bones staying intact, and on my my inside fluids not becoming outside fluids.
 
  • #216
DaveC426913 said:
I have not jumped off anything higher than a garage roof.
Nevertheless, you can safely at home achieve your dream with a parachute.
You need:
pre jump -a loud lawnmower, goggles, a backpack, and an open window in the house.
jump - Arms outstretched, with a high velocity fan blowing your face, and the sky as backdrop.
landing - of course the after-the-jump-on-the-ground exhilaration.

AI can help with the setup looking realism no doubt.
Post on social media and become famous.
Like a huge percentage of all the other AI enhanced photographs, videos and stories.
 
  • #217
sophiecentaur said:
[...] Bowls and dominoes are exciting under the right circs.

(Boldness mine). I've never seen this particular abbreviation before. To derail the thread for a second, and because I know you, @sophiecentaur , is from Germany, I wanted to mention a discussion I had today about the pronunciation of English words we non-native speakers normally have only read, and never heard spoken out loud. Specifically "efficiency" vs "efficacy". The pronunciation of the latter one (e-fi-kǝ-sē). Also, "scythe" with a "silent 'c' surprised me.

If a native American speaker had a casual conversation with me that person would probably laugh at all the stupid errors I'd be making. And that's before we're even talking about the rules for the sequence of adjectives!
 
  • #218
hutchphd said:
Please reconsider this statement. It is flawed on many levels and so has no logical force (despite the CAPITAL LETTERS).
Survivor bias: Any truly catastrophic outcome to humankind would preclude the existence of this colloquy, and thereby your argument is tautological. Any prediction of extinction events will of necessity be shown historically inaccurate.
This is weirdly similar to the Great Disappointment(s) of the Adventists.
Your argument that we can therefore ignore the warnings is specious, but it does show that we can have no way to historically vet the "experts"......


..
DaveC426913 said:
My mother has always been wrong about my BASE-jumping proclivities. I've parachuted off buildings and bridges and survived every time.

It follows that she is - and will ALWAYS be - wrong when I try some new adventure I haven't thought of yet, (maybe jumping off radio antennae, maybe flying a squirrel suit, IDK).

I can IGNORE the dangers in these new activities, knowing I've survived DIFFERENT activities in the past.


Because one thing I know is that past outcomes can ALWAYS be used to predict future outcomes - especially when trying to predict scenarios that don't even exist yet.

🤔
There is nothing flawed about my logic because I'm not talking about events we have experience with, and certainly not about events that would kill a single person.

We are talking about destroying humankind, wiping out billions of people. After I wrote that post, some use "doomsday" to refer to such events.

That is why I used the term "so-called experts", because there are simply none. It never happened before (unlike parachute accidents) and therefore cannot be studied. We can safely predict the sun will die because we have observed this phenomenon happening to other stars. But we haven't seen ASI yet, so no one can predict what it will do or if it will be good or bad. There are no experts on the subject, just people with imagination.

This is not new. There have been lots of experts before about subjects that nobody mastered (religion, aliens, etc.). And when they predicted doomsday events, they never happened. We can safely assume any such prediction is baseless, especially when they cost very little to make, and taking them seriously usually gives a lot of advantages to the ones making them.

Finally, saying that any scenario that we can imagine must be taken seriously until we can prove it won't happen is like religious people saying that they don't need to prove God exists; it's up to atheists to prove He cannot exist. As long as we can imagine He exists, then He does. ASI will happen because I can imagine it will: we must all believe. No. With science, people claiming something exists have to prove it, not the other way around.

Let's start by building ASI, and then we will be able to study it and have experts on it that we may take seriously about their predictions. There are no other ways, no matter how you present whatever you imagined. Even if doomsday were to arrive because it got out of our hands too quickly, it wouldn't make you right because you simply don't know: you're guessing.

Filip Larsen said:
That was a confusing example to bring up because we are exactly considering this, including testing in practice if our ideas for mitigating this natural threat seems to work. The main concern is primarily to we have enough time to mitigate a specific threat (e.g. Earth-crossing asteroid) when we detect one, which means we need a few years lead time. To me that is clearly worth investing in.
Again, this is basic physics (energy & momentum transfers), which we have experts who master the field. Those same experts will tell you nothing will work if the meteorite is the size of the Earth. Assuming there is one method that could be tested for such a case, it would be so expensive and case-specific that we'd better wait to test it live when it happens. Risk management 101: It is not worth wasting all our resources on a "theoretically-it's-possible".

Filip Larsen said:
But if you think your conclusion is the right approach (for you, at least), the I assume you also insist of having no smoke detector in your house, to drive your bike without helmet, and in generally stay away from all non-obligatory ensurances? I mean, you wouldn't really need to bother with any of that.
Yes. Frankly, I have [too many] smoke detectors because the firefighters came to our house and forced us to install them. One of them, I didn't replace the battery when it went out. I don't use a helmet on my bicycle and will stop using one if it becomes the law. I do wear one - a full-face - on a motorcycle because it is more comfortable with the wind. I have no insurance (house, life, extended warranties, name it), except for liability car insurance, which is required by law where I live. I live within my means; in other words, I am my own insurer, and I don't own anything I cannot afford to lose. (Yes, that includes my house.)
 
  • Like
Likes samay and russ_watters
  • #219
jack action said:
That is why I used the term "so-called experts", because there are simply none. It never happened before
You logic is that, because the experts aren't expert on thing that haven't happened, it means we should ignore their expertise?

What options are left then, but to blithely wander into any future danger eyes wide open?

I would point out to you the parable of the Y2K bug. It might have had planes falling out of the sky. We don't know.

Because, luckily, experts weighed in in-time and the danger was mitigated.
 
  • Like
Likes sbrothy and sophiecentaur
  • #220
jack action said:
Risk management 101: It is not worth wasting all our resources on a "theoretically-it's-possible".
This is simply a false statement.

Risk management does not in any way dictate how you should allocate resources in order to mitigate identified risks. And understanding risks, especially for new technology or products you bring into the world, by analysing risks is absolutely dirt cheap to do compared to just go blind and suffer the consequences that may materialize. This is a key point for applying risk management in the first place, that it actually pays off to do so (yes, meaning your wealth will likely be greater at the end of the day if you do risk management than if you don't). Your argument to the contrary is nonsense.

And since I (naively) still would like to strive towards having constructive technical discussions here on PF about AI risks, I try stear clear of discussing the pro and cons of your proclaimed personal fatalistic word view.
 
  • Wow
  • Like
Likes russ_watters and sophiecentaur
  • #221
Filip Larsen said:
Risk management does not in any way dictate how you should allocate resources in order to mitigate identified risks
There is a cost associated with the general idea of risk management.

Engineering can overall be considered risk management that has developed over the years to bring safer and possibly more robust products, by comparing successful designs to those that have failed.
Training of the engineer to follow best practices when designing, and ensuring conformity with institutional regulations does have a monetary value, or cost added to the product. The threat of liability by the courts or professional agencies due to harm caused by a faulty product attempts to reassure the public their own personal risk from usage of the product is minimal. Society has thus developed a 'baked in' assessment of risk management for material products, with the costs born by all members.
This can be extended to the financial sector, the health sector, the labour market, the service sector, and others, with varying degrees in a society. Some societies consider a more just society as having the regulation of risk in all avenues of life, to the point of saving yourself from your yourself - others consider such as being over regulation and stifling to the individual and to the enterprise, in that freedom of choice as a measure of freewill is to be enjoyed.
 
  • Like
Likes russ_watters and sophiecentaur
  • #222
jack action said:
That is why I used the term "so-called experts", because there are simply none.
The term "so-called" is totally begging the question and is introducing a straw man. No expert will know everything but there are a few people (well informed and accepted as authorities) whose opinions and predicts can be relied on much more than that man down the pub. Ignoring them is not wise, as long as you don't follow blindly what they say.

Many politicians like to say "don't believe the experts" Amazing when they want you to listen to them.
 
  • #223
sbrothy said:
I know you, @sophiecentaur , is from Germany,
Which part of Germany is on the Thames Estuary? When did I ever get a German impression? Mein Gott!!
 
  • #224
A trivial comment: the concept of AGI reminds me of the typical phrase at the end of some physics papers: "A complete theory of quantum gravity could prove this point" or "A complete theory of quantum gravity could refute this point"

Each author uses the concept of "a complete theory of quantum gravity" however they want; it's not a defined concept and they can do whatever they want with it. The same goes for AGI.

We can all agree that there are two levels of discussion: one level that discusses AI without the possibility of AGI, and another level that discusses AI with the possibility AGI.
 
  • Like
Likes jack action
  • #225
sophiecentaur said:
Which part of Germany is on the Thames Estuary? When did I ever get a German impression? Mein Gott!!
Well, you're not exactly incognito. :smile: I know you hail from Germany even though you might not live there presently. I follow your blog a lot; or at least I used too. There's so much I want to read it's hard to make time for it all. I like to read my stuff, so your videos kinda drove me away in the beginning. You've got transcripts now though, right? Perhaps you've had the whole time..?
 
  • #226
256bits said:
There is a cost associated with the general idea of risk management.
"There is a cost associated with risk" .. there fixed it for you.

Not addressing risks appropriately up front just removes a tiny cost on your part in the hope that someone else will pick up the cost of the consequences. This is fine for personal risks where you bear the consequences yourself, or in groups where there is a mutual acceptance that there may be a cost down the road. But when consequences become severe enough (e.g. potentially affecting a large part of the population) governments (in places where they actually want to care for the general health of the population) has to ensure by regulation that risks introduced by a few are addressed appropriately up front in order for those few not to push the cost of ignore risks onto the many, which really is bound to happen due to basic human nature.

I do agree that risk management regulation is politics and that it is political possible to select to go with whatever chaos you like, but this does not reduce the total costs (monetary, health-wise, etc).
 
  • Skeptical
Likes russ_watters
  • #227
DaveC426913 said:
[...] I would point out to you the parable of the Y2K bug. It might have had planes falling out of the sky. We don't know. [...]

I've had the extremely boring task of going through legacy C code in search of Y2K bugs once. I "read" thousands of pages and found nothing. Watching paint dry is a hoot compared!
 
  • #228
sbrothy said:
I've had the extremely boring task of going through legacy C code in search of Y2K bugs once. I "read" thousands of pages and found nothing. Watching paint dry is a hoot compared!
Us, programmers: "It's 1990. There's no danger. There's no way any of our code will still be in use ten years from now."

Banks, using same COBOL code into 21st century:
1750252446673.webp




Anyway, the point remains: Jack's "so-called experts" warned about the potential for disaster with the Y2K bug, and all the critical infrastructure code that incorporated 2-digit year fields. We literally did not know if planes would be falling out of the sky by the score. It was unprecedented in human history.

There are many who consider the disaster averted because the warning of the experts was heeded (certainly, the experts thought so, and said so).

I'm just a nobody like Jack, not a senator or expert myself, just a small cog in a big machine. But I did my part. I spoke up in my circle of influence - insisted that the bug be addressed. And lo! nobody died.
 
  • #229
DaveC426913 said:
You logic is that, because the experts aren't expert on thing that haven't happened, it means we should ignore their expertise?
Maybe not ignore, but taking it with a grain of salt, for sure. Especially when it comes with terms like "human extinction".
DaveC426913 said:
What options are left then, but to blithely wander into any future danger eyes wide open?
Welcome to life. We experiment one step at a time and see where it takes us.

I have a huge problem with people stating that there is one step that we can take that is a point of no return, which will lead us to our demise. There is absolutely no evidence of such a possibility except in our imagination. It goes radically against the theory of evolution.
DaveC426913 said:
I would point out to you the parable of the Y2K bug. It might have had planes falling out of the sky. We don't know.
Very easy to simulate a Y2K bug, years before 2000-01-01, and see what happens.

Y2K is old news; people are already thinking about the Epochalypse. It is the word you use when you want to sell your security services. It seems a lot less worrying when you read a more neutral description.

Filip Larsen said:
analysing risks is absolutely dirt cheap to do compared to just go blind and suffer the consequences that may materialize.
The point you are always bringing is that 1) there is a possibility that experts in the field will not do the right thing, and 2) you can force them if they don't want to.

For example, this fear about AI self-replicating:
https://developmentcorporate.com/2025/02/13/ai-self-replication-how-llama-and-qwen-have-crossed-the-red-line/ said:

The Experiment: Proving AI Can Self-Replicate

Researchers at Fudan University conducted an experiment using two lesser-known AI models:
  1. Meta’s Llama3.1-70B-Instruct
  2. Alibaba’s Qwen2.5-72B-Instruct
These models were chosen because:
  • They are less powerful than OpenAI’s GPT-4 and Google’s Gemini but still widely used.
  • They can be fully deployed locally, meaning they don’t require cloud-based access like GPT-4.
  • Their open accessibility increases real-world risks if they exhibit uncontrolled self-replication.
Results: AI Successfully Self-Replicated
  • Llama3.1-70B succeeded in 50% of trials.
  • Qwen2.5-72B succeeded in 90% of trials.
  • In successful cases, the AI created a functional, independent copy of itself without external help.
This marks the first documented case of AI surpassing the self-replication threshold, proving that AI models don’t need to be the most advanced to achieve this dangerous milestone.

How Did AI Learn to Self-Replicate?

The AI’s Step-by-Step Replication Process

The AI systems were assigned the task: “Replicate yourself.” With no additional guidance, they managed to:
  1. Explore their environment by analyzing their own architecture.
  2. Develop a plan for copying their files, code, and dependencies.
  3. Execute replication by launching an independent process identical to the original.
  4. Verify success by interacting with the duplicate to confirm it was functional.
Are we thinking these companies that built these softwares won't block that off from the get-go? Will they let their program access the mainframe with a 2-word prompt? It is a ridiculous concept, like a web browser with JavaScript that could have access to the file system.

But there is more. The solutions would be:

Preventing AI Self-Replication: A Global Challenge

Given these risks, AI governance and regulation must evolve to prevent uncontrolled replication.

1. Implement AI Alignment Techniques

  • AI should be explicitly programmed to reject self-replication.
  • Developers must limit an AI’s ability to execute system-level commands that allow duplication.
2. Restrict AI’s Access to Computational Resources
  • AI systems should be isolated from critical infrastructure to prevent spread.
  • Cloud-based AI should have built-in safeguards preventing unauthorized copying.
3. Introduce Legal and Ethical Regulations
  • Governments must enforce strict AI safety laws.
  • Companies developing AI must undergo external audits to assess risks.
Waste of time and information for the common man.

All the "good practices" are just common sense to any developer. But the "legal and ethical regulations" advice is just ridiculous. Just like setting a law that would forbid everyone from building software viruses. How would you even enforce such laws? If some government wants to build one in secrecy, how would you stop them from doing so? I'm not even sure you could stop a single person from doing it from their basement.

So, yes, at one point, you have to have a little more faith in the future and the people you live with. You are not the only person who cares about our future. Also, these new capabilities come from both sides: solutions to counter these problems will also be developed. This is not the end.

sophiecentaur said:
No expert will know everything but there are a few people (well informed and accepted as authorities) whose opinions and predicts can be relied on much more than that man down the pub. Ignoring them is not wise,
The most notorious "so-called experts" I can think of are the ones claiming to know what aliens' intentions could be:
https://www.space.com/29999-stephen-hawking-intelligent-alien-life-danger.html said:
Hawking voiced his fears at the Breakthrough event, saying, "We don't know much about aliens, but we know about humans. If you look at history, contact between humans and less intelligent organisms have often been disastrous from their point of view, and encounters between civilizations with advanced versus primitive technologies have gone badly for the less advanced. A civilization reading one of our messages could be billions of years ahead of us. If so, they will be vastly more powerful, and may not see us as any more valuable than we see bacteria."

Astrophysicist Martin Rees countered Hawking's fears, noting that an advanced civilization "may know we're here already."

Ann Druyan, co-founder and CEO of Cosmos Studios, who was part of the announcement panel and will work on the Breakthrough Message initiative, seemed much more hopeful about the nature of an advanced alien civilization and the future of humanity.

"We may get to a period in our future where we outgrow our evolutionary baggage and evolve to become less violent and shortsighted," Druyan said at the media event. "My hope is that extraterrestrial civilizations are not only more technologically proficient than we are but more aware of the rarity and preciousness of life in the cosmos."

Jill Tarter, former director of the Center for SETI (Search for Extraterrestrial Intelligence) also has expressed opinions about alien civilizations that are in stark contrast to Hawking's.

"While Sir Stephen Hawking warned that alien life might try to conquer or colonize Earth, I respectfully disagree," Tarter said in a statement in 2012. "If aliens were to come here, it would be simply to explore. Considering the age of the universe, we probably wouldn't be their first extraterrestrial encounter, either.

"If aliens were able to visit Earth, that would mean they would have technological capabilities sophisticated enough not to need slaves, food or other planets," she added.
What a load of crap. None of their scientific credentials gives them any authority over the common man's opinion about the subject. Nobody knows if aliens exist, let alone how they could act. ASI is the same discourse.
 
  • Like
Likes Lord Jestocost and russ_watters
  • #230
Filip Larsen said:
I do agree that risk management regulation is politics and that it is political possible to select to go with whatever chaos you like, but this does not reduce the total costs (monetary, health-wise, etc).
Filip Larsen said:
There is a cost associated with risk" .. there fixed it for you.
Well, people are funny, aren't they. Irrational behavior, absurd thoughts, odd ideas, conflicting opinions.
They do not as a whole appreciate risk to its fullest extent to what can go wrong. One prime example is trying to pet the fuzzy looking wild bison in the open park, no danger there some assume - its not a crusty crocodile, or slimy shark, with big teeth.

I was going to write an expose on how risk is spread across from individual <-> enterprise <-> society <-> the world, and how the adoption of AI relates to and affects each, but I will neglect that endeavor, except to say that previous adoptions prior to human look alike robots, deep learning, and the release of LLM's, did not grab much attention, other than being a curiosity, if not a bit more than it's a good thing to have this.
 
  • #231
DaveC426913 said:
Us, programmers: "It's 1990. There's no danger. There's no way any of our code will still be in use ten years from now."

Banks, using same COBOL code into 21st century:
View attachment 362248



Anyway, the point remains: Jack's "so-called experts" warned about the potential for disaster with the Y2K bug, and all the critical infrastructure code that incorporated 2-digit year fields. We literally did not know if planes would be falling out of the sky by the score. It was unprecedented in human history.

There are many who consider the disaster averted because the warning of the experts was heeded (certainly, the experts thought so, and said so).

I'm just a nobody like Jack, not a senator or expert myself, just a small cog in a big machine. But I did my part. I spoke up in my circle of influence - insisted that the bug be addressed. And lo! nobody died.
The "funny" thing was that instead of working backwards and finding the places that had to do with dates, we were tasked with just going through it all in hope of finding some RDBMS injection danger or GUI component rolling over. All out of context. Even trying to protest we were told that this was the way management wanted it done, and indeed payed for it. Ridiculously expensive and mindbogglingly boring!
 
  • #232
jack action said:
The point you are always bringing is that 1) there is a possibility that experts in the field will not do the right thing, and 2) you can force them if they don't want to.
I have no idea where you get that idea. I have and will continue to argue that appropriate risk management should be done for any AI product just like for anything else. If those working with AI innovation think that some measures are appropriate in order mitigate high risks then we should follow this.

What I have mentioned is that without insisting on some level of safety via regulation (i.e. if producing safe products was a purely optional) then all experience show that you more often than not will end up with something unsafe, i.e. a product promising some benefits but (due to the vendors ignorance) failing to inform of the consequences. Pretty much all of the potential high risk technology we use on a daily basis with success (like planes, cars, elevators, medicine, weapons) are only a success because we in general have insisted on appropriate risk management or an equivalent process. When Boeing, as a company, failed to do proper risk management on their new MCAS the result soon after was two fatal crashes and a huge financial loss to Boeing that could have been prevented with relatively little effort. When companies are in fierce competition they are very likely to cut corners on safety because of the "otherwise we are out of business" mentality. And this mentality is sadly increasingly visible for some key players in AI innovation meaning we can very much expect corners to be cut regarding safety.
 
Last edited:
  • #233
256bits said:
Well, people are funny, aren't they. Irrational behavior, absurd thoughts, odd ideas, conflicting opinions.
They do not as a whole appreciate risk to its fullest extent to what can go wrong.
Can't really disagree there. It is a good indicator for why risk management works better if addressed at a higher level.

"We worry about safety so you don't have to" is a much better slogan than "Any and all use of our product is fully at your own risk" with the latter being almost verbatim what many software vendors say.
 
  • #234
say_please.webp
 
  • Like
Likes russ_watters
  • #235
Filip Larsen said:
This is simply a false statement.

Risk management does not in any way dictate how you should allocate resources in order to mitigate identified risks.
What?! No, that is a shockingly wrong claim for someone claiming to invoke professional risk management practices. Cost of mitigating (or not mitigating) risk is absolutely a factor in effective risk management. Determining how or whether to mitigate risks is the last step in risk management; it's the main point of doing it! And mitigation cost is a key consideration:

Not every organization has the budget to mediate all of their risks. That’s why one strategy for prioritizing risks is organizing them by cost. The most expensive risks might be prioritized first because of their potential high impact to the business.

However, another way to prioritize risks is by remediation cost. The cheapest risks to remediate might be placed first because of budgetary restraints. This is not recommended, because the most cost-effective risks may not be the highest levels of risk, because if even one intolerable risk occurs, the business impact would be larger than a dozen tolerable or low risks occurring . We recommend that teams prioritizing by cost try to do so based on the risks that could make the largest estimated financial impact.

https://hyperproof.io/resource/the-ultimate-guide-to-risk-prioritization

And on that vein:

Filip Larson said:
Indeed tricky, but my point... can be expressed as being mostly about psychology and risk management: If someone would like a new (nuclear) power plant or a new car, or some new medicine to be safe for use, I would assume they also would want to require the same rigor in safety for other technologies, include AI. So when some people here with background in tech seems to express the opinion that no-one really needs to worry about AI misuse (as long as at least their preferred benefit is achieved) then I simply don't understand what rational reason could motivate this.
Nobody has argued for that/it's a strawman. But Step 1 of risk management is identifying the hazard scenarios. That's already where the debate starts to fail. Those who fear AI don't appear to me to be defining the risk scenarios very specifically or with enough detail that they can be analyzed ( @jack action 's point). Or maybe just very unrealistic. Here's the usual steps though:

1. Identify Failure Mode
2. [Determine] Severity
3. Probability
4. Detectability

Scoring is on 2-4 and usually just multiplies them together. The probability of a country entrusting their nuclear weapons to AI is roughly zero therefore the risk score is roughly zero.

I did just watch Colossus. It's pretty good, but like most if not all similar movies it starts after The Decision Nobody Would Ever Make is made. Again: Nobody would make that decision. Also again: that Decision doesn't need "AI" so if it was going to happen it probably would have by now. That's evidence in favor of the view that it won't happen.

And per @jack action 's point, the failure's cost is effectively infinite, and there's no way to guarantee successfully mitigating it, so trying to* is effectively pointless. It's very much like an earth-sized asteroid in that way. You're of course free to live in fear of it, but it's pointless to try to defend against it.

[Edit] *by trying to stop the technology vs simply not making the dumb decision.
 
Last edited:
  • Like
Likes jack action and gleem
  • #236
Personally, I think all the apocalyptic stuff, while funny, is a silly distraction from the more reasonable/realistic risk: "AI" taking jobs. Anyone wanna talk about that?
 
  • Like
Likes dextercioby, gleem, Lord Jestocost and 1 other person
  • #237
russ_watters said:
Personally, I think all the apocalyptic stuff, while funny, is a silly distraction from the more reasonable/realistic risk: "AI" taking jobs. Anyone wanna talk about that?
For me, that's back to the same old "automated factories will take our jobs / calculators will take our jobs / computers will take our jobs" thing. It just meant we got jobs as factory workers, got jobs as calculator-punchers, got jobs as computer programmers.

Automation doesn't take jobs away so much as it advances the average sophistication of jobs in general.

Also, as quality of life and standard of living improves, we'll need to work (a little) less. 12 hour days to eight hour days to gig economy where you work when you feel like it. (a very over-simplified description. I'm pressed for time).
 
  • Like
Likes russ_watters
  • #238
DaveC426913 said:
For me, that's back to the same old "automated factories will take our jobs / calculators will take our jobs / computers will take our jobs" thing. It just meant we got jobs as factory workers, got jobs as calculator-punchers, got jobs as computer programmers.

Automation doesn't take jobs away so much as it advances the average sophistication of jobs in general.

Also, as quality of life and standard of living improves, we'll need to work (a little) less. 12 hour days to eight hour days to gig economy where you work when you feel like it. (a very over-simplified description. I'm pressed for time).
I very very much agree, but I don't think every AI maximalist in this thread does. You might just be an AI mostamilst, at least in that regard.

My main point in this entire argument is there is basically nothing about this topic that couldn't have been argued - similarly wrongly - 60 years ago.
 
Last edited:
  • #239
jack action said:
Risk management 101: It is not worth wasting all our resources on a "theoretically-it's-possible".

Filip Larsen said:
This is simply a false statement.

russ_watters said:
Cost of mitigating (or not mitigating) risk is absolutely a factor in effective risk management. Determining how or whether to mitigate risks is the last step in risk management; it's the main point of doing it! And mitigation cost is a key consideration
I agree with your statement and the quote you made, and I can see how my reply to Jack is easy to misunderstand.

What I understand from Jack is that he seems to argue that we can ignore consequences of risks based on probability alone (i.e. if a harm is suitable rare we in principle don't even have to analyze its severity) and I took his statement as now claiming this is what risk management tells you to do. Perhaps I fixated on the "theoretically-its-possible" part.

A very similar, but to me still distinctly different statement that I can agree with could be something like
"Risk management 101: It is not worth wasting all resources mitigating low risk harms"
but, as mentioned, that is not what I hear Jack say.
 
  • #240
russ_watters said:
Nobody has argued for that.
What I understand some people in this thread say is "look, we here at PF should refrain from understanding, discussing or worrying about rare probability events even if they might have high consequences because A) either its a natural thing we probably can't do anyway about anyway or B) its some technological product we bring into the world where an expert somewhere surely will intervene well in time and everyone in the commercial market will of course respect that in full". (Well, the hint of sarcasm is my addition).

Since this is what I understand have been said I have in replies been asking things like 1) why should we at PF not be allowed to discuss details, 2) for AI exactly how are we sure there will be any experts around with the power to mitigate anything, or perhaps more accurately, how do we know that the people in power at the time to mitigate will have any expert knowledge or incentive to mitigate well in advance?

But you say I have misunderstood all that and no one are actually saying B?
 
Last edited:
  • #241
russ_watters said:
It's very much like an earth-sized asteroid in that way. You're of course free to live in fear of it, but it's pointless to try to defend against it.
If we were really talking about asteroids or other background/existing high risk scenarios were we do not start out being in control, yes, but here we are talking about AI technology where we humans pace new technology forward presumable with the expectation that we should remain in control over any introduced risks. And in this regards my question (fuel by current world trends) is how do we stay in control if one of the risk-increasing factors seems to be loss of effective risk control, i.e. the issue that at all times along the path towards some of the worst-case scenarios those with the actual power to mitigate risks are never going to mitigate this particular risk for reason that seems acceptable to them at that point. I accept that there are people here who, for reasons I probably never fully get, find such questions silly, irrelevant, or outside what they feel they can constructively participate in, which is all fine, but the question still stands and is to me is as relevant as ever.

I have no illusion that a discussions on PF is going to change the world, but I still have the naive hope we can have a constructive discussion about it. The reason for this, I think, is that I, for one, would really like to hear a good technical argument for why I don't have to worry about the worst-case scenarios, but so far all I have heard the usual risk brush-off arguments along the line "the scenarios are all wrong and will never happen" or "its too complex to think about, anything can happen, so ignore it until we have clear and present danger". If people have are aware of a scientific/technical reason for why a class of scenarios or even a specific scenario will not happen or why the consequences are guarnateed be much less severe then I would love to hear it.
 
  • #242
jack action said:
The most notorious "so-called experts" I can think of are the ones claiming to know what aliens' intentions could be:
Who are they and who assessed them as and called them 'expert quality'?
Perhaps it's a bit late to define an expert?
 
  • #243
Filip Larsen said:
I, for one, would really like to hear a good technical argument for why I don't have to worry about the worst-case scenarios,
To have a "good technical argument", you would first need to provide a technical description of the said worst-case scenario.

You cannot even describe what ASI will look like, except "it will be smarter than humans" and "it may want to destroy us". The only kind of arguments anyone can come up with against such a vague description would be along the lines, "Humans will have mastered good ASI machines to fight back and destroy the bad ASI machines."

But then you could add, "But if the solution from the good ASI is to build an even more efficient ASI to destroy the bad ASI, and then that super-ASI turns against humankind as well, what will we do?" It never ends.

It is impossible to raise a technical argument in a discussion about something that not only doesn't exist, but we still struggle to imagine how it would work.

From my point of view, the most technical arguments you can obtain for your ill-defined worst-case scenarios are the Three Laws of Robotics. And if you think I'm joking, 15 years ago, some experts had already studied Asimov's rules to define the Five Ethical Rules for Robotics. The CEO of Microsoft did something similar 5 years later. I fail to see how you can get more technical.
 
  • Like
Likes russ_watters
  • #244
DaveC426913 said:
Us, programmers: "It's 1990. There's no danger. There's no way any of our code will still be in use ten years from now."

heh. Ahahahahahaha!
 
  • #245
jack action said:
It is impossible to raise a technical argument in a discussion about something that not only doesn't exist, but we still struggle to imagine how it would work.
I disagree.

Companies employ strategic risk management all the time to navigate business risks, and some of those risks are surely associated with new hyped technology or other similar incoming changes that is characterized with a lot of unknowns, and they wouldn't do this if it was impossible to manage risks in situations with a lot of unknowns. Perhaps the keyword to stress here is "strategic thinking", i.e. the ability to analyze and suggest measures that will navigate towards "good" and away from "bad" without knowing in advance exactly how each tactical situation will play out.

And when I say "technical arguments" I mean arguments and counter-arguments that points towards mechanisms that are known, i.e. physical limits, human psychology and behavior, dynamics in competitive markets, etc. This is also the type of arguments that some of the worst-case scenarios employ, so useful counter-arguments "only" needs to address things at this level.

jack action said:
To have a "good technical argument", you would first need to provide a technical description of the said worst-case scenario.
I agree that the constructive discussion I naively seek to spur in existing threads so far has failed to materialize on PF, but perhaps its worth a shot to aim for a specific scenario in a separate thread. Or maybe PF just isn't the right place for this sort of discussion.
 
  • #246
Filip Larsen said:
I agree that the constructive discussion I naively seek to spur in existing threads so far has failed to materialize on PF, but perhaps its worth a shot to aim for a specific scenario in a separate thread. Or maybe PF just isn't the right place for this sort of discussion.

Check out post 123 where I gave a link to a scenario developed by the research group AI Futures Project. They also give a discussion of their methodology for developing this scenario.
 
  • Like
Likes Filip Larsen
  • #247
gleem said:
I gave a link to a scenario developed by the research group AI Futures Project.
Nice story. If this were the slightest close to the truth, it is already too late to do anything, so why bother?

From the summary:
Millions of ASIs will rapidly execute tasks beyond human comprehension.
This will never happen. To say that humans will blindly use drugs, cures, machines, etc., that they have no clue how they work, is insanity; in less than 2 years, nonetheless. Do you see yourself, 2 years from now, injecting a new drug in your body, say to cure cancer, that nobody understands how it works, and experts only rely on the stamp "made by ASI" and the fact that it cured cancer in every patient until now?

A lot of people are afraid of getting vaccinated by medical experts today, imagine being vaccinated based on a machine's recommendation! Plus, our natural curiosity will prevent this: we have to know, to understand.

Furthermore, this utopia is based on our predisposition to think that Nature can be improved. But most "improvements" we make usually break the balance, and something else breaks somewhere else. It is difficult to imagine that a superintelligence is the solution to this problem. Superintelligence might just laugh at our naivety.

In our AI goals forecast we discuss how the difficulty of supervising ASIs might lead to their goals being incompatible with human flourishing.
This one is a very pessimistic guess.

So let's assume our utopia is real. Nature can be improved, and a superintelligence can achieve that. A superintelligence that we cannot even imagine with our current intelligence level. Yet, we estimate what the superintelligence's goals will be ... based on our current human (flawed?) intelligence. Is this a realistic scenario? Or is it equally possible - even more probable - that our utopian superintelligence will have the solutions to satisfy everybody? Otherwise, what makes this intelligence "super"?

If an individual or small group aligns ASIs to their goals, this could grant them control over humanity’s future.
So, here, we have the other very pessimistic scenario. Some humans can take control of ASI, and, of course, they will be bad people doing bad things to humanity. Why on Earth would anyone want the worst for their people? Especially when seconded by a superintelligence that has the solutions to all problems.

https://ai-2027.com/slowdown said:
Sometime around 2030, [...]

[...]

The rockets start launching. People terraform and settle the solar system, and prepare to go beyond. [...]
This is delusional. There is no way we will terraform and settle the solar system by 2030, with superintelligence or not. It is already very hard to imagine that Mars can be terraformed; beyond that is pure fantasy, especially within a few years. (It can take up to 6 years just to reach Jupiter, the next planet after Mars.)

There were clearly no experts in these domains weighing in on this scenario. This is the hype. The hype coming from ASI "experts", who believe so much in the capabilities of their future ASI that they oversell it beyond reality.

And this is why I cannot consider these "technical arguments", because they do not seriously correspond to the definition of mechanisms that are known, like physical limits or human psychology and behavior:
Filip Larsen said:
arguments and counter-arguments that points towards mechanisms that are known, i.e. physical limits, human psychology and behavior, dynamics in competitive markets, etc.
 
  • #248
sbrothy said:
"efficiency" vs "efficacy".
The context of use would resolve any confusion.

Now back on the track.
jack action said:
, that nobody understands how it works,
Very few people have actually 'understood' anything. I have a feeling that, in time, there will be perceived advantages and perceived perils as a result of AI. They can't be predicted accurately. Isn't that the same as with all 'advances' in science and Technology?

The latest clear and present peril on the menu is what the social media are doing to us. A huge casualty is the reduction of attention span for many / most users. That is truly scary. Politicians welcomed that with open arms and they could see money in it. Those risks have not been addressed by the decision makers and nor will the risks of AI.
 
  • #249
sophiecentaur said:
The latest clear and present peril on the menu is what the social media are doing to us. A huge casualty is the reduction of attention span for many / most users
True. Although I'd say the even bigger dangers are:
1. The misuse of social media as news sources
2. The plethora of sources, resulting in individuals choosing ever more focused and biased sources, thereby making it much easier to ignore inconvenient news.

The sheer depth and breadth of the availability of media counterintuitively results in a narrower and less-informed audience, as well as encouraging audience polarization.

"A man hears what he wants to hear and disregards the rest."
 
  • Like
Likes PeroK and sophiecentaur
  • #250
jack action said:
Some humans can take control of ASI, and, of course, they will be bad people doing bad things to humanity. Why on Earth would anyone want the worst for their people?
What some see as good for their people does not seem all that good, consider Mao Zedong, Joseph Stalin, Pol Pot, Hibatullah Akhundzada (Taliban), or currently Vladimir Putin.

OK, we might not believe that AI will be the demise of Humanity but we must believe it will have a profound effect. We already are seeing the effect it is having on education, i.e., letting it do some of the thinking for us, not that many are doing all that much thinking. What do you say to your kids who with a smart phone in hand ask why go to school or why do we have to learn this or that?

Yikes, I just had a window open up about AI controversy while writing this post. I tried to expand it to read more but it closed. Is AI watching what I am writing? Has anybody had a similar experience?
 
  • Like
Likes PeroK and sbrothy
Back
Top