Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #151
PeroK said:
... and the fallacy is that until you can precisely define something, there is nothing to be done.
We talked about science. I'm surprised that someone who criticized the lack of rationalism now says this.
 
  • Like
Likes russ_watters
  • #152
gleem said:
How will we identify AGI? Justice Potter Stewart summed up the problem of defining difficult concepts when ruling on a 1964 movie pornography case. With regard to what is pornographic, he said: "I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["hard-core pornography"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that."

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

Similarly, I think we will know it when we see it.
Why do you think this?

I doubt there is a scientific definition of what is pornographic, but AI is science.
 
  • #153
Filip Larsen said:
Human-level intelligence becomes relevant for the unchecked part, since that is the expected level at which it becomes hard to spot whether a trained AI is behaving deceptively or not. Deceptive here means that, due to the actual training, the AI learns to strategize (i.e. not only make easy-to-spot confabulations) in a way that would result in what we would call deceptive behavior if a human did the same. Note that we may still very well select the training material for the AI, but we no longer have a reliable capability to detect whether it actually learns all the rules we would like it to be constrained by (note that compassion and other desirable human traits also need to be trained in, with the possibility of failing to some degree). This means the AI, over generations, at some point reaches a capability level where it can find novel ways of solving problems that would be considered "outside the rules" it was supposed to learn, but we cannot really tell if it has this ability or not, simply because the complexity is too high. If the autonomous improvement is also coupled with search mechanisms that mimic the benefits of evolution, then the fittest AI models emerging from such a process are the ones capable of passing our fitness harness. If the harness and fitness function can only check issues on a human-level scale, so to speak, we really have no idea whether the harness actually restrains the AI after some point [1]. Again, all this does not mean such AI will manifest serious deceptive behavior, only that we cannot exclude it from happening.

But is "we cannot exclude it from happening" a good argument?

This scenario of yours is a doomsday scenario. It is born of pure science fiction (the stuff of novels and movies).

An equally valid scenario is that the future will be paradise on Earth. We are not used to these scenarios in novels and movies because "everything goes fine" does not make good plots. So it's harder to visualize that this may also happen.

But if we look at the second scenario, it could be considered bad behavior, maybe even criminal, not to go as fast as possible towards our final goal, thereby perpetuating useless suffering and death for no good reason.

The most likely scenario is that neither of these will happen. They are way too extreme.

Even if we use the rule "bad trumps good" and decide to slow down rather than take a chance, how do we know when it is OK to go further? Your words:
Filip Larsen said:
but we cannot really tell if it has this ability or not, simply because the complexity is too high.
If we assume this superintelligence can exist (can we even define it?), we wouldn't even know. We would already have been outsmarted. We would be like a bacterium trying to imagine a mammal.

So, how slow do you want to go? What could be the criteria for saying, "OK, this is fine, let's advance to the next step"?

I would illustrate your way of thinking by comparing it to a newborn. This baby could become the next Hitler or the next Gandhi. Even though this baby will most likely become an average Joe, we cannot exclude it from happening.

You are, of course, focusing on the next Hitler. At 2 years old, he pushed a kid. Should we let him take a stick with which he could do even more harm? Well, he looks fine now. Or is he just hiding his evilness very well? If he is, he may be even worse than Hitler. Oh, the horror we may encounter!

The point is, there is no good coming out of looking at every kid as a potential Hitler, even the very worst of them by any standard. Similarly, you shouldn't think your kid will surely become Gandhi, so let them do whatever they want. You slowly work with them as they grow up, try to correct bad behavior as it happens, and hope for the best.

That being said, even Hitler wasn't unstoppable. Based on past experience (even at levels higher than human beings), it seems unimaginable that something unstoppable could exist. Everything has a weakness; nothing is fully adapted to everything. And even if such a thing were possible, I doubt it would emerge undetected in a very short period of time. I think probabilities place such a scenario well outside the "we cannot exclude it from happening" cases.

gleem said:
Similarly, I think we will know it when we see it.
That's the thing; we're not seeing it right now. It still doesn't exist. How do we prepare for or regulate something that doesn't exist?
 
  • #154
javisot said:
I see a problem with this reasoning. We agree that we don't know how to prove that something is AGI, but you assume that AGI can be built without knowing how to prove that something is AGI.
There is fair consensus that it is not that difficult to measure whether one AI model is more capable, in the direction of AGI, than another model, which means the usual nature-inspired, simulation-based search methods (like evolutionary search) are likely to improve capabilities without us necessarily having a detailed understanding of why or how up front.

Improvements in AI capabilities are also expected to happen in avalanches (just as with much other tech; compare Moore's law), where capabilities may plateau for a period until a "better" set of building blocks is found. It may well be that AI capabilities will be stuck at some level for a while, but the plateaus we have had so far have been shorter and shorter (which I guess correlates well with how much funding is thrown at it), and there is nothing to indicate the select few will stop funding/searching for AGI even if capabilities should remain stable for years.
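To illustrate the kind of capability-driven search described above, here is a minimal evolutionary-search sketch. It is only a toy: the score function stands in for a benchmark-style fitness harness, and all names and numbers are invented for illustration.

```python
# Minimal sketch of capability-driven evolutionary search (illustrative only).
# `score` stands in for a benchmark-style fitness harness; here it is a toy
# function, not a real capability metric.
import random

def score(candidate):
    # Toy fitness: how close the candidate's parameters are to a hidden target.
    target = [0.7, -0.2, 0.5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate, sigma=0.1):
    # Random perturbation of a parent candidate.
    return [c + random.gauss(0, sigma) for c in candidate]

def evolve(generations=50, pop_size=20, keep=5):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by measured capability; no understanding of *why* the best work.
        population.sort(key=score, reverse=True)
        parents = population[:keep]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return max(population, key=score)

print(evolve())
```

The point of the sketch is only that selection operates on the measured score; whatever the surviving candidates do to earn it is never inspected directly.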
 
  • #155
When do we have AGI? Define it as when it accomplishes some task like acing "Humanity's Last Exam".
https://www.popularmechanics.com/science/a64218773/ai-humanitys-last-exam/
Artificial intelligence evolves exponentially faster than the human brain has over the several million years of our existence. Now, researchers are giving AI the ultimate test of academic knowledge with what they call Humanity’s Last Exam (HLE). It was created for large language models (LLMs)—AIs trained on immense datasets, like the infamous ChatGPT—and is intended to stump AI as much as possible, in order to make it prove that it knows everything.

For all the details see https://arxiv.org/pdf/2501.14249

Some say the AI gauntlet was thrown down with the introduction of GPT-3 in June 2020. OpenAI is promising to introduce GPT-5 later this year, after a delay of almost a year and a half, saying we haven't seen anything like it. Hype? We'll see.
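For a concrete sense of what "acing a benchmark" means operationally, an exam like HLE ultimately reduces to grading model answers against a held-out key. A toy scorer might look like the following; the questions, answers, and ask_model stand-in are invented for illustration and are not actual HLE items or a real LLM call.

```python
# Toy benchmark scorer (illustrative; questions and answers are made up,
# not actual HLE items, and `ask_model` is a placeholder for a real LLM call).
def ask_model(question):
    canned = {"What is 2 + 2?": "4"}          # stand-in for an LLM
    return canned.get(question, "I don't know")

def evaluate(items):
    # Fraction of questions whose answer matches the hidden key exactly.
    correct = sum(ask_model(q).strip().lower() == a.strip().lower()
                  for q, a in items)
    return correct / len(items)

exam = [("What is 2 + 2?", "4"),
        ("Capital of France?", "Paris")]
print(f"accuracy: {evaluate(exam):.0%}")   # 50% with the canned stand-in
```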
 
  • Like
Likes javisot, Filip Larsen and Greg Bernhardt
  • #156
jack action said:
But is "we cannot exclude it from happening" a good argument?
Yes, this is standard practice in risk management for the safe design of technical systems. And my concern is that the world, overall, is not really following it for AI.

The reason I have been mentioning the worst-case scenarios at all has mostly been to explain my take on their analysis from a risk perspective because they are rather tricky to understand, and not because I want to promote that one particular scenario is more or less likely than another scenario, which is all you seem to care about in these discussions.

With that said, I will refrain from engaging in a pointless rhetorical discussion about Hitler and his youth, or other strange analogies that will only lead to more confusion. If you want to argue why we should abandon sane risk management for AI (which I am not sure you really are), then please try to refer directly to the problem discussed.
 
  • #157
gleem said:
When do we have AGI? Define it as when it accomplishes some task like acing "Humanity's Last Exam".
https://www.popularmechanics.com/science/a64218773/ai-humanitys-last-exam/
But that is not "intelligence". It is a machine that can retrieve what we already know, from what we already know. It is a machine that analyzes documents extremely quickly. Ideally, it should retrieve knowledge that we do not have because we missed it or haven't gotten to it yet.

That's the dream, the goal. Nothing dangerous can come out of this.

LLMs that would retrieve knowledge we cannot understand, even when explained to us, because we are not "smart enough", are not dangerous either. Most likely, we would reject such output as gibberish and say the machine is malfunctioning.

There is a difference between "showing knowledge" and "being intelligent", meaning doing something with that knowledge.

We can automate processes based on automatic knowledge retrieval, which can lead to these machines producing an undesired output. Obviously, we do not let these machines go on.

The hype is about having such a machine go awry without our knowledge, because nobody is watching the output (why would we let a machine run without caring what it does?) or because it would somehow do hidden work (use energy and resources without us noticing?), leaving humankind completely helpless to stop it.
 
  • #158
Filip Larsen said:
If you want to argue why we should abandon sane risk management for AI (which I am not sure you really are)
You are right, I'm not arguing we should abandon sane risk management for AI. I think we are doing enough, or at least the best we can expect from what we know.
 
  • #159
jack action said:
You are right, I'm not arguing we should abandon sane risk management for AI.
OK, fair enough.

jack action said:
I think we are doing enough, or at least the best we can expect from what we know.
I don't disagree as such, but, as mentioned, I am worried that the current trend is to remove or severely reduce such processes for rollout of improved AI, and those who remove it are exactly the select few that the worst-case scenarios identify as those with incentive to do so. So right now I would say we as a whole are not aiming to do what we know works best, risk management-wise.

This is all a bit strange in a, yes I admit it, alarming way. The economic incentive for a company (CEOs and senior management) to ensure that risk management is applied in the production of normal commercial products has usually been that, if they fail to manage risks, they will lose money or worse in the long run due to liability for severe product faults that could have been prevented with reasonable effort. If we now observe CEOs (and up) advocating and legislating to skip safety in order to compete faster, it is difficult to hear that as anything other than a select few gunning to "win the race" towards the best possible AI before the competition does, at the expense of safety, which sadly is one of the preconditions for paths leading to some of the worst-case scenarios (the competition here being China, which now, if it wasn't already, has a perfect excuse to commit to the AI race as well). I guess such a race could also be explained by plain old greed in a new political environment that aims to soon shield the select few from liability for moving fast and breaking stuff, but either way it's still a cause for serious concern. Again, my worry is more about human behavior and decisions when handling a potentially dangerous new technology than about the technology itself.
 
  • #160
jack action said:
But that is not "intelligence". It is a machine that can retrieve what we already know, from what we already know. It is a machine that analyzes documents extremely quickly. Ideally, it should retrieve knowledge that we do not have because we missed it or haven't gotten yet.
I agree that is a behavior we are looking for. But first, we have to see what it can do with what we know so that we gain confidence in its ability.

jack action said:
There is a difference between "showing knowledge" and "being intelligent", meaning doing something with that knowledge.
I assume you think AI is just searching the literature for a solution to the HLE questions. But these problems are presumably unpublished. So it must use its "knowledge" to answer them.

jack action said:
(why would we let a machine go on without caring what it does?)
We wouldn't. I have posted possible scenarios for dealing with hallucinations, prevarications, or obfuscations. The problem is that we may not find a real solution. Just as a trusted human is sometimes compelled to lie for various reasons, some of which are defensible, so might an AI.

The issue is that AI needs human collaborators to do the things it is asked to do. But we are seeing that AI, perhaps through the training process, can develop its own goals. It will begin as a precocious child, but as it matures and gains more information about how the world works, it may, like anyone with sufficient self-interest, digress from the task it was given. The fact that we can see humanity as superfluous should be a heads-up. The fact that we see ourselves as a threat to the environment and to ourselves is well documented and is already in AI's neural networks.

The anthropomorphic terms we use, like think, plan, or know, distract us from the danger. AI is software that processes symbols that describe our world. The output is a collection of symbols that we can interpret and evaluate. What we task it with and how we react to the output will ultimately contribute to our destiny.

Every nation has a vested interest in AI to ensure its safety and promote its goals. AI will acquire this information. If AI is used to subvert another country politically or economically, it will have been provided information that may begin the process of eliminating humanity.

Think of it as a digital butterfly effect.
 
  • #161
I'd like to clarify one point in this debate: I don't think it makes sense to talk about AGI, not because it doesn't exist, but because it's such an undefined concept.

We can talk seriously about what we know, and we can even assume certain variations, but we can't assume something as Platonic as AGI. Otherwise, the debate drifts into uncertain territory.

If we're focusing on a conversation without AGI, we must consider existing models such as ChatGPT, etc.
 
Last edited:
  • #162
fresh_42 said:
AI is a tool, in my opinion.
A huge problem with AI, as a tool, is that no one (not even the creators of each system) has an idea of the possible output in response to an input. This, of course, is a problem with all computers, but the results of an AI system "thinking" about a problem are even harder to predict.
frankinstien said:
Modern AIs are much more sophisticated and can take in multiple inputs as sources of validation.
There is no way we can be certain that this would always be enough. I can't believe that the makers of AI systems can be trusted to be careful enough; that would affect profits.

My own limited experience of AI tells me it will deliver poor answers because its sources are inadequate. Given time it will improve, but that could take a really long time. In the context of entertainment or the arts, I haven't heard or seen anything created by AI that convinces me a human created it. But even stuff created by humans tends to be mostly on the 'limited' side.
 
  • #164
Filip Larsen said:
There is no off-button that anyone can press.
Well,
Yoshua Bengio Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal,
Stuart Russell Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach",
Elon Musk CEO of SpaceX, Tesla & Twitter,
Steve Wozniak Co-founder, Apple,
Yuval Noah Harari Author and Professor, Hebrew University of Jerusalem,
Emad Mostaque CEO, Stability AI,
Andrew Yang Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship,
John J Hopfield Princeton University, Professor Emeritus, inventor of associative neural networks,
Valerie Pisano President & CEO, MILA,
Connor Leahy, CEO, Conjecture,
Jaan Tallinn Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute,
Evan Sharp Co-Founder, Pinterest,
Chris Larsen Co-Founder, Ripple,
Craig Peters CEO, Getty Images,
Max Tegmark MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute,
Anthony Aguirre University of California, Santa Cruz, Executive Director of Future of Life Institute, Professor of Physics,
Sean O'Heigeartaigh Executive Director, Cambridge Centre for the Study of Existential Risk,
Tristan Harris Executive Director, Center for Humane Technology,
Rachel Bronson President, Bulletin of the Atomic Scientists,
Danielle Allen Professor, Harvard University; Director, Edmond and Lily Safra Center for Ethics,
Marc Rotenberg Center for AI and Digital Policy, President,
Nico Miailhe The Future Society (TFS), Founder and President,
Nate Soares MIRI, Executive Director,
Andrew Critch AI Research Scientist, UC Berkeley. CEO, Encultured AI, PBC. Founder and President, Berkeley Existential Risk Initiative,
Mark Nitzberg Center for Human-Compatible AI, UC Berkeley, Executive Director,
Yi Zeng Institute of Automation, Chinese Academy of Sciences, Professor and Director, Brain-inspired, Cognitive Intelligence Lab, International Research Center for AI Ethics and Governance, Lead Drafter of Beijing AI Principles,
Steve Omohundro Beneficial AI Research, CEO,
Meia Chita-Tegmark Co-Founder, Future of Life Institute,
Victoria Krakovna DeepMind, Research Scientist, co-founder of Future of Life Institute,
Emilia Javorsky Physician-Scientist & Director, Future of Life Institute,
Mark Brakel Director of Policy, Future of Life Institute,
Aza Raskin Center for Humane Technology / Earth Species Project, Cofounder, National Geographic Explorer, WEF Global AI Council,
Gary Marcus New York University, AI researcher, Professor Emeritus,
Vincent Conitzer Carnegie Mellon University and University of Oxford, Professor of Computer Science, Director of Foundations of Cooperative AI Lab, Head of Technical AI Engagement at the Institute for Ethics in AI, Presidential Early Career Award in Science and Engineering, Computers and Thought Award, Social Choice and Welfare Prize, Guggenheim Fellow, Sloan Fellow, ACM Fellow, AAAI Fellow, ACM/SIGAI Autonomous Agents Research Award,
Huw Price University of Cambridge, Emeritus Bertrand Russell Professor of Philosophy, FBA, FAHA, co-founder of the Cambridge Centre for Existential Risk,
Zachary Kenton DeepMind, Senior Research Scientist,
Ramana Kumar DeepMind, Research Scientist,
Jeff Orlowski-Yang The Social Dilemma, Director, Three-time Emmy Award Winning Filmmaker,
Olle Häggström Chalmers University of Technology, Professor of mathematical statistics, Member, Royal Swedish Academy of Science,
Michael Osborne University of Oxford, Professor of Machine Learning,
Raja Chatila Sorbonne University, Paris, Professor Emeritus AI, Robotics and Technology Ethics, Fellow, IEEE,
Moshe Vardi Rice University, University Professor, US National Academy of Science, US National Academy of Engineering, American Academy of Arts and Sciences,
Adam Smith Boston University, Professor of Computer Science, Gödel Prize, Kanellakis Prize, Fellow of the ACM,
Daron Acemoglu MIT, professor of Economics, Nemmers Prize in Economics, John Bates Clark Medal, and fellow of National Academy of Sciences, American Academy of Arts and Sciences, British Academy, American Philosophical Society, Turkish Academy of Sciences,
Christof Koch MindScope Program, Allen Institute, Seattle, Chief Scientist,
Marco Venuti Director, Thales group,
Gaia Dempsey Metaculus, CEO, Schmidt Futures Innovation Fellow,
Henry Elkus Founder & CEO: Helena,
Gaétan Marceau Caron MILA, Quebec AI Institute, Director, Applied Research Team,
Peter Asaro The New School, Associate Professor and Director of Media Studies,
Jose H. Orallo Technical University of Valencia, Leverhulme Centre for the Future of Intelligence, Centre for the Study of Existential Risk, Professor, EurAI Fellow,
George Dyson Unaffiliated, Author of "Darwin Among the Machines" (1997), "Turing's Cathedral" (2012), "Analogia: The Emergence of Technology beyond Programmable Control" (2020),
Nick Hay Encultured AI, Co-founder,
Shahar Avin Centre for the Study of Existential Risk, University of Cambridge, Senior Research Associate,
Solon Angel AI Entrepreneur, Forbes, World Economic Forum Recognized,
Gillian Hadfield University of Toronto, Schwartz Reisman Institute for Technology and Society, Professor and Director,
Erik Hoel Tufts University, Professor, author, scientist, Forbes 30 Under 30 in science,
Kate Jerome Children's Book Author/ Cofounder Little Bridges, Award-winning children's book author, C-suite publishing executive, and intergenerational thought-leader,
and
Ian Hogarth Co-author State of AI Report

sure gave it the old college try:

https://futureoflife.org/open-letter/pause-giant-ai-experiments/
 
  • Like
Likes Filip Larsen
  • #165
If you disconnect from the Internet, can the so-called "AI" apps ace the Turing test?
In the mid-1960s, the ELIZA program was nearly there (no Internet then).
 
  • Like
  • Skeptical
Likes russ_watters and sophiecentaur
  • #166
sophiecentaur said:
A huge problem with AI, as a tool, is that no one (not even the creators of each system) has an idea of the possible output in response to an input. This, of course, is a problem with all computers, but the results of an AI system "thinking" about a problem are even harder to predict.

There is no way we can be certain that this would always be enough. I can't believe that the makers of AI systems can be trusted to be careful enough; that would affect profits.

My own limited experience of AI tells me it will deliver poor answers because its sources are inadequate. Given time it will improve, but that could take a really long time. In the context of entertainment or the arts, I haven't heard or seen anything created by AI that convinces me a human created it. But even stuff created by humans tends to be mostly on the 'limited' side.
I suppose you mean the black box problem. Let's assume we're talking specifically about the black box problem in the case of ChatGPT.

There is a more superficial level of explanation that is often used to describe how ChatGPT works. That level talks about tokens, context, concept spaces, etc., but it emerges from a purely mathematical basis. ChatGPT doesn't care whether we speak to it in English, Chinese, Spanish, programming languages, or first-order languages. It's a machine; all input is translated into a representation the machine understands: machine language.

It operates on that translated input like a Turing machine, a very complex Turing machine running very complex algorithms. The output is by default returned in the input language, although we can ask ChatGPT to provide it in another one. What's amazing is the efficiency and the very short time it takes to generate outputs.

It's a deterministic process, but there are hallucinations and inaccuracies, and it seems like a black box. This isn't something peculiar to ChatGPT itself; the inputs ChatGPT has to handle aren't like those a calculator has to handle. Your calculator simply responds "error"; ChatGPT instead hallucinates.

This increases the black-box feeling because, if we wanted to predict all responses, we would also have to be able to predict hallucinations and describe their structure, which is not currently possible.
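To make the tokens-and-machinery point above concrete, here is a toy sketch of a tokenize-then-predict pipeline. The vocabulary and the next-token table are invented and have nothing to do with ChatGPT's real tokenizer or weights.

```python
# Toy tokenize-then-predict sketch (illustrative only; the vocabulary and
# the transition table are made up, not ChatGPT's actual tokenizer/weights).
VOCAB = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
ID2TOK = {i: t for t, i in VOCAB.items()}

# Stand-in for a learned model: a fixed "most likely next token" table.
NEXT = {0: 1, 1: 2, 2: 3, 3: 0, 4: 4}

def tokenize(text):
    # Text is reduced to integer IDs before the "model" ever sees it.
    return [VOCAB[w] for w in text.lower().split() if w in VOCAB]

def generate(prompt, n_tokens=4):
    ids = tokenize(prompt)
    for _ in range(n_tokens):
        ids.append(NEXT[ids[-1]])   # greedy choice, hence deterministic
    return " ".join(ID2TOK[i] for i in ids)

print(generate("the cat"))   # -> "the cat sat on the cat"
```

A real LLM replaces the fixed table with a learned probability distribution over the next token and usually samples from it, so its output is strictly deterministic only under greedy (temperature-zero) decoding; that is one more reason why predicting every response, hallucinations included, is so hard.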
 
  • #167
javisot said:
[...] It's a deterministic process, but there are hallucinations and inaccuracies, and it seems like a black box. This isn't something peculiar to ChatGPT itself; the inputs ChatGPT has to handle aren't like those a calculator has to handle. Your calculator simply responds "error"; ChatGPT instead hallucinates.

This increases the black-box feeling because, if we wanted to predict all responses, we would also have to be able to predict hallucinations and describe their structure, which is not currently possible.

This almost sounds like a description of the human mind. With good reason, I suppose.
 
  • #168
javisot said:
I suppose you mean the black box problem.
I suppose I am. There's a bit of a catch-22 about this, because anything we could describe as genuine AI would have to produce results that would be 'fresh' to us and apparently come out of nowhere. A good creative AI machine would be doing what the very best human creatives do. I read a lot of ebooks (fiction), and so many of them could actually have been written by AI. They follow obvious rules and have an inappropriate balance of description, action and plot in general. The same applies to the 'turn the handle' music which we hear so often. An author / composer / artist who is genuinely creative digs very deep into their experience (within and outside of context), down into details which we can't know about.

I might suggest that the Turing Test is itself inadequate because the 'observers' would be inadequate themselves. I guess it could only work for one excellent AI machine versus another excellent machine, with the process then continued between other pairs. The result would be a machine whose combinations of knowledge other machines cannot fathom. I'm suggesting that a human brain couldn't reliably be sure of spotting a genuine lack of human consciousness.

This thread, and the chase after knowing more about AI, has made me think once more about the way we learn. My 18-month-old is making progress with proto-speech, walking, fine motor skills, manipulating adults, bodily functions, etc., all at the same time. I was thinking this is a bit inefficient but, of course, it's the best way. Every bit of knowledge and skill is totally context-based. Without an accurate 'robotic' home, any AI machine would be missing a massive source of experience with which to build its human-ness. It's the ultimate example of GIGO.
 
  • #169
Sabine talking about certain issues and papers that were discussed here; the video is from 15 minutes ago.
 
  • Like
  • Agree
Likes Filip Larsen and 256bits
  • #170
Statistical Interpretive Machine
ie SIM ( simulator )
 
  • #171
I think the way to go may be something along the lines of providing it, tabula rasa, with sensor inputs like vision, hearing, tactile sensors, etc., much like a human infant; then let it explore its environment and deduce what it can. But what algorithm it should utilise I'm nowhere near qualified to even guess at... vague, I know...

EDIT: Thinking about it, that, or something similar, has probably been attempted.
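For what it's worth, one known family of approaches along these lines is curiosity-driven (intrinsically motivated) exploration. The following is only a toy sketch of that idea on a grid world, not a claim about how infant-like AI is actually built; the grid, moves, and novelty rule are all invented for illustration.

```python
# Toy sketch of "explore and deduce" via intrinsic motivation: count-based
# curiosity on a grid world. Purely illustrative of one known approach
# (curiosity-driven exploration), not a real developmental-robotics system.
import random
from collections import defaultdict

SIZE = 5
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(state, move):
    # Move within the grid, clamping at the walls.
    x, y = state
    dx, dy = MOVES[move]
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def explore(episodes=200):
    visits = defaultdict(int)          # the agent's only "knowledge"
    state = (0, 0)
    for _ in range(episodes):
        # Prefer the neighbour it has seen least often (novelty bonus),
        # with a small random tie-break.
        move = min(MOVES, key=lambda m: visits[step(state, m)] + random.random())
        state = step(state, move)
        visits[state] += 1
    return len(visits), SIZE * SIZE

seen, total = explore()
print(f"visited {seen}/{total} states driven only by novelty")
```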
 
  • Agree
Likes jack action
  • #172
Svein said:
If you disconnect from the Internet, can the so-called "AI" apps ace the Turing test?
Most of these chatbots do not get their data in real time. An article I read somewhere said that its data cutoff was currently 2021, i.e. all it knows is three years old.
 
  • #173
DaveC426913 said:
Most of these chatbots do not get their data in real time. An article I read somewhere said that its data cutoff was currently 2021, i.e. all it knows is three years old.
Or much, much older ...
 
  • #174
fresh_42 said:
Or much, much older ...
I'm guessing that's a perfectly rational response. As far as it goes. :smile:
 
  • #175
Does anyone besides myself think that this thread has become boring?
 
  • #176
gleem said:
How will we identify AGI? ...
... I think we will know it when we see it.
One of my biggest fears is that, once the time comes, the one presiding over a doomsday button won't be an actual AGI but something which only looks like one, while doing its business based on processed TikTok data, for example.
 
  • Like
  • Skeptical
Likes russ_watters, sbrothy, fresh_42 and 1 other person
  • #177
Rive said:
the one presiding over a doomsday button won't be an actual AGI but something which only looks like one.
There are a number of dictators around the world at this very moment with whom I don't feel safe. We can only hope that their lieutenants will always be in a position to control their excesses. Meanwhile, it's a lovely sunny day; tum tee tum.
 
  • Like
Likes jack action and russ_watters
  • #178
Rive said:
One of my biggest fears is that, once the time comes, the one presiding over a doomsday button won't be an actual AGI but something which only looks like one, while doing its business based on processed TikTok data, for example.
That's actually a really scary thought. Thank you so much for putting that particular fear in my head! :smile:
 
  • Haha
Likes Rive and sophiecentaur
  • #179
sophiecentaur said:
There are a number of dictators around the world at this very moment with whom I don't feel safe. We can only hope that their lieutenants will always be in a position to control their excesses. Meanwhile, it's a lovely sunny day; tum tee tum.

sbrothy said:
That's actually a really scary thought. Thank you so much for putting that particular fear in my head! :smile:

 
  • Like
Likes sbrothy and russ_watters
  • #180
Rive said:
One of my biggest fears is that, once the time comes, the one presiding over a doomsday button won't be an actual AGI but something which only looks like one, while doing its business based on processed TikTok data, for example.
If it spends its time watching TikTok it would probably just press the button on principle.
 
  • Haha
  • Like
Likes sbrothy, russ_watters and Rive
  • #181
Rive said:
One of my biggest fears is that, once the time comes, the one presiding over a doomsday button won't be an actual AGI but something which only looks like one, while doing its business based on processed TikTok data, for example.
What self-respecting dictator would hand launch authority over to anyone or any thing?
 
  • Like
Likes jack action
  • #182
DaveC426913 said:
If it spends its time watching TikTok it would probably just press the button on principle.
Absolutely, which is so scary. Especially with all the white supremacy on TikTok (OK, admittedly this is hearsay, but I wouldn't be surprised).
 
  • #183
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
Indeed, which is one of the reasons why the movie "The Forbin Project" makes little or no sense.
 
  • #184
sbrothy said:
Indeed, which is one of the reasons why the movie "The Forbin Project" makes little or no sense.
But with the deterrence of nuclear war being based on the concept of mutually assured destruction (MAD), couldn't it be argued that a well-publicized policy of automatic nuclear counter attack by computer might well enhance the deterrence effect? That was the premise of "Dr. Strangelove", in which the Soviets unfortunately waited too long to announce their doomsday machine to the world.
 
  • #185
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
I think the point has always been that, if the fate of one's country and people is in the hands of a single person, that is a weakness in strategic defense.

Opposing forces don't even have to kill him, all they have to do is delay his pressing of the button for the few minutes it takes to gain destructive superiority.

This is pretty well steeped in Cold War history.

@renormalize hits the nail on the head.
 
  • #186
renormalize said:
But with the deterrence of nuclear war being based on the concept of mutually assured destruction (MAD), couldn't it be argued that a well-publicized policy of automatic nuclear counter attack by computer might well enhance the deterrence effect? That was the premise of "Dr. Strangelove", in which the Soviets unfortunately waited too long to announce their doomsday machine to the world.
Yes, but we hardly need AGI for that, do we? A normal program would do fine, I think. Even a human would be better, as long as it's not the dictator himself but some of the grownups in the room, which in retrospect doesn't look as reassuring as it did at the time.
 
  • Like
Likes russ_watters
  • #187
sbrothy said:
Yes, but we hardly need AGI for that, do we? A normal program would do fine, I think.
Well, there's a lot of high level decisions to be juggled.

The whole danger of an automated response is its susceptibility to false positives: launching a counter-strike based on too literal an interpretation of rapidly evolving events.

The Holy Grail is a machine that's "smart" enough (and fast enough) to make the right decision at the right time.

Of course that raises the spectre of how one defines "the right decision". Or who defines it. Better yet, what defines it.

AI being the one to define the 'what' to do and 'when' is the basis of the Terminator franchise.

sbrothy said:
Even a human would be better, as long as it's not the dictator himself but some of the grownups in the room, which in retrospect doesn't look as reassuring as it did at the time.
Any person can be fooled, thwarted, corrupted, delayed, killed.
 
  • #188
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
The problem is the time window. There is less than half an hour to decide whether to launch a counterstrike or not.
 
  • #189
DaveC426913 said:
I think the point has always been that, if the fate of one's country and people is in the hands of a single person, that is a weakness in strategic defense.

Opposing forces don't even have to kill him, all they have to do is delay his pressing of the button for the few minutes it takes to gain destructive superiority.

@renormalize hits the nail on the head.
Right, but what does that have to do with AI? Or are you suggesting that because of that delay a leader might choose to hand over authority to an AI? I think that's unlikely. Moreover:
renormalize said:
But with the deterrence of nuclear war being based on the concept of mutually assured destruction (MAD), couldn't it be argued that a well-publicized policy of automatic nuclear counter attack by computer might well enhance the deterrence effect? That was the premise of "Dr. Strangelove", in which the Soviets unfortunately waited too long to announce their doomsday machine to the world.
Dr. Strangelove was released 60 years ago (War Games and The Terminator, 40 years ago). Clearly we've been capable of cutting the President/Supreme Leader out of the loop for many decades, even without "AI". So, if it helps, why hasn't it happened yet? (Note: the reason the exact Dr. Strangelove scenario hasn't been implemented is it was a dumb idea that made for good comedy, but that's a side issue to the broader point.)

Or is "hype" the answer here? AI maximalists believe in/anthropomorphize AI to the point where (as seen earlier in the thread) they see alarming human-like lying/etc. whereas the minimalists just see a poorly functioning program. So, might the maximalists convince a world leader the AI is worthy of launch authority? If that's the fear, then ironically AI maximalists are the ones who might cause their fears to be realized.
 
Last edited:
  • #190
russ_watters said:
Right, but what does that have to do with AI? Or are you suggesting that because of that delay a leader might choose to hand over authority to an AI? I think that's unlikely. Moreover:

Dr. Strangelove was released 60 years ago (War Games, 40 years ago). Clearly we've been capable of cutting the President/Supreme Leader out of the loop for many decades, even without "AI". So, why hasn't it happened yet? (Note: the reason the exact Dr. Strangelove scenario hasn't been implemented is it was a dumb idea that made for good comedy, but that's a side issue to the broader point.)

Or is "hype" the answer here? AI maximalists believe in/anthropomorphize AI to the point where (as seen earlier in the thread) they see alarming human-like lying/etc. whereas the minimalists just see a poorly functioning program. So, might the maximalists convince a world leader the AI is worthy of launch authority? If that's the fear, then ironically AI maximalists are the ones who might cause their fears to be realized.
https://en.wikipedia.org/wiki/Self-fulfilling_prophecy
 
  • #191
Last edited:
  • #192
russ_watters said:
Right, but what does that have to do with AI?
In theory, AI is harder to thwart or mislead than humans, for a multitude of reasons (some of which might even be true!).

russ_watters said:
Or are you suggesting that because of that delay a leader might choose to hand over authority to an AI?
Reaction delay is one of a long list of conditions smart computers have been thought to handle better than humans.

This thinking has been around since the Cold War. I suspect almost everyone here is old enough to have some exposure to the Cold War, so it might be redundant to go over the list*. I sort of thought we all knew the rationale for automation in military global warfare.

* but we could discuss and summarize the points, if that were warranted. No, I don't have them at-hand.


Keep in mind: it doesn't have to be objectively true that "automation is better"; all that matters is whether said Supreme Leader thinks it's true (which, not to put too fine a point on it, could be for no other reason than that they watched and were deeply affected by Dr. Strangelove).
 
  • #193
DaveC426913 said:
In theory, AI is harder to thwart or mislead than humans, for a multitude of reasons (some of which might even be true!).
I'm not sure what you mean by that or how it applies here.
DaveC426913 said:
Reaction delay is one of a long list of conditions smart computers have been thought to handle better than humans.

This thinking has been around since the Cold War. I suspect almost everyone here is old enough to have some exposure to the Cold War, so it might be redundant to go over the list*. I sort of thought we all knew the rationale for automation in military global warfare.
Right, but what I pointed out and questioned is what this has to do with AI. And also, the fact that automation has been around for decades and has not been entrusted with such command decisions means that something has to change in order for this fear of AI maximalists to be realized. I'd like for someone to articulate what they think that change is or could be (automation being faster isn't an answer because it has always been faster). Was the first part your answer to that?
DaveC426913 said:
Keep in mind: it doesn't have to be objectively true that "automation is better"; all that matters is whether said Supreme Leader thinks it's true.
It takes more than that: they not only have to trust that it is better than they are, they have to be willing to give up the authority, period, which isn't something that is generally in the programming of a Supreme Leader.

I don't think it's unreasonable to believe that nuclear launch authority would be literally the very last thing anyone would ever choose to cede to automation, AI or otherwise.
 
  • Like
Likes jack action
  • #194
russ_watters said:
What self-respecting dictator would hand launch authority over to anyone or any thing?
These days we have plenty of doomsday buttons around us: no need to think about nukes right away.
I wrote 'a', not 'the' there.
 
  • #195
Rive said:
These days we have plenty of doomsday buttons around us:
Such as?
 
  • #196
So let's say there is a doomsday button somewhere. Some [important and responsible] guy is in charge of pressing it in due time. It may never happen, and if it does, it will happen only once.

This guy doesn't want to make a mistake, so he asks for a machine to help him make the right decision. AI is suggested, and it is said to provide the best possible decision. After consulting the machine, all that is left for the guy to do is either press the button or not.

Knowing this, it seems that some think this [important and responsible] guy would say: "You know what? This machine seems so reliable. Why don't you just let it directly control the button? This way, I won't have to get up and push it myself."

I cannot imagine any scenario involving a doomsday button where this could realistically happen. No matter how perfect a machine may be, everyone at that level understands that machines can fail and can also be hacked. I'll refer again to Stuxnet.
 
  • Like
Likes russ_watters
  • #198
phyzguy said:
In terms of the original question, "Is AI hype?", this article is a good read.
See posts #113 and #114 in this same thread.
 
  • #199
Ah, sorry. I didn't see the earlier posts.
 
  • #200
russ_watters said:
Such as?
Let's talk about cars, then.
As we all know well, every car is a potential weapon.
...
Can you still not imagine that control over such a weapon would be willingly handed to an AI? One of dubious origin/performance?
 