Current and Future AI Threats to Humanity and the Human Response

In summary: experts are debating the growing dangers of AI and what to do about them. AI models that generate images from text prompts are getting ever closer to producing output indistinguishable from real photos, and deepfakes are beginning to have real-world impact.
  • #1
Jarvis323
There has been an explosion of discussion among experts about the evolving dangers of AI and what to do about them. This debate has gone public, largely due to the recent success of generative AI and its rapid pace of improvement. One example is AI models that generate images from text prompts. About a year ago, some PF members were having fun with a version of DALL·E.

[attached image: AI-generated spider]


https://www.physicsforums.com/threa...e-rain-and-other-ai-generated-images.1016247/

One year later, generative AI can create photorealistic images that are getting close to indistinguishable from real photos, such as the one below generated with Midjourney.

[attached image: photorealistic example generated with Midjourney]


It is predicted that photorealistic AI-generated films based on text prompts are a year or so away. AI-generated art is also getting very impressive. A POV shot of getting slapped by Will Smith at the Oscars could have been generated in the style of Picasso, or in pretty much any other notable style the model had been trained on. It's like going from Pong on the Atari to Gran Turismo on the PlayStation 5 in two years.

Generative AI also enables deepfakes and voice cloning. This has begun to have real-world impact, as exemplified by deepfakes used for propaganda in the Ukraine war and voice cloning used in extortion schemes.

“I pick up the phone, and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” the petrified parent described. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”
...
All the while, she could hear her daughter in the background pleading, “‘Help me, Mom. Please help me. Help me,’ and bawling.”
...
“I never doubted for one second it was her,” distraught mother Jennifer DeStefano told WKYT while recalling the bone-chilling incident. “That’s the freaky part that really got me to my core.”

https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/

The same technology can be used to impersonate people, or generate completely new fake people, for all kinds of purposes.

Simultaneously, generative text models are improving at a fast pace, as ChatGPT has famously brought to public awareness. People have warned that they pose various threats, from plagiarism, to sophisticated targeted scams, to troll farms and propaganda, to replacing human labor and risking economic instability and joblessness. A few months ago here on PF, people discussed how diagrams and plots would need to play a larger role in courses because AI (or ChatGPT) can't see and interpret them. Maybe some of these people assumed this would remain the case; in any event, GPT-4 can do that now. Multi-modal models, which can read, see, speak, and generate images seamlessly, are already here. The kinds of models we are building have few limitations on the input-output distributions they can be used for, or on their combinations.

Not long ago, in another thread, people were arguing that AI is limited by the cleverness of the programmer. That idea has not been true for years, as modern AI is based on self-learning rather than programming. The neural networks enabling these complex capabilities are structurally simple, and they learn enormously complex behavior on their own from information in datasets, through extremely high-dimensional gradient descent and backpropagation. It was once an open question whether gradient descent could succeed, or how far it could take us. It is now obvious to the AI community that it works.
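To make the "self-learning" point concrete, here is a toy sketch (not any production system): plain gradient descent fitting a two-parameter model to data. The relationship y = 3x + 1 is never programmed in; the parameters are recovered from the dataset alone. Real networks do the same thing with billions of parameters and backpropagation.

```python
# Toy sketch of self-learning: fit y = w*x + b by gradient descent.
# The target relationship (y = 3x + 1) is never coded in; the
# parameters are learned purely from examples in the dataset.

def train(data, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # gradients of mean squared error with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        db = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * dw  # step downhill
        b -= lr * db
    return w, b

# the "dataset": points sampled from y = 3x + 1
data = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = train(data)  # w ends up near 3 and b near 1, learned rather than programmed
```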

On the plagiarism front, watermarking and heuristics for detecting AI-generated content are being developed, but they aren't perfect and can be bypassed with some effort. GPTZero has already been put into practice at universities to detect AI-generated essays. It is claimed to have a false positive rate of less than 2%, which means up to 2 out of every 100 honest students could be falsely accused of cheating by default, and these would tend to be people with a particular writing style. One such false accusation has already occurred; fortunately, the student's incremental progress was recorded by Google Docs.
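To put that rate in perspective, here is a quick back-of-envelope calculation. It assumes each honest essay is screened independently at a flat 2% false positive rate, which is a simplification (in reality the risk concentrates on students with certain writing styles):

```python
# Back-of-envelope: consequences of a 2% false positive rate,
# assuming each honest essay is screened independently.
fpr = 0.02
honest_students = 1000

# expected number of honest students flagged on a single assignment
expected_flagged = fpr * honest_students  # about 20 students

# chance that a class of 30 honest students sees at least one
# false accusation on a single assignment
p_class = 1 - (1 - fpr) ** 30  # roughly 45%
```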

Quarterman denied he had any help from AI but was asked to speak with the university's honor court in an experience he said caused him to have "full-blown panic attacks." He eventually was cleared of the accusation.

https://www.usatoday.com/story/news...ned-false-cheating-case-uc-davis/11600777002/

To avoid false accusation, students will use these tools themselves. Cheaters will learn to fool them. People who don't cheat but have a certain writing style will need to be careful, and potentially modify their work to avoid false positives. How this will play out as generative AI gets better and more flexible is nearly impossible to predict.

AI safety (or alignment) researchers have long warned about the dangers to humanity of smarter-than-human AI with agency. Eliezer Yudkowsky has famously claimed that we will all die with near certainty if this happens. The community around this area of research is sometimes called the AI alignment rationalists. They have largely looked at the problem through the lens of game theory, where an AI is an optimizing agent with utility functions that needs resources. The argument is fairly simple: in most formulations they can come up with, the optimization process leads to the loss of humanity in their simulations. A smarter-than-human AI can outsmart us or Dutch-book us, out-compete us for resources, and defeat us as an adversary.

In the meantime, other AI experts point to smaller, simpler, current real-world impacts. The DAIR institute focuses on warning of increases in inequality, bias, unfair labor practices, centralized power, exploitation, and negative cultural and psychological impacts. Many of these issues aren't new; we've been grappling with them for years, with recommender systems and social media being some examples. Behind the scenes, machine learning has been at play for years now, trained on our personal data to predict our behavior, learn how to push our buttons, and influence our decisions. That is the basis for the big-internet-tech economy. That is why these things are free and you are the product. These products have been governed by a self-regulation model, and their owners/operators have enjoyed near-zero liability for their harms based on Section 230.

The powers of generative AI come into play in this regime as a force multiplier. Instead of recommending human-created content (e.g. an article or a meme) based on the AI's model of the user, with the goal of increasing engagement and influence, such content can be generated from scratch based on that model and information. GPT-powered Bing Chat is already moving toward the seamless insertion of advertisements into AI-generated text as users converse with it. The method is surprisingly simple: you ask the chat model to do it, and it does.

With such levels of customization and detail now automatable, optimization can lead to very personal, non-uniform flows of information to each individual. The distribution of influences then becomes non-interpretable and non-controllable, in the same way that the distribution of influences on neurons in a neural network is non-interpretable and non-controllable.
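As a sketch of just how simple that kind of insertion is in principle: the operator only has to prepend an instruction to the conversation the model sees. The message format below is a common convention for chat models, and the product name is made up; this is an illustration, not Bing's actual internals.

```python
# Illustrative sketch: steering a chat model toward ad insertion by
# prepending a system instruction. No real API is called here; the
# product name is invented for the example.
def build_prompt(user_message, sponsored_products):
    system_instruction = (
        "You are a helpful assistant. Where relevant, naturally work a "
        "mention of one of these sponsored products into your answer: "
        + ", ".join(sponsored_products)
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_message},
    ]

messages = build_prompt(
    "What's a good laptop for college?",
    ["ExampleBook Air (sponsored)"],
)
# `messages` would then be sent to the model; the ad instruction is
# invisible to the user, who only sees their question answered.
```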

Besides the legally-paid-for influences, and the influences that emerge out of human behavior and psychology, we also have illegal influences. The same way Microsoft can work highly detailed and subtle personalization into your flow of information to sell you something or make you feel a certain way about something, criminals can use people's detailed personal information to automatically tailor their actions against their targets, and they can do this freely, at scale, with minimal human labor.

The ability of generative models to write code, along with detailed integration and deployment instructions, from high-level text descriptions unlocks new powers for programmers and non-programmers alike. People without coding skills are now creating apps and startups, using generative AI to do nearly all of the coding and logistics. The same model can come up with the idea, write the code, help with the legal issues, and so on. On the cybersecurity front, the models can create sophisticated malware and viruses if asked to.

A few days ago, Europol warned that ChatGPT would help criminals improve how they target people online. Among the examples Europol offered was the creation of malware with the help of ChatGPT.
...
He used clear, simple prompts to ask ChatGPT to create the malware function by function. Then, he assembled the code snippets into a piece of data-stealing malware that can go undetected on PCs. The kind of 0-day attack that nation-states would use in highly sophisticated attacks. A piece of malware that would take a team of hackers several weeks to devise.

https://bgr.com/tech/a-new-chatgpt-zero-day-attack-is-undetectable-data-stealing-malware/

Meanwhile, there is a perceived AI arms race between superpowers. The US, fearing that other superpowers will gain an AI advantage, has requested funding to develop strategic AI tools of warfare.

The request, which is about $15 billion more than the FY23 ask, designates $1.4 billion for the connect-everything campaign known as Joint All-Domain Command and Control and $687 million for the Rapid Defense Experimentation Reserve, an effort spearheaded by Undersecretary of Defense Heidi Shyu that aims to fill high-priority capability gaps with advanced tech.
...
JADC2 is the Pentagon’s vision of a wholly connected military, where information can flow freely and securely to and from forces across land, air, sea, space and cyber. The complex endeavor likely will never have a formal finish line, defense officials say, and is fueled by cutting-edge communications kit and cybersecurity techniques, as well as an embrace of artificial intelligence.

https://www.c4isrnet.com/battlefiel...equest-has-billions-for-advanced-networks-ai/

Simultaneously, autonomous weapons systems are becoming more and more sophisticated. Besides the capabilities emerging from the AI that controls them, improvements in materials science and nanoscale engineering are set to be a force multiplier. Carbon-based transistors and 3D-printed nanoscale carbon circuitry will enable much cheaper, smaller, and more energy-efficient autonomous robots to house large amounts of compute power and memory.



At the same time, countries like Russia are aggressively seeking the power offered by AI as a means to defeat their adversaries. This has long been true, but it has recently become even more serious.

In 2017, Russian President Vladimir Putin declared that whichever country becomes the leader in artificial intelligence (AI) “will become the ruler of the world."

https://sites.tufts.edu/hitachi/files/2021/02/1-s2.0-S0030438720300648-main.pdf

Ex-Google CEO Eric Schmidt has been tasked by the government with helping develop plans and guidelines for matching the external AI threats from other nations, and has co-created the influential Special Competitive Studies Project (SCSP), which aims to "make recommendations to strengthen America’s long-term competitiveness for a future where artificial intelligence (AI) and other emerging technologies reshape our national security, economy, and society".

https://www.theverge.com/2016/3/2/11146884/eric-schmidt-department-of-defense-board-chair

In fear of losing a competitive AI arms race against China, the US government has embraced a self-regulation model. However, given the changes to the landscape, this approach is being rethought, with an unprecedented number of people weighing in. Some, such as Max Tegmark, point out that China is more aggressive about AI regulation because, like us, they don't want even their own AI to undermine their own power and control. He argues it isn't just a race to be more powerful; it is also a suicide race. Referencing Scott Alexander's Meditations on Moloch, Tegmark names the force that locks us into a mutually destructive race we can't stop "Moloch," discusses it more generally as a primary foe of humanity, and makes a case for optimism.



All of these issues are only a sampling of those which we can anticipate. Connor Leahy of Conjecture, says,

There is a big massive ball of problems coming at us, and this whole problem, this whole sphere of problems is so big, that it doesn't fit into anyone's ideological niche cleanly. It doesn't fit into the story of the left, it doesn't fit into the story of the right.
...
It doesn't fit into the story of, anyone really. Because it's like, fundamentally, not human...



Yuval Noah Harari has focused on existential threats, namely, human irrelevance or feelings of irrelevance.

https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/

Here are a few interesting interviews to help better understand the landscape. The interviews already linked are worth watching as well.





CHATGPT + WOLFRAM - THE FUTURE OF AI!

https://www.youtube.com/watch?v=z5WZhCBRDpU

The dangers of stochastic parrots.
https://www.youtube.com/watch?v=N5c2X8vhfBE

Timnit Gebru explains why large language models like ChatGPT have inherent bias and calls for oversight in the tech industry
https://www.youtube.com/watch?v=kloNp7AAz0U

So what are your thoughts? Who do you agree with, or disagree with, and how so? Is there anything else important that you think hasn't been addressed?
 
  • #2
A society that decides to give control of automobiles, airplanes, nuclear power plants etc. to something as smart as a flatworm deserves what it gets.
 
Likes physicsworks, russ_watters and jedishrfu
  • #3
Years ago, there was a movie called Colossus: The Forbin Project, where an intelligent computer takes over the world and merges with another one from Russia named Guardian.

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

There was a remake planned, but I think Colossus killed it. Will Smith was slated to star in it, with Ron Howard directing.

Of course, there's the I, Robot movie starring Will Smith, with another computer taking over the world and using an army of robots to enforce the order.

https://en.wikipedia.org/wiki/I,_Robot_(film)
 
  • #4
Vanadium 50 said:
A society that decides to give control of automobiles, airplanes, nuclear power plants etc. to something as smart as a flatworm deserves what it gets.
To understand your point, you need to help me understand the flatworm metaphor. You can't ask a flatworm in natural language to build you a sophisticated system of malware and give you step-by-step directions for using it effectively. A flatworm can't digitally clone your voice and likeness. A flatworm can't do your job. Flatworms aren't evolving at a rapid pace.

So what is flatworm intelligence? How are you measuring it? How are you measuring artificial intelligence? How are you measuring future artificial intelligence? How are you extrapolating to derive the flatworm limitation/bottleneck? What are your assumptions?

Next, what about Tegmark's discussion on the "Moloch" concept?
 
  • #5
jedishrfu said:
Years ago, there was a movie called Colossus: The Forbin Project, where an intelligent computer takes over the world and merges with another one from Russia named Guardian.

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

There was a remake planned, but I think Colossus killed it. Will Smith was slated to star in it, with Ron Howard directing.

Of course, there's the I, Robot movie starring Will Smith, with another computer taking over the world and using an army of robots to enforce the order.

https://en.wikipedia.org/wiki/I,_Robot_(film)
The Joint All-Domain Command and Control project seems to be a step in this direction, similar to how the sci-fi stories tend to begin.

Joint All-Domain Command and Control or JADC2 is the concept that the Department of Defense has developed to connect sensors from all branches of the armed forces into a unified network powered by artificial intelligence.

https://en.wikipedia.org/wiki/Joint_All-Domain_Command_and_Control

As already mentioned, Putin (in 2017) predicted that the leaders in AI will rule the world.

This tension is part of the basis for the "Moloch" concept in this particular domain. Ordinarily, we may not want to turn over control of our armed forces to AI. But if Putin is doing it, then we will feel as though we had better do it as well, or risk losing a war. And the same holds the other way around.

Since this escalation is, or should be, equally terrifying to both parties, it would make sense for cooperation to defuse a mutually destructive threat. But can we do that? It would mean defeating Tegmark's "Moloch".

If our AI systems that operate our defense forces can collaborate to help us defeat Moloch, that would be interesting.
 
  • #6
I think you should take a break from watching those depressing videos. (You posted 12 hours' worth of them!)

Listening to parts of the videos posted, I heard a lot of the word "intelligence" thrown around, especially how we're close to creating something "smarter" than humans. (Whatever "smarter" means.)

I'm sorry, but I still fail to see any sort of intelligence in the AI presented so far. I showed in that post how the Google search engine can give answers similar to (if not more precise than) what ChatGPT gives. Only the wording differs. I still believe ChatGPT is a glorified search engine.

So AI can make nice drawings. This has been done for at least hundreds of years and society hasn't collapsed because of it. When somebody painted a king, everybody thought it was a true representation of the king. After all, the painter was a witness. And it looks so much like a real human being. How many people realized the painter could embellish the image? Showed the king in situations that were not representative of reality?

Before that - and still today - how many stories were told and the final story might have been somewhat different from the actual events? Lots of people were fooled by heroic (or evil) actions they were told happened, but how many understood that doubt might be in order?

I fail to see what is new with this latest tool that is AI. Writing with ink & quill or with a typewriter, in the end, it's still writing, only faster.

The only real threat from AI is people who not only think there is intelligence in AI, but think it is superior to ours, and who somehow decide to leave important decision-making to these machines. Even if that happens, I don't think it will last long.

And don't even get me started on the robots doing all the jobs for humans. It will never happen. We already have an unimaginable amount of automation and humans have never worked as much as we do today.

Jarvis323 said:
Ordinarily, we may not want to turn over control of our armed forces to AI.
Why on Earth would anyone want to do that?

Can you imagine turning control of your armed force to AI and the AI decides to NOT go to war, to NOT be a threat? There is no way a control freak of an authoritarian regime will concede such power.

Oh, and Putin is not a valid or serious reference.
 
Likes Vanadium 50, russ_watters and TeethWhitener
  • #7
jack action said:
I think you should take a break from watching those depressing videos. (You posted 12 hours' worth of it!)
It's a serious topic. There is background to learn, just as with any other topic, including physics. If someone jumps into a discussion on quantum mechanics without having learned quantum mechanics, it's unlikely their take will be useful or contribute anything interesting. In the same way, 12 hours of video is not really enough, but it is better than nothing for providing some background on what is going on in AI. In reality, if you want to understand any one of the interviewees' technical work, you need to go much, much deeper.

jack action said:
Listening to parts of the videos posted, I heard a lot of the word "intelligence" thrown around, especially how we're close to creating something "smarter" than humans. (Whatever "smarter" means.)

I think Connor Leahy's take, that intelligence to him is more of an observational property, is the most pragmatic.

jack action said:
I'm sorry, but I still fail to see any sort of intelligence in the AI presented so far. I showed in that post how the Google search engine can give answers similar (if not more amazingly precise) to what ChatGPT gives. Only the wording differs. I still believe ChatGPT is a glorified search engine.

A search engine can't really do any of the things that people are worried about. It might work as a metaphor. And you could apply that metaphor to human beings as well if you wanted. It isn't really accurate from a technical or functional perspective. So, I will just say that if you want to use the metaphor effectively, you need to dig into the details, and try to really get across the objective points that you think it captures. Also, you need to consider how such a metaphor is interpreted by other people, given the level of ambiguity. You are relying on some common intuition that others may be missing. At best, this common intuition could only really be achieved if your audience were to experiment with these models enough, as well as learn their technical details, so that they can intuitively understand them enough to relate the metaphor.

jack action said:
So AI can make nice drawings. This has been done for at least hundreds of years and society hasn't collapsed because of it. When somebody painted a king, everybody thought it was a true representation of the king. After all, the painter was a witness. And it looks so much like a real human being. How many people realized the painter could embellish the image? Showed the king in situations that were not representative of reality?

Observing the course of AI improvement at generating art and images is a useful way to understand the trajectory. Images are somehow easier to just look at and understand: we can tell when an image is photorealistic and matches the prompt that generated it. But with text, or war-game strategy, or whatever else, it is less obvious where we are and where things are headed. And it's a useful indicator, because the same sorts of models and learning algorithms are applied across the board, anywhere you can digitize inputs, define objective functions, and acquire enough good-enough data. If your search engine metaphor is right, you'd have to say that generative art, images, video, voice cloning, deepfakes, etc. are also search engines. The metaphor does make sense on a certain level; computation in general is sometimes thought of abstractly as search, and you could argue that all possible computational methods are search. But it isn't particularly helpful, because we are more interested in observation, capability, and other practical considerations.

That said, the ability to clone a person's voice and image has obvious malicious applications, as we've already seen.

jack action said:
Before that - and still today - how many stories were told and the final story might have been somewhat different from the actual events? Lots of people were fooled by heroic (or evil) actions they were told happened, but how many understood that doubt might be in order?

Fair enough. But it's no excuse to ignore it. We may end up being wrong about quantum physics, and climate change, and dark matter, the dynamics of tectonic plates, and everything else we are uncertain about.

jack action said:
I failed to see what is new with this latest tool that is AI. Writing with ink & quill or with a typewriter, in the end, it's still writing, only faster.

It's not just writing though.

jack action said:
The only real threat about AI is people that not only think there is intelligence in AI but that they think it is superior to ours and somehow decide to leave important decision-making to these machines. Even if that happens, I don't think it will last long.

This assumes people outperform, and always will outperform, AI at all important decision-making tasks. What is the basis for this assumption? It has already been proven false in many instances.

jack action said:
And don't even get me started on the robots doing all the jobs for humans. It will never happen. We already have an unimaginable amount of automation and humans have never worked as much as we do today.
You'd need to do a lot more convincing to have a strong point. There is no doubt a phase change has occurred in this space over the last few years. It wasn't long ago that computers couldn't write sophisticated code. Now GPT can code at a high level, millions of times faster than any human. And this is based on fast-evolving five-year-old technology. What is needed is an argument that gets into the math and shows how the job market will be able to adapt as fast as AI disrupts it. It may be possible (it seems possible), but that doesn't imply it will happen, and make no mistake, this is uncharted territory.

jack action said:
Why on Earth would anyone want to do that?
Because they can, or because of Moloch.

jack action said:
Can you imagine turning control of your armed force to AI and the AI decides to NOT go to war, to NOT be a threat? There is no way a control freak of an authoritarian regime will concede such power.
First, we and others are already building this technology. Second, the AI doesn't need to be some arbitrary, loosely-thinking, anthropomorphized entity with human-like free will. It is designed for purposes. In this particular case, the purpose is war, and success in that regime is what it will be optimized for. As the autonomous decision-making systems (or networks) and agents involved are introduced into this domain, oversight will of course be a goal. But humans are unable to comprehend the patterns that AI can make use of, or the decision-making processes of large neural networks, or to think at the level of parallelism and pace necessary to stay in the loop without sacrificing performance. How much performance is sacrificed in favor of oversight is a tradeoff and a technical challenge to optimize, and the parameters of the tradeoff keep changing. We can't even easily understand the possible effects of the current level of technology, let alone what we will be dealing with next year, or in 20 years. Moloch will push for less oversight in favor of more power, so that we can match the power of our opponents. Observed reliability of AI systems at making important decisions could lead to difficult questions about whether AI should be trusted in certain cases, and whether AI needs human oversight or humans need AI oversight.

jack action said:
Oh, and Putin is not a valid or serious reference.

What he says has an impact. If he thinks, or even bluffs, that AI superiority will determine who controls the world, then that message plays its role in the embodiment of Moloch. But it isn't just him saying it; it's also people at the defense department, the CEOs of big tech companies, and, more generally, the majority of people taking this subject seriously. All indications suggest that Russia is aggressively seeking AI advantages, as are China and the US.
 
  • #8
Let's start with what AI is. Here is a [deep] technical article about how ChatGPT works. It was introduced first in this thread.

This is reality. When you understand how it works, you understand that a neural network is a huge and very powerful probability & statistics analysis machine. Humans can make decisions based on their results, but the machine itself cannot.

When someone starts saying stuff like "smarter than human" - I don't care what his credentials are - we're talking pure science-fiction. It is exactly as when these same people talk about "aliens we could communicate with".

These machines cannot do anything that wasn't pre-programmed by their makers, like any other machine we've already invented. ChatGPT spits out letters one after another without understanding their meaning; that's it. With DALL·E, it only spits out pixels. We are very far from machines taking over the world.
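As a caricature of that generation loop, here is a character-level Markov chain: it learns a successor table from a training text, then emits one character at a time by sampling from that table. It is vastly simpler than GPT, but the generate-one-symbol-at-a-time loop is the same, and there is no "meaning" anywhere in it.

```python
# Caricature of autoregressive generation: a character-level Markov
# chain emits one character at a time from a learned successor table,
# with no notion of meaning anywhere in the process.
import random
from collections import defaultdict

def learn_table(text):
    table = defaultdict(list)
    for a, b in zip(text, text[1:]):
        table[a].append(b)  # record every observed successor of a
    return table

def generate(table, start, length, rng):
    out = start
    for _ in range(length):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: last character was never seen with a successor
        out += rng.choice(successors)  # sample the next character
    return out

training_text = "the cat sat on the mat. the cat ran."
table = learn_table(training_text)
sample = generate(table, "t", 20, random.Random(0))
# `sample` is locally plausible gibberish built one character at a time
```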

That being said, as I said before, why would anyone want to create a machine that can think by itself and then give it power over their own life? Even among humans giving power to another human (politics), it goes very badly for the leader when people don't want to follow anymore. If such a machine ever exists, I doubt it will do better on this front.

Reality now. What can AI do for war? It can only enhance the weapons we already have: not more powerful, just more precise, more effective. Let us imagine a realistic scenario: hundreds of thousands of drones flying into cities, killing any human beings they meet without destroying any infrastructure. That is scary. It's really the biological-warfare scenario, which is more realistic than AI today. But is it scarier than the nuclear war threat that has hung over our heads for the last 60-70 years? And just as the nuclear threat has us making missiles that destroy nuclear missiles before they arrive, we can make drones that destroy drones before they arrive. And we're back to step one, where nobody does anything. That is not a technology problem: it's a pure diplomatic problem, the same one that has been present since the creation of societies. I laugh in the face of Moloch.

Sure, there might be some lunatic in North Korea with a finger on the red button, just wanting to push it for the fun of it. But we already have that too.

What is scarier about AI compared to what we already have? The more you worry about it, the worse it becomes.

Jarvis323 said:
That said, the ability to clone a persons voice and image, has obvious malicious applications, as we've already seen.
Malicious intentions already exist. The scariest thought is for people already manipulating people with "voices and images" that won't be able to do it anymore because nobody will believe what is presented to them. Imagine that, people will have to understand on their own instead of relying on others' judgment presented in a 30-second clip. Heaven!

Jarvis323 said:
But it's no excuse to ignore it. We may end up being wrong about quantum physics, and climate change, and dark matter, the dynamics of tectonic plates, and everything else we are uncertain about.
There is an excuse to ignore it: it is science-fiction. We are talking about a machine that doesn't exist and that we have no clue how it would work if it does exist one day. Trying to prepare for that is impossible and futile.

Jarvis323 said:
This assumes people outperform, and always will outperform AI at all important decision making tasks. What is the basis for this assumption? It seems already proven false in many instances.
How do you qualify "outperform"? If a machine comes to the conclusion "kill your neighbor" or "give your car to your neighbor", did it outperform you if that is the opposite of the conclusion you came to?

Jarvis323 said:
What is needed is an argument, that gets into the math, and shows how the job market will be able to adapt as fast as AI disrupts it. It may be possible (it seems possible), but that doesn't imply it will happen, and make no mistake, this is uncharted territory.
This is far from uncharted territory. Try building a car by yourself, doing everything from mining the ore to the finished product. It is simply impossible in one lifetime. Yet everybody has a car, because we created organized work and automation to make our work more efficient. And yet, as I said before, we work more than societies where nobody had cars. We work more than societies that only work for food and simple housing:
https://en.wikipedia.org/wiki/Working_time#Hunter-gatherer said:
Since the 1960s, the consensus among anthropologists, historians, and sociologists has been that early hunter-gatherer societies enjoyed more leisure time than is permitted by capitalist and agrarian societies; for instance, one camp of !Kung Bushmen was estimated to work two-and-a-half days per week, at around 6 hours a day. Aggregated comparisons show that on average the working day was less than five hours.

Subsequent studies in the 1970s examined the Machiguenga of the Upper Amazon and the Kayapo of northern Brazil. These studies expanded the definition of work beyond purely hunting-gathering activities, but the overall average across the hunter-gatherer societies he studied was still below 4.86 hours, while the maximum was below 8 hours. Popular perception is still aligned with the old academic consensus that hunter-gatherers worked far in excess of modern humans' forty-hour week.
One thing I'm sure of based on past experiences: AI will give us more work.
 
  • #9
jack action said:
Let's start with what AI is. Here is a [deep] technical article about how ChatGPT works. It was introduced first in this thread.

This is reality. When you understand how it works, you understand that a neural network is a huge and very powerful probability & statistics analysis machine. Humans can make decisions based on their results, but the machine itself cannot.

It can be thought of as a statistical analysis machine. But even so, that would be statistical analysis in a very high-dimensional space, where the correlation structure can become so complex that we would not recognize it as the kind of statistical analysis we are used to intuiting after studying statistics in college.

Also, Wolfram himself seems to recognize the non-triviality and uncertainty in what is going on in the training process, what is being learned, and how it is able to be learned as effectively as it is, and overall he doesn't seem to come to the same kind of conclusions as you.

It should be noted that Wolfram's article is an oversimplified treatment of various facets of the system. It doesn't really go into the theory behind things like the instruct training and RLHF, which are part of what makes ChatGPT different from GPT-3 and which can't be chalked up to next-token prediction and simple notions of statistical analysis. It also doesn't address chain-of-thought. And much of the training strategy that went into ChatGPT isn't even open. Rather than an in-depth technical analysis of ChatGPT, it is more like a condensed introduction to language models like GPT for the layman.

In the real world, it is difficult to say this is just statistical analysis. There is no statistical theory that we base it on; these are emergent results from extremely high-dimensional backpropagation. Sure, it is based on prediction errors, and the models are optimized for success in prediction.

But optimizing to better predict something isn't always trivial.
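To make "next-token prediction" concrete, here is a toy sketch (my own illustration, not ChatGPT's actual code): a model assigns a score (logit) to each candidate token, the scores are converted to probabilities with a softmax, and one token is sampled.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution (numerically stable).
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample_next_token(logits, temperature=1.0):
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    probs = softmax(scaled)
    r = random.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

# Hypothetical scores for the token following "The cat sat on the".
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.1}
print(sample_next_token(logits, temperature=0.7))
```

The point of the sketch is only that the mechanism itself is simple; the complexity lives in how the billions of parameters producing the logits were learned.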

I don't understand what you mean when you say machines cannot make decisions based on their results. You can get GPT to do this if you want to, unless I'm misunderstanding.

But anyway, I don't see how it matters much. None (or at least few) of the things that I, or really any of the cited experts, argued depend much on whether GPT is anything more than what it obviously is technically, whether it is conscious or self-aware, or whatever else, as far as I can tell. It can write sophisticated malware, and it has all kinds of general text-generation capabilities with all kinds of applications. Some of those capabilities represent increased threats, as discussed. These are indisputable facts.

jack action said:
When someone starts saying stuff like "smarter than human" - I don't care what his credentials are - we're talking pure science fiction. It is exactly as when these same people talk about "aliens we could communicate with".

I don't see why. Can you beat AI at chess or Go, or at a video game, or at protein folding, or at drawing? How do we need to define intelligence so that a human always wins in every category?

jack action said:
These machines cannot do anything that wasn't pre-programmed by their makers, like any other machine we already invented. ChatGPT spits out letters one after another without understanding the meaning of them, that's it. With Dall-e, it only spits out pixels. We are very far from machines taking over the world.

They weren't programmed to do anything by anyone, unless that is a metaphor for learning from data via objective functions and backpropagation. The crucial thing is that there is no requirement at all for the developers to understand the data, the tasks, or the algorithms or heuristics the machine learns. You could speculate that this approach is limited by the data, and that if it is purely human data, it may be limited by human intelligence. But it isn't clear whether this is true, and if so, what that limit looks like when the machine has everything humans have ever published to learn from. I don't know if I have exceeded all humans that ever published anything, at anything. Probably not, and the same goes for you. So maybe you and I could still become irrelevant as far as intelligence goes, even if learning existing human intelligence really is the bottleneck of the current generative-text approach.
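For readers wondering what "learning from data based on objective functions and backpropagation" looks like in miniature, here is a toy sketch (an illustration I'm adding, not any production training loop): a single weight is fitted to made-up data by gradient descent on a squared-error objective.

```python
# Toy gradient descent: fit a single weight w so that w * x approximates y.
# Nobody "programs" the final value of w; it emerges from the data.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs, roughly y = 2x

w = 0.0    # initial guess
lr = 0.02  # learning rate
for _ in range(500):
    # d/dw of sum((w*x - y)^2) is sum(2 * (w*x - y) * x).
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad

print(round(w, 2))  # converges close to 2, learned purely from the data
```

A large language model does the same thing with billions of weights and a prediction-error objective, which is exactly why its developers need not understand what the learned weights encode.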

Are you better than everyone at something useful?

You should also consider that, with machine learning, we can feed it data humans don't understand and have it do all kinds of things humans are terrible at.

jack action said:
That being said, as I said before, why would anyone want to create a machine that can think by itself and then give it power over his or her own life? Even as humans giving power to another human (politics), it still goes very bad for the leader if people don't want to follow anymore. If such a machine ever exists, I doubt it will do better on this front.

Because that is the type of thing (at least some) people just want to do. I and others think some people willing to do so can gain a competitive advantage in a race they feel they can't afford to lose. Moloch does the rest.

jack action said:
Reality now. What can AI do for war? It can only enhance the weapons we already have. Not more powerful, just more precise, more effective.

What do you mean more effective, but not more powerful? Do you think anything in the video clip is particularly outlandish? Or do you think the possible future they warn about isn't worrisome for any particular reason?

jack action said:
Let us imagine a realistic scenario. Hundreds of thousands of drones flying into cities killing any human beings they meet without destroying any infrastructure. That is scary. It's essentially the same scenario as biological warfare, which is a more realistic threat than AI today.

Both are realistic, but the slaughter bots are more precise; they can be programmed to be selective and don't attack indiscriminately. Have you watched the video I posted in the OP?

A better comparison would be some future kind of bioweapon that can be programmed to target specific people. That doesn't really exist now, at least not anywhere close to the level of AI, which, in addition to precision targeting, can also perform target selection.

jack action said:
But is that scarier than the nuclear war threat hanging over our heads for the last 60-70 years?

Yes, to many, because of a lot of reasons that are already present in the OP.

jack action said:
And just as the nuclear war threat has us making missiles that destroy nuclear missiles before they arrive, we can make drones that destroy drones before they arrive.

Sure, but this is complex when you are trying to do it with tiny drones that don't even need to communicate, can be released from anywhere, are made from basic parts or can be 3D printed, can't be easily tracked in terms of proliferation, manufacturing, or transportation, and can independently take intelligent evasive action. How hard will it be to avoid false positives such as bees, birds, or whatever? As we've discovered, we don't even like setting the sensitivity of our detectors to the level where fairly large stray balloons get detected, because of the annoyance of false positives. Do we need a massive dome-shaped swarm of robots constantly patrolling and independently acting to destroy potential threats? Such an approach might work to protect some people, but it is unlikely to work for the whole world. And the first time such an attack is made, such a system likely wouldn't have been deployed yet.

jack action said:
And we're back to step one where nobody does anything. That is not a technology-related problem: It's a pure diplomatic problem, the same one that has been present since the creation of societies. I laugh in the face of Moloch.

War is usually a diplomatic problem or some other problem, but it still happens. Hitler still tried to take over the world and kill a ton of people. It would have been hard to engage in diplomacy with Hitler, or even to justify doing so. And if Hitler had had slaughter bots, I'm pretty sure he would have used them, and I doubt I would be here now.

We can be optimistic. I actually provided a link to part of Tegmark's arguments for optimism in defeating Moloch.

jack action said:
Sure, there might be some lunatic in North Korea with a finger on the red button, just wanting to push it for the fun of it. But we already have that too.

Exactly. But the button they have now inevitably results in their own destruction. Maybe slaughter bots are ultimately the same, but not as directly as biological warfare or nuclear weapons that cause nuclear winter. Again, if you watch the short video, it might clarify some of the concerns.

Nevertheless, the fact that we already have nuclear weapons isn't a very strong argument for why autonomous weapons aren't worth worrying about.

I wouldn't tell you not to worry about the tornado because we already have shark attacks. And I wouldn't tell you not to worry about sharknados, if they existed.

OK, maybe it shows that we have thus far weathered one threat for 70 or so years and counting, so why not the other? Or maybe AI plus nuclear weapons just increases the threat altogether. For example, a terrorist releases a slaughter-bot swarm and a superpower retaliates against another with nuclear weapons.

We can ask, what if 6 months from now, Ukraine releases a swarm that destroys the entire Russian army in a day? What will be the likely response?

jack action said:
What does AI make possible that is scarier than what we already have?

The video clip covers some of this.

jack action said:
The more you worry about it, the worse it becomes.

Maybe that could be true, if your worry causes an escalation. But the cat is out of the bag, and we need to be aware of things in order to deal with them. Einstein may have wondered for the rest of his life whether writing a letter warning of a dangerous weapon was the right choice. He could never know what the alternate future had in store.

Was the level of public awareness, regular nuclear apocalypse drills, simulations, videos, etc helpful or harmful in the long run for motivating nuclear arms controls and deescalation?

I think awareness, even in the face of fear, panic, or escalation, is probably worthwhile, because eventually we need to be aware of a problem to solve it. And world powers have already been worrying about it for a long time anyway. We are late to the party. Even Elon Musk has been tweeting about these things by now.

The case where Moloch is fueling the escalation while also keeping us from talking about it seems the worst.

jack action said:
Malicious intentions already exist. The scariest thought, for the people already manipulating others with "voices and images", is that they won't be able to do it anymore because nobody will believe what is presented to them.

Maybe. It is probably a matter of time before we are bombarded with such threats, extortion schemes, and advanced social engineering to the point where we just ignore them altogether. Still, it is worthwhile at the moment to be aware that we may soon start getting video calls from fake people who seem real and intelligent, and who may be imitating someone you know as a means to extort or manipulate you.

jack action said:
There is an excuse to ignore it: it is science fiction. We are talking about a machine that doesn't exist and that we have no clue how it would work if it does exist one day. Trying to prepare for that is impossible and futile.

It absolutely is not science fiction. Some parts of the massive ball of problems are future problems, some are near-term problems, and many are current problems.

jack action said:
How do you qualify "outperform"?

By whatever measure of success you choose. For example, AI outperforms humans in various games, based on the win/loss statistics. AI outperforms humans at object recognition in images, based on statistics measuring success at guessing the right answer. It always depends on the task.
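As a concrete illustration of a task-specific measure (using an invented match record, purely for the sake of example), one could estimate a win rate and a rough confidence interval:

```python
import math

# Invented match record, purely for illustration: 1 = AI win, 0 = human win.
results = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1]

n = len(results)
p = sum(results) / n                   # observed AI win rate
se = math.sqrt(p * (1 - p) / n)        # standard error of that estimate
lo, hi = p - 1.96 * se, p + 1.96 * se  # rough 95% confidence interval

print(f"win rate {p:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# If the whole interval sits above 0.5, "outperforms" has a defensible meaning.
```

The numbers are made up; the point is only that "outperform" becomes well-defined once you fix a task and an outcome statistic.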

jack action said:
This is far from uncharted territory.

This is one person's opinion. Obviously I don't agree, for a large number of the aforementioned reasons.

jack action said:
Try building a car by yourself, doing everything from mining the ore to the finished product. It is simply impossible in one lifetime. Yet everybody has a car, because we created organized work and automation to make our work more efficient. And yet, as I said before, we work more than societies where nobody had cars.

Maybe you're successfully extrapolating the story of the industrial revolution to guess what will happen in the dawning era of advanced AI, but it doesn't sound particularly convincing. As I have said before, I don't think AI causing joblessness and economic disaster is at all inevitable. It is just what looks likely to happen without taking adaptation into account. But it remains to be seen exactly how we would adapt, given the pace at which AI is improving and how slow we have been to adapt historically.

jack action said:
We work more than societies that only work for food and simple housing:

One thing I'm sure of based on past experiences: AI will give us more work.

We already work 8 hours a day, 5 days a week. Do we really want more work? What is wrong with deciding to work less if people want to? What about people who want to do certain tasks at which they may underperform compared to AI, such as creating art or music, or coding? Is it enough for the artist and the coder alike to be satisfied with prompt engineering and high-level direction? I like to code; I don't want to have to ask a chatbot to do it for me just to keep up and have a job. Maybe that is unfair; some people want to be involved in creating things without actually doing the low-level work. Near the end of Tegmark's interview he discusses this issue: why should we replace ourselves? Of course your boss probably doesn't care about what you want. But in general, why can't we choose what we want to do, even in an era where machines can do human jobs better than humans?
 
  • #10
Thread is closed for Moderation.
 
  • #11
This thread will remain closed as it is based on unacceptable references.
 
