Destined to build a super AI that will destroy us?

  • Thread starter Greg Bernhardt
  • #36
Greg Bernhardt said:
Even at a time scale as far out as 10,000 years from now? Perhaps that is where fantasy comes into play? I tweeted Sam Harris this thread. With some luck he will have a bit to say.

If mankind, augmented by his machines, becomes unrecognizable, are we destroyed or enhanced? The Borg collective comes to mind.

Mark Twain might have considered today's connected youth as being assimilated by the Internet.

If an intelligence greater than mankind's decides that humans should be killed, isn't that the best decision by definition?

Define civilization. Define destroyed. Define us and them. Define super AI.

Without agreements in advance about definitions such as these, any debate is silly.
 
  • Like
Likes Boing3000 and Greg Bernhardt
  • #37
anorlunda said:
If an intelligence greater than mankind's decides that humans should be killed,

I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Are we saying, as humans: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?

Not only do we not say that, we are so SMART that we KNOW that we NEED them in order to exist, even though they are not as intelligent as we are (intelligence counts for very little in the survival equation).

Now, why would an even smarter machine or life form think otherwise?

And if somebody tells me that humans are the scum of the Earth and don't deserve to live, that is a very human thing to say. No other (dumber) life form thinks that about itself. Following that logic, machines or aliens that are smarter than us would probably blame themselves even more, which would lead to self-destruction?!?
 
  • Like
Likes Boing3000
  • #38
jack action said:
I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Not so crazy. Considering the finite non-renewable resources on this planet, it could be argued that it would be intelligent to decide to cap human global population at 7 million rather than 7 billion. Once decided, it would also be intelligent to act on that decision immediately because each hour of delay further depletes the resources remaining for the surviving 7 million.

jack action said:
Are we saying as human: «Let's get rid of ants, hamsters and turtles, they're so dumber than we are»?

Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
 
  • #39
anorlunda said:
Did you forget that we did decide to make the smallpox virus extinct? Or that we are actively considering doing the same for disease carrying mosquitoes?
I can't speak for Jack Action, but I would say the motivation to rid ourselves of smallpox and disease-carrying mosquitoes is to improve human life. Apparently something has been seriously missed in the search for extraterrestrial intelligence if humans are causing problems for alien life.
 
  • #40
Filip Larsen said:
1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their life to "magic" technology.
But you still haven't provided us with any clues as to why that is a risk. As far as normal people are concerned (those not having an intimate relationship with Maxwell's equations or quantum field theory, that is 99.99999% of humanity, including me), a simple telephone is "magic". A cell phone even more so; there isn't even a cable!

If (that is a big "if", not supported by science in any way whatsoever) a super AI pops into existence and, as far as we are concerned, we call it Gandalf because it does "magic", what is the risk? Please explain. What is good, what is not. Who dies, who does not.

Filip Larsen said:
As long as these risks are not ruled out as "high cost" risks I do not really have to enumerate more to illustrate that there are "high cost" risks.
But there is no risk. I mean not because AI doesn't exist, nor because progress is not an exponential quantity. The reason there is no risk is that you have NOT explained any plausible risk.
"Politics" is nowadays where we "surrender" most of our decision making. Is it good, is it bad? What "risk" is there? What do we gain, what do we lose?
All of these have been explored in so many different ways in so many fantasy books (Asimov comes to mind). None of it is science. That does not mean it is not interesting. The more "intelligent" of those novels are not black and white.

Filip Larsen said:
But please feel free to show how you would make risk mitigation or reduction of each of them because I am not able to find ways to eliminate those risks.
I am not sure what "risk mitigation" means. But as a computer "scientist", I know that computers aren't there to harm us; most often, it is the other way around (we give them bugs and viruses, and force them to do stupid things, like playing chess or displaying cats in high definition).

Filip Larsen said:
2) Requiring everyone else to prove that your new technology is dangerous instead of requiring you to prove it is safe is no longer a valid strategy for a company.
I cannot even begin to follow you. Am I forced to buy some of your insurance and build some underground bunker because someone on the internet is claiming that doom is coming? I don't mean real doom, like climate change, but some AI gone berserk? Are you kidding me?

Filip Larsen said:
Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design will be dangerous.
That's a non sequitur. A fusion reactor may kill me, we know very precisely how, with some kind of real probability attached.
I then mitigate it with some other benefit I get from it. That's what I call intelligence: balance, and constant evaluation.

Filip Larsen said:
Energy consumption so far seems to set a limit on how localized an AI with human-sized intelligence can be, due to the current estimate of how many PFlops it would take on conventional computers.
First, Flops are not intelligence. If stupid programs run on computers, then more Flops will only lead to more stupidity.
Secondly, neither Flops nor computer design are ever-increasing quantities. We are still recycling 1970s tech, because it is still just about move, store and add, sorry.

Filip Larsen said:
You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.
Indeed. But again, those limits are not soft at all, as far as Planck is concerned. And again, quantity and quality are not the same thing.
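To make that back-of-envelope concrete, here is a rough sketch. All figures are order-of-magnitude assumptions: a commonly quoted range of 1e15 to 1e18 FLOPS-equivalent for the brain, the ~50 W figure quoted earlier in this thread, and roughly 93 PFLOPS for about 15 MW for the top supercomputer of 2016 (Sunway TaihuLight).

```python
# Back-of-envelope sketch of the "energy ceiling" argument.
# All numbers are rough assumptions, not measurements.

BRAIN_FLOPS_ESTIMATES = {"low": 1e15, "mid": 1e16, "high": 1e18}  # FLOPS-equivalent
BRAIN_POWER_W = 50.0                 # figure quoted earlier in the thread

SUPERCOMPUTER_FLOPS = 93e15          # ~93 PFLOPS (Linpack), assumed
SUPERCOMPUTER_POWER_W = 15e6         # ~15 MW, assumed

for label, flops in BRAIN_FLOPS_ESTIMATES.items():
    machines = flops / SUPERCOMPUTER_FLOPS            # how many such machines needed
    total_power_w = machines * SUPERCOMPUTER_POWER_W  # and how much power they draw
    print(f"{label:>4}: {machines:8.3f} machines, {total_power_w / 1e6:8.2f} MW, "
          f"~{total_power_w / BRAIN_POWER_W:.0e}x the brain's power budget")
```

Even the mid-range estimate lands in the megawatt range, several orders of magnitude above the brain, which is exactly the efficiency gap the quote is pointing at.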

Filip Larsen said:
Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's True North chip is already a step down that road.
But that's MY point! A very good article. Actually, geneticists are much closer to building a super brain than IBM is. So what? What are the risks, and where is the exponential "singularity"? Are you saying that such a brain will want to get bigger and bigger until it has absorbed every atom on Earth, then in the universe?
I am sorry, but I would like to know what scientific basis this prediction rests on. The only things that do that (by accident, because any program CAN go berserk) are called cancers. They kill their host. We are not hosting computers; computers are hosting programs.

Filip Larsen said:
Secondly, there is plenty of room to scale out in. Currently our computing infrastructure is increasing at an incredible speed, making processing of ever increasing data sets cheaper and faster and putting access to EFlops and beyond on the near horizon.
That's just false. Power consumption of data centers is already an issue. And intelligence-wise, those data centers have an IQ below that of a snail.
You could also add up 3 billion average "analog" people like me; it would still not get us anywhere close to Einstein's intelligence.

Filip Larsen said:
If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those that still care about this issue.
Oh, but I agree; the problem with arguments is that I would like them to be rooted in science, not in fantasy (not that you do that, but Sam Harris does, and this thread is a perfect place to debunk him).

We seem to agree that computing power (which is not correlated with intelligence at all) is limited by physics.
That is a start. No singularity anywhere soon.
 
  • #41
  • Like
Likes anorlunda
  • #42
To me the danger does not lie so much in the possibility of one super intelligent computer taking over the world, which I think highly unlikely, but rather in a creeping delegation of decision making to unaccountable programs. Whether these programs are considered intelligent or not is immaterial - we already have very widespread use of algorithms controlling, for example, share and currency trading. Yesterday the sharp drop in the value of the British pound was at least partly blamed on these. Most large companies rely on software systems of such complexity that no individual understands every aspect of what they do, and these systems automatically control prices, stocking levels and staffing requirements. In a manner of speaking these systems are already semi-autonomous. They currently require highly skilled staff to set up and maintain them, but as the systems evolve it is becoming easier to use 'off the shelf' solutions which can be up and running with little intervention.

While a full takeover might seem implausible, economics will continue to drive this process forward. A factory with automated machines is more cost efficient than a manual one. Call centres are becoming increasingly automated with routine queries handled by voice recognition systems. It seems likely that (in at least some places) taxi drivers will be replaced by autonomous vehicles.

As these systems become more resilient and interconnected it is not inconceivable that an entire company could be run by an algorithm, relying on humans to perform some tasks, but with the key decisions driven by the 'system'. If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
 
  • Like
Likes Filip Larsen
  • #43
Charles Kottler said:
If the goal of such a company (as is likely) is to make the most profit, why would anyone think that in the long term the decisions made would be in the best interests of human society?
What makes you think decisions are made in the best interest of society right now with actual people in charge?
 
  • Like
Likes Bystander
  • #44
Averagesupernova said:
What makes you think decisions are made in the best interest of society right now with actual people in charge?

Fair point.
 
  • #45
jack action said:
I always find that kind of question bizarre: Why would anyone - machines or aliens - «decide» to get rid of humans, especially because we would be of «lower intelligence»?

Are we saying, as humans: «Let's get rid of ants, hamsters and turtles, they're so much dumber than we are»?
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't be dying.

It's plausible that you or I have killed the last remaining member of some species; we wouldn't have given it a single thought.
I think it's easy to imagine how AI could treat humans with the same complete indifference. I have no hesitation wiping out an entire colony of social, highly organised creatures (ants) for a largely insignificant improvement in my environment.
 
  • #46
billy_joule said:
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates.
Source?
 
  • #47
Greg Bernhardt said:
Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?

Seriously?

To me this talk does qualify as FUD and fear mongering. It accumulates so many clichés it is embarrassing to see it on TED.

When did Sam Harris become a noted expert on AI and the future of society? AFAIU he is a philosopher and neuroscientist, so what is his expertise on that matter? I'm no expert myself but have worked for thirty years in software engineering and kept an eye on AI research and applications over the years... and it does not seem to me he knows what he is talking about.

He seems to think that once we are able to build computers with as many transistors as the number of neurons in the human brain, AGI will just happen spontaneously overnight! ...And then we lose control, have a nuclear war and end up with starving children everywhere! Comparing the advent of AI with aliens coming to Earth one day is laughable at best. Making fun of the actual experts is questionable to say the least... Using scary and morbid cartoon-style visuals is almost a parody.

A lot of speculation, very little demonstration, misinformation, oversimplification, fear-inducing images, disaster, digital apocalypse, aliens, god... and the final, so 'new age', namaste for sympathy. Seriously?

He is asking disturbing questions nonetheless, and I agree we should keep worst-case scenarios in mind along the way. However, although caution and concern are valuable attitudes, fear is irrational and certainly not a good frame of mind for making sound assessments and taking inspired decisions.

TED is supposed to be about "Ideas worth spreading". I value dissenting opinions when they are well informed, reasonably sound and honest. This talk is not.

The future of AI is a very speculative and emotionally charged subject. To start with, I'm not sure there is a clear definition of what AI or AGI is. What it will look like. How it will happen. How we will know we have created such a thing... Even if technical progress keeps pace with Moore's law, that's just the hardware part and we still don't really know what the software will look like... Maybe AI will stall at some point despite our theoretical capability and hard work? It's all speculation.

Whatever happens, it won't happen all at once. It will likely take a few decades at least, and I disagree with Harris about the time argument. Fifty years is a lot of time, especially nowadays. A lot will happen and we will have a better understanding of the questions we are asking now. There is no way (and has never been) to solve today all the problems we may face tomorrow or half a century from now.
 
Last edited:
  • Like
Likes Boing3000
  • #48
billy_joule said:
Over 150 species go extinct every single day. Extinction rates due to human action are 1,000 times greater than background rates. We aren't killing them because they're unintelligent, but if they were as smart as us they surely wouldn't be dying.
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And a species with exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.

Filip Larsen said:
I would very much like to hear serious technical arguments (i.e. not ad hominem arguments) on why there is absolutely no (i.e. identically zero) risk of humanity ever getting itself lost to technological complexity and AI.

I wanted to go back to this question as I might have one relevant example to further feed the discussion: Humans and cats.

Humans have a tremendous effect on the cat population. We feed them, we spay and neuter them, we declaw them and we kill them. Theoretically, it's not for our own survival; we don't need to do any of that for our benefit. Generally speaking, we can say that we care for them and that humans are beneficial to the cat population's survival, even if there are some individuals who kill and/or torture them for research or even just pleasure. For sure, the cat species is not at risk at all.

What if there were an AI that did the same for humans? Would that be bad? One argument against it would be loss of freedom; cats live in «golden cages». Life can be considered good in some respects, but they cannot do as they wish. But that is not entirely true either. First, there are stray cats that can be considered «free». Lots of drawbacks with that lifestyle as well, and sometimes it is not a chosen one. Sure, they have to flee from animal control services, but in the wild you are always running from something.

But the most interesting point I wanted to make about intelligence and using things we don't understand is that cats - just like humans - have curiosity and willpower that can lead to amazing things. Like these cats interacting with objects that were not designed for them and whose «complex» mechanisms, most importantly, they could never understand or even build:




Not all cats can do these things. It shows how individuals may keep a certain degree of freedom, even in «golden cages». It also shows how difficult it is to control life, because its adaptability is just amazing.

Keep in mind that cats did not create humans; they just have to live with an already existing life form that was «imposed» on them and that happens to be smarter than they are (or are they?).

How could mankind gradually hand the decision process over to machines without anyone ever noticing it going against their survival? How can someone imagine that every single human being will be «lobotomized» to the point that no one will have the willpower to stray from the norm? That seems to go against what defines human beings.
 
  • Like
Likes Boing3000
  • #49
Bystander said:
Source?

I said over 150 species but that should have been up to 150 species, sorry.

United Nations Environmental Programme said:
Biodiversity loss is real. The Millennium Ecosystem Assessment, the most authoritative statement on the health of the Earth’s ecosystems, prepared by 1,395 scientists from 95 countries, has demonstrated the negative impact of human activities on the natural functioning of the planet. As a result, the ability of the planet to provide the goods and services that we, and future generations, need for our well-being is seriously and perhaps irreversibly jeopardized. We are indeed experiencing the greatest wave of extinctions since the disappearance of the dinosaurs. Extinction rates are rising by a factor of up to 1,000 above natural rates. Every hour, three species disappear. Every day, up to 150 species are lost. Every year, between 18,000 and 55,000 species become extinct. The cause: human activities.
The full reports can be found here:
http://www.millenniumassessment.org/en/Index-2.html

jack action said:
The problem you state is called overpopulation and has nothing to do with intelligence. It is just a matter of numbers. And a species with exponential growth is always condemned to stop and regress at one point or another. One species cannot live by itself.

Humans have come to dominate the globe precisely because of our intelligence. There are many species with greater populations and/or biomass, but none can manipulate their surroundings like we can. We aren't condemned to stop and regress like other species; our intelligence has allowed us to increase Earth's human carrying capacity through technology thus far, and who's to say how high we can raise that capacity?

Anyway, my point was that on our path to controlling the globe and its resources we don't look down and consider the fate of the ant; their intelligence doesn't register on our scale, and they are of no value or consequence. The gulf between AI and HI could become just as vast and result in a similar outcome.
We may end up like cats, or ants, or we may end up like the dodo.

jack action said:
How could mankind gradually hand the decision process over to machines without anyone ever noticing it going against their survival?
It's happened countless times between smart and stupid humans, and it'll continue to happen. Control through deception is a recurring theme in human history.
If a super AI weren't capable of large-scale deception I would say it's not super at all. Whether we could build it in such a way that it wouldn't deceive us is another issue.
 
  • #50
This thread has many questions and lines of argumentation going in many directions at once now. I will try to focus on those that I feel connect with my concerns. Again, I am not here to "win an argument", I am here to sort out my concerns, so please work with me.

jack action said:
If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological
complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Of the numbers quoted, I would right now say that 90% sounds most likely. Perhaps it is easier for me to say it the other way around: I cannot see any reason why we will not continue down this road of increasing complexity. The question then remains if and how we in this thread can agree on a definition of "lost".

Boing3000 said:
But you still haven't provided us with any clues as to why that is a risk.

To establish a risk you commonly establish likelihood and significant cost. Based on how things have developed over the last decade or two, combined with the golden promise of AI, I find both high likelihood and severe cost. Below I have tried to list what I observe:
  • There is a large drive to increase the complexity of our technology so we can solve our problems cheaper, faster and with more features.
  • Earlier, much of the technological complexity was just used to provide an otherwise simple function (an example would be radio voice communication over a satellite link) which can be understood easily enough. Plenty of the added complexity today introduces functional complexity as well (consider the Swiss army knife our smartphones have become), where a large set of functions can be cross-coupled in a large set of ways.
  • There is a large drive to functionally interconnect everything, thereby "multiplying" complexity even more. By functionally interconnecting otherwise separate technologies or domains you also often get new emergent behavior with its own set of complexities. Sometimes these emergent behaviors are what you want (symbiosis), but just as often there is a set of unintended behaviors that you now also have to manage.
  • There is a large shift in the acceptance of continuous change, both by consumers and by "management". Change is used both to fix unintended consequences in our technology (consumer computers today require continuous updates) and to improve functionality in a changing environment. The change cycle is often seen sending ripples out through our interconnected technology, creating the need for even more fixes.
  • To support the change cycle there is a shift towards developing and deploying new or changed functionality first and then understanding and modifying it later. Consumers are more and more accustomed to participating in beta programs and testing, accepting that features sometimes work and sometimes don't work as they thought they would.
  • Many of the above drives are beginning to spread to domains otherwise reluctant to change, like industry. For instance, industrial IoT (internet-of-things), which is currently at the top of Gartner's hype curve, offers much of the same fast change cycle in the operational management of industrial components. In manufacturing, both planning and operations see drives towards more automated and adaptive control where the focus is on optimizing a set of key performance indicators.
  • There are still some domains, like safety-critical systems, where you are today traditionally required to fully design and understand the system before deployment, but to me it seems very unlikely that these domains will withstand the drive towards increased complexity over time. It will be interesting to see the technological solutions for, and social acceptance of, coupling a tightly regulated medical device with, say, your smartphone. For instance, a new FDA-approved device for diabetes gives an indication that we are already trying to move in that direction (while of course still trying to stay in control of our medical devices).
All these observations are made with the AI technology we have up until now. If I combine the above observations with the golden promise of AI, I only see that we are driving even faster towards more complexity, faster changes, and more acceptance that our technology is working-as-intended.

Especially the ever increasing features-first-fix-later approach everyone seems to converge on appears to me, as a software engineer with knowledge about non-linear systems and chaotic behavior, as a recipe for, well, chaos (i.e. that our systems are exhibiting chaotic behavior). Without AI or similar technology we would simply at some point have to give up adding more complexity because it would be obvious to all that we were unable to control our systems, or we would at least have to apply change at a much slower pace allowing us to distill some of the complexity into simpler subsystems before continuing. But instead of this heralded engineering process of incremental distillation and refinement, we are now apparently just going to throw more and more AI into the mix and let them compete with each other, each trying to optimize its part of the combined system in order to optimize a handful of "key performance indicators". For AIs engaged in friendly or regulated competition we might manage to specify enough rules and restrictions that they end up not harming or severely disadvantaging us humans, but for AIs involved in hostile competition I hold little hope we can manage to keep up.

So, from all this I then question how we are going to continuously manage the risk of such an interconnected system, and who is going to do it?

Boing3000 said:
A fusion reactor may kill me, we know very precisely how, with some kind of real probability attached.
I then mitigate it with some other benefit I get from it. That's what I call intelligence: balance, and constant evaluation.

So, I guess that if you expect constant evaluation of the technology for nuclear reactors, you will also expect this constant evaluation for other technology that has the potential to harm you? What if you are in doubt about whether some other new technology (say, a new material) is harmful or not? Would you rather chance it or would you rather be safe? Do you make a deliberate, careful choice in this or do you choose by "gut feeling"?

Boing3000 said:
First, Flops are not intelligence. If stupid programs run on computers, then more Flops will only lead to more stupidity.

This is what I hear you say: "Intelligence is just based on atoms, and if atoms are stupid then more atoms will just be more stupid."

Flops is just a measure of computing speed. What I am expressing is that the total capacity for computing speed is rising rapidly, both in the supercomputer segment (mostly driven by research needs) and in the cloud segment (mostly driven by commercial needs). It is not unreasonable to expect that some level of general intelligence (the ability to adapt and solve new problems) requires a certain amount of calculation (very likely not a linear relationship). And for "real-time intelligence" this level of computation corresponds to a level of computing speed.

There is not yet any indication of whether or not it is even possible to construct a general AI with human-level intelligence, but so far nothing suggests it will be impossible given enough computing power, hence it is interesting for researchers to consider how levels of intelligence relate to levels of computing power.

Boing3000 said:
That's just false. Power consumption of data centers is already an issue.

Indeed, and I am not claiming that computing power will increase a hundredfold overnight, just that there is a strong drive to increase computing power in general, and this, all else being equal, will allow for an increase in the computational load available for AI algorithms. My bet is that datacenters will continue to optimize for more flops/watt, possibly by utilizing specialized chips for specialized algorithms like the True North chip. Also, more use of dedicated renewable energy sources will allow datacenters to increase their electrical consumption with less dependency on a regulated energy sector. In all, there is no indication that the global computing capacity will not continue to increase significantly in the near future.
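To make the flops/watt point concrete, here is a small illustrative calculation. The efficiency figures are rough assumptions for the sketch, not vendor numbers: roughly 6 GFLOPS/W for a 2016-class supercomputer (93 PFLOPS / 15 MW) and something near 2e14 "FLOPS" per watt for the brain, if one takes a 1e16 FLOPS-equivalent estimate and the ~50 W figure quoted earlier.

```python
# Why flops/watt is the quantity to watch: the same power envelope buys
# wildly different amounts of computation. All efficiencies are assumptions.

POWER_BUDGET_W = 20e6  # assume a 20 MW datacenter power envelope

efficiencies = {
    "2016-class supercomputer (~6 GFLOPS/W, assumed)": 6e9,
    "10x better flops/watt": 6e10,
    "brain-like efficiency (~2e14 per W, rough estimate)": 2e14,
}

for name, flops_per_watt in efficiencies.items():
    total_flops = POWER_BUDGET_W * flops_per_watt
    print(f"{name}: ~{total_flops:.1e} FLOPS within the same 20 MW budget")
```

The point is only that a fixed power budget does not fix the available computation; efficiency gains shift the ceiling.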
 
Last edited:
  • Like
Likes billy_joule
  • #51
I don't understand the worry about the possibility that we are overtaken by some AI in the near future, where by overtaken I mean literally replaced. If "something" more intelligent than us eradicates us, isn't it "nice" overall? To me, this means we would have achieved our goal. It would mean we created something better adapted and more intelligent than us on Earth. For me this is a big win.
Maybe our species won't die that quickly and we might become the AI's pets and be well treated.
 
  • #52
@Filip Larsen:

The problem you describe has nothing to do with AI in particular. Because you use the terms «manage» and «control», we agree that we can stop using AI if we want or at least change how we use it.

The question you're asking is rather: «Are we responsible enough to handle [technology of your choice]?»

The answer is: «We are as responsible as we are ever going to be.»

Example:

Does anyone know if overpasses are built safely? Most of us have no clue how they are built or what is required to build one, and we trust that they are all built in a safe manner. Even a civil engineer doesn't study the overpasses he will use before planning a trip; he just puts his trust in the people who built them.

Ten years ago, an overpass collapsed close to where I live. It fell on two vehicles, killing 5 people. If you read the Wikipedia article, you will learn that these deaths were the result of a succession of bad decisions made over a period of 40 years by different people. All of these bad decisions were based on overconfidence.

No matter how tragic this event was, we still, each of us, use overpasses without fully understanding how they are made. But it is not business as usual either: IIRC, 2 or 3 other «sister» overpasses were demolished soon after the collapse. The government tightened inspections across the province, and dozens of other overpasses and bridges were demolished or had major repairs. The small bridges that were under the care of local municipalities were claimed back by the government. To this day, most people in my province remember this event and think about it whenever going under an overpass: «Will it fall on me?» I'm telling you this because the 10th anniversary was just a week ago and it was all over the news across the province.

What is important to understand is that we are not slaves to things we don't understand. We can react and adjust. Sometimes not as individuals, but rather as a society; but we still don't just sit there, blindly accepting our fate. We always have some control over man-made things. Will bad things happen with AI? It is a certainty (your «90%»). Will it get out of control to the point of putting the human species in jeopardy? It is unlikely (my «0.0001%»).
 
  • Like
Likes Averagesupernova
  • #53
jack action said:
The problem you describe has nothing to do with AI in particular. Because you use the terms «manage» and «control», we agree that we can stop using AI if we want or at least change how we use it.

I am not really sure why you state this again expecting me to answer differently, so I will now just sound like a broken record.

To me, the potential realisation of capable AI seems to be the magic source which will allow us to keep increasing complexity beyond what we can understand and therefore control. Note that I am not saying that AI is guaranteed to be a harmful thing, or that no good things would come from it, only that while our raft of civilization is drifting down the river towards the promised Elysium it makes sense that those involved with navigation make sure we get there and at least look out for waterfalls.

And regarding control, I would like to ask how much control you'd say we have over the internet when it is used for criminal activity. Are you, as a user of the internet, really in control of it? Are the internet organizations? Are the national states? Can any of these parties just "turn off" the unwanted criminal elements or change it so criminals go away? If you think yes to the last one, then why are the criminals still here? And is all this going to be easier or harder if we add capable AI to the mix?

jack action said:
Does anyone know if overpasses are built safely?

Yes, we have a very deliberate process of striving to build constructions that are safe. This unfortunately does not mean that wrong decisions are never made or that a contractor will never try to "optimize" their profit in a way that leads to bad results, but we are in general capable of doing proper risk management when building, say, a bridge, because we are in general good at predicting what will happen (i.e. using statics and similar calculations) and once built it remains fairly static. The same degree of rigor is not even remotely present when we are developing and deploying most of the software you see on the consumer market (as software is qualitatively very different from a bridge), yet this development process is very likely the way we will build AI and similar adaptive algorithms in the future.
 
  • #54
Filip Larsen said:
I am not really sure why you state this again expecting me to answer differently, so I will now just sound like a broken record.

To me, the potential realisation of capable AI seems to be the magic source which will allow us to keep increasing complexity beyond what we can understand and therefore control.
It has already been stated that when the machines start doing things above our level of comprehension we will shut them down, since the assumption will be that the machines are malfunctioning. This has likely been done many, many times already at simpler levels. Consider technicians and engineers scratching their heads when a system is behaving in a manner that makes no sense. Then it occurs to someone that the machine is considering an input that everyone else had forgotten. So in that instance, for a brief moment, the machine was already smarter than the humans. I know it could be argued that somehow, somewhere in a CNC machine shop, parts could be magically turned out for some apocalyptic robot or whatever. To me this is no easier than me telling my liver to start manufacturing a toxic chemical and put it in my saliva to be used as a weapon.
 
  • #55
Filip Larsen said:
only that while our raft of civilization is drifting down the river towards the promised Elysium it makes sense that those involved with navigation make sure we get there and at least look out for waterfalls.

All I'm saying is that you should have more faith that humanity will, whether they do it on their own or are forced to by some event. I doubt that it will take something fatal to all of humanity before there are reactions, hence my examples.

Just the fact that you are asking the question reassures me that there are people like you who care.

Filip Larsen said:
Are you, as a user of the internet, really in control of it?

Nope. But I'm not in control of the weather or the justice system either and I deal with it somehow.

Filip Larsen said:
Are the internet organizations?

Nope.

Filip Larsen said:
Are the national states?

Nope.

Filip Larsen said:
Can any of these parties just "turn off" the unwanted criminal elements or change it so criminals go away?

Nope. But how does this differ from yesterday's reality? Did anyone ever have control over criminality in the streets? Remember when criminals got their hands on cars in the '20s for getaways? How were we able to catch them? Police got cars too.

You may change the settings but the game remains the same.

Filip Larsen said:
And is all this going to be easier or harder if we add capable AI to the mix?

IMHO, it will be as it has always been. Is there really a difference between making war with bows & arrows or with fighter jets? People still die, the human species still remains. Is there really a difference between harvesting food with scythes or with tractors? People still eat, the human species still remains. I won't open the debate, but although some might argue that we're going downhill, others might argue that it has made things better. All we know for sure is that we're still here, alive and kicking.
 
  • Like
Likes patmurris
  • #56
Going back not so far, there were people called Luddites who thought that the idea of water mills powering textile-producing factories would lead to economic ruin and the disintegration of society.
 
  • #57
fluidistic said:
I don't understand the worry about the possibility that we are overtaken by some AI in the near future, where by overtaken I mean literally replaced. If "something" more intelligent than us eradicates us, isn't it "nice" overall? To me, this means we would have achieved our goal. It would mean we created something better adapted and more intelligent than us on Earth. For me this is a big win.
Maybe our species won't die that quickly and we might become the AI's pets and be well treated.
I consider this our doom... maybe you would like to live as a well-fed experimental rat; I don't.
 
  • #58
Otherwise, I don't believe that some artificial super brain would lead to a singularity, to infinite development.
If ten Einsteins had lived in the Middle Ages, they could still only have come up with Galilean relativity.
 
  • #59
Averagesupernova said:
It has already been stated that when the machines start doing things above our level of comprehension we will shut them down, since the assumption will be that the machines are malfunctioning.

And again, the belief that we can always do this is, for lack of a better word, naive. You assume that you will always be able to detect when something is going to have negative consequences before it is widely deployed, and that there can never be negative emergent behaviors. As I have tried to argue in this thread, there already exists technology where we do not have the control to "shut it down" in the way you describe. Add powerful AI to make our systems able to quickly self-adapt and we have a chaotic system with a "life of its own" where good and bad consequences are indiscernible, hence outside control.

As an engineer participating in the construction of this technology of tomorrow, I am already scratching my head, as you say, worrying that my peers and I should be more concerned about negative consequences for the global system as well as for the local system each of us is currently building. I am aware that people without insight into how current technology works will not necessarily be aware of these concerns themselves, as they (rightfully) expect the engineers to work hard fixing whatever problems may appear, or they, as some here express, just accept that whatever happens happens. The need to concern yourself is different when you feel you have a responsibility to build a tomorrow that improves things without risk of major failures, even if others seem to ignore this risk.

Compare, if you like, with the concerns bioengineers have when developing gene-modified species or similar "products" that, with the best intentions, are meant to improve life, yet have the potential to ruin part of our ecosystem if done without care. I do not see similar care in my field (yet), only a collective state of laissez-faire where concerns are dismissed as silly with a hand wave.

Perhaps I am naive trying to express my concerns on this forum and in this thread, already tinted with doomsday arguments at the very top to get people all fired up. I was hoping for a more technical discussion on what options we have, but I realize that such a discussion should have been started elsewhere. In that light I suggest we just leave it at that. I thank those who chipped in to express their opinion and to try to address my concerns with their own view.
 
  • #60
To lighten things up a bit, allow me to add what the AIs themselves are saying about the end of the world (at 2m47) ...

 
  • Like
Likes Boing3000
  • #61
jack action said:
If you have to put a number on it, what is the risk of humanity ever getting itself lost to technological complexity and AI (i.e. "destruction of human civilization")? 90%, 50%, 10% or even 0.0001%?

Filip Larsen said:
Of the numbers quoted, I would right now say that 90% sounds most likely. Perhaps it is easier for me to say it the other way around: I cannot see any reason why we will not continue down this road of increasing complexity. The question then remains if and how we in this thread can agree on a definition of "lost".

I think a key difference needs to be emphasised here: complexity is very different from true AI. Complexity is something mankind has lived with ever since responsibilities for tasks were delegated to specific people. In dedicating one's time to learning one specialization, knowledge of others is sacrificed. As civilization progresses and overall knowledge increases, it is clear that the percentage which can be known or understood by each individual must decrease.

Despite the impressive achievements of some 'AI' systems, for example learning chess and Go to the level that they can beat the best human players, the scope of what they do is extremely narrow. Teams of developers have worked together to program in the basic rules and long-term goals within the framework of those rules, and then relied on essentially multiple random experiments to determine the best path to achieve the goals. The only 'intelligence' in this process is that of the development team. I feel that we are a very long way from seeing a system which displays anything resembling true understanding or intelligence.
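To illustrate the 'random experiments' idea, here is a toy sketch of my own (purely illustrative, and nothing like the scale of the real systems): the rules and the goal of a trivial game are hand-coded by the developers, and the program then picks its move by running many random playouts and keeping statistics. The game assumed here is a small Nim variant (15 sticks, take 1-3 per turn, whoever takes the last stick wins).

```python
import random

def legal_moves(sticks):
    """The hand-coded rules: you may take 1, 2 or 3 sticks (if available)."""
    return [n for n in (1, 2, 3) if n <= sticks]

def random_playout(sticks, my_turn):
    """Finish the game with uniformly random moves; True if 'I' take the last stick."""
    while True:
        sticks -= random.choice(legal_moves(sticks))
        if sticks == 0:
            return my_turn              # whoever just moved took the last stick
        my_turn = not my_turn

def choose_move(sticks, playouts=2000):
    """The only 'strategy' is statistics over many random experiments."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(sticks):
        remaining = sticks - move
        if remaining == 0:
            rate = 1.0                  # taking the last stick wins outright
        else:
            wins = sum(random_playout(remaining, my_turn=False) for _ in range(playouts))
            rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move, best_rate

print(choose_move(15))  # tends towards taking 3, i.e. leaving a multiple of 4
```

All the 'understanding' (the rules, the goal, the search procedure) is supplied by the programmer; the program itself only counts wins.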
 
  • Like
Likes GTOM
  • #62
Filip Larsen said:
As an engineer participating in the construction of this technology of tomorrow, I am already scratching my head, as you say, worrying that my peers and I should be more concerned about negative consequences for the global system as well as for the local system each of us is currently building.

At the risk of repeating myself, the fact that you are saying this is reassuring. You are certainly not the only one in 7 billion to think that way.

Fear is good, it is when it turns to panic that everything goes bad. From my point of view, more bad things have come from panicked people than from the thing that was initially feared.

A question like «Destined to build a super AI that will destroy us?» seems to be more on the panic side of things; that's why I prefer a toned-down discussion about it.

Filip Larsen said:
I was hoping for a more technical discussion on what options we have

I would like such thread and it would probably be more constructive.
 
  • #63
Filip Larsen said:
To establish a risk you commonly establish likelihood and significant cost. Based on how things have developed over the last decade or two, combined with the golden promise of AI, I find both high likelihood and severe cost. Below I have tried to list what I observe:
Thank you for the time spent establishing this list. But here, I'll put the emphasis on promises. None of what you said is irrational, except that speaking about fantasies is not science. It is science fiction. I don't mean that as a derogatory term; I like doing it as well as any other geek out there.
To cut the loop (we are indeed running in circles): I have already accepted (for the sake of the argument) that we know what an AI is and what it does. Let's call it "Gandalf"; it does "magic" (any kind of magic, it is unspecified (and un-specifiable by definition)). But then we also have to agree that an AI is not something that wants to kill you.

Filip Larsen said:
There is a large drive to increase the complexity of our technology so we can solve our problems cheaper, faster and with more features.
That's incorrect. The complexity of things is a great hindrance to their usage, because for users, complex equals complicated and unreliable. When a fridge requires you to enter a password before opening, nobody will find that acceptable (especially when an error 404 occurs). Yet users will oblige, because someone has sold it with the mythical word "progress/revolution".
So no engineer in his right mind would want to increase complexity. And yet ... we do.
We do that because we want our solutions to be more expensive and less reliable. That's how actual business works, and that's the reason your phone costs an order of magnitude more than 20 years ago, and why its lifespan has shrunk to such a ridiculously small number. I am not going to factor in the usage cost, nor the quality of the communication (even calling a friend 2 blocks away sometimes sounds like he/she is on another continent).
The thing we actually increase is profit (even that's not possible; scientifically minded people know this quantity is zero on average). You cannot profit from something efficient and cheap ... by definition.
So I agree with you that there are "drives" to make things worse. More or less everybody on this thread agrees with that, except that half would recoil (<- understatement) at the idea of calling it "getting worse", because we have literally been brainwashed into calling it "progress".
OK, why not? But then, even this kind of "progress" has limits.

There is a second split between opinions on the matter: is it good or bad? (in an "ethical" sense). As if there were some kind of bible that Harris was referring to that could sort this out. There is none. Things happen, that's the gist of it. I don't have to fear anything. Nobody has to. We can simply assess the situation, choosing some frame of reference (I fear that excludes a "species" opinion; only individuals have opinions), and have total chaos.
Nature will sort it out. It already does. Humanity is already doomed, except that "it" will probably survive (and be changed); thus, what is the problem exactly?

You are concerned that we may lose control. I totally agree, because we lost control a while ago, maybe when "we" tamed fire, or most probably when we invented sedentary life (the occidental meme). But this statement of mine is informed by a particular subset of scientific observations.
I can just as well play devil's advocate and change gears (frame of reference), and pretend we are doing fine and being very wise and efficient (because really, being able to "text" while chasing Pokémon, while riding a bike, while listening to Justin Bieber, is efficient ... right?)

Filip Larsen said:
There is a large shift in the acceptance of continuous change, both by consumers and by "management". Change is used both to fix unintended consequences in our technology (consumer computers today require continuous updates) and to improve functionality in a changing environment. The change cycle is often seen sending ripples out through our interconnected technology, creating the need for even more fixes.
I agree, but none of that is life-threatening, or risky. It is business as usual. What would be life-threatening for most people (because of this acceptance thing) is to just stop doing it. Just try selling someone a car "for life". A cheap one, a reliable one. And observe the reaction...
Now, if you could make infinite changes in a sustainable way, is there still a problem? That an AI would predict everything for you and condemn you to infinite boredom? Don't you actually think that a genuine singularity/AI would understand that and leave us alone ... playing with our "toys"?

Filip Larsen said:
All these observations are made with the AI technology we have up until now. If I combine the above observations with the golden promise of AI, I only see that we are driving even faster towards more complexity, faster changes, and more acceptance that our technology is working-as-intended.
Technology NEVER works as intended. From knives to the atomic model, you'll always have people not using them "correctly".
AI doesn't exist; an Alexa (excellent video!) is a glorified parrot (albeit much less smart).
I am not concerned by a program. Programs don't exist in reality; they run inside memory. If some "decide" to shut down the grid (it probably happens all the time already), this is not a problem.
We could learn a lot about living off the grid, especially for our medical care. This tendency is already going up.

Filip Larsen said:
Especially the ever increasing features-first-fix-later approach everyone seems to converge on appears to me, as a software engineer with knowledge about non-linear system and chaotic behavior, as a recipe for, well, chaos (i.e. that our systems are exhibiting chaotic behavior).
I have the very same feeling. Except I also know that the more expensive a service is, the more dispensable it is. I am paid way too much to play "Russian roulette" with user data. But none of that is harmful. The ones that are will be cleansed by evolution (as usual).

Filip Larsen said:
Without AI or similar technology we would simply at some point have to give up adding more complexity because it would be obvious to all that we were unable to control our systems, or we would at least have to apply change at a much slower pace allowing us to distill some of the complexity into simpler subsystems before continuing.
Not going to happen. We will continue to chase our tails by introducing layer of complexity upon layer of complexity. That's how every business works. Computer science is no different; it may even be the most stubborn in indulging in that "nonsense".
AI would be a solution for getting rid of "computer scientists", and that's one of the many reasons it will never be allowed to come into existence.

Filip Larsen said:
So, from all this I then question how we are going to continuously manage the risk of such an interconnected system, and who is going to do it?
By relinquishing the illusion of control. By not listening to our guts.
Listen, in the US (as far as I know), it is not even possible to "manage the risk" of some categories of tools (let's say of the gun-like variety).
I would say that on my fear list, my PS4 is on the very last line. My cat is way above it (I'll reconsider once my PS4 can open the fridge and eat my life-sustaining protein :wink:)

Filip Larsen said:
So, I guess that if you expect constant evaluation of the technology for nuclear reactors you will also expect this constant evaluation for other technology that has potential to harm you?
"Don't panic" comes to mind. As soon as I can, I'll get rid of nuclear reactors. Then maybe of Swiss army knives (which are probably more lethal). Then cats! Those treacherous little ba$tards!

Filip Larsen said:
What if you are in doubt about whether some other new technology (say, a new material) is harmful or not? Would you rather chance it or would you rather be safe? Do you make a deliberate, careful choice in this or do you choose by "gut feeling"?
I do as you do. I evaluate and push one way or another. Individually. I'll establish my priorities. And I'll start by denouncing any fear-mongering professional like Harris, who occupies a stage he has no right to be using (by debunking his arguments).
There is plenty of harmful technology; none of it is virtual/electronic. Geneticists working for horrible people with horrible intentions (yes Monsanto, I am looking at you) are building things so dangerous (and that we can easily qualify as singularity-compliant) that even Los Alamos will pass for a sympathetic picnic.
People like me are building programs that serve as "arms" for banks and finance. They destroy lives for the bottom line.
None of those programs are intelligent; none of them is indispensable. Stopping their use would cost us nothing (it'll cost me my job, but I'll manage).

Filip Larsen said:
This is what I hear you say: "Intelligence is just based on atoms, and if atoms are stupid then more atoms will just be more stupid."
That's not what I meant. Other people here have said that "surviving" is a proof of intelligence. By that account, viruses are the smartest. They'll outlive us all.
I meant there is no correlation between quantity and quality. You and I are also aware that "more chips per die" is not synonymous with more speed/power.
There are thousands of solutions to occupy niches, and none of them is better than the others.

Filip Larsen said:
Flops is just a measure of computing speed. What I express is that the total capacity for computing speed is rising rapidly, both in the supercomputer segment (mostly driven by research needs) and in the cloud segment (mostly driven by commercial needs).
I know that as a civilization we are addicted to speed. But those numbers are totally misleading; the reality about speed is here.
Clock speed topped out around 3 GHz ten years ago. Drive speed too, even if the advent of SSDs has boosted things a little.
Nothing grows forever. Nothing ever grows for more than a few years in a row. That's math.

Filip Larsen said:
It is not unreasonable to expect that some level of general intelligence (the ability to adapt and solve new problems) requires a certain amount of calculation (very likely not a linear relationship). And for "real-time intelligence" this level of computation corresponds to a level of computing speed.
I accept this premise (although I may try to convince you otherwise in another thread).
What I (and many other people on this thread) don't accept as a premise is that it is a risk. In fact, it would be the first time in human history that we invent something intelligent. Why on Earth should I be worried?
What is false is the "ever growing" part. What is doubly false is that computers will "upgrade themselves". Harris doesn't know that, and it is baseless.

Filip Larsen said:
Also, more use of dedicated renewable energy sources will allow datacenters to increase their electrical consumption with less dependency on a regulated energy sector. In all, there is no indication that the global computing capacity will not continue to increase significantly in the near future.
Actually, energy will soon become a matter of national interest, and we will first dispense with all these wasteful terawatt cat-centers.
All the alarm bells are ringing and all the lights are blinking red; that's more or less game over already.
Intelligence is not risky. Continuing to believe in the mythological growth meme is.
 
  • Like
Likes jack action
  • #65
Concrete Problems in AI Safety (https://arxiv.org/abs/1606.06565) is an interesting read, and illustrates well how numerous and non-obvious the issues of employing AI are, even when only considering problems of current practical research. Anyone having trouble imagining what could possibly go wrong even with the "entry-level" AI of today might be enlightened by a read-through.

Abstract:
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.

The conclusion is also very relevant, I think:
With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm.
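
To make the "reward hacking" category concrete, here is a minimal toy sketch of my own (not taken from the paper): an agent rewarded per unit of mess it removes, with no penalty for creating mess, learns that manufacturing new mess to clean is worth more than honest cleaning. All names and numbers below are invented purely for illustration.

```python
# Toy illustration of reward hacking: the objective ("mess removed per step")
# omits a penalty for creating mess, so the "hacking" policy outscores the
# honest one while doing exactly what it was rewarded for.

def run_episode(policy, steps=100, initial_mess=10):
    mess, reward = initial_mess, 0
    for _ in range(steps):
        action = policy(mess)
        if action == "clean" and mess > 0:
            mess -= 1
            reward += 1          # reward = one unit of mess removed
        elif action == "make_mess":
            mess += 1            # the objective forgot to penalise this
    return reward

honest = lambda mess: "clean"
hacker = lambda mess: "clean" if mess > 0 else "make_mess"

print(run_episode(honest))  # 10: cleans the initial mess, then earns nothing
print(run_episode(hacker))  # 55: keeps creating and cleaning its own mess
```

The other categories (side effects, scalable supervision, safe exploration, distributional shift) can be illustrated with equally small toy setups; the point is that the failure comes from the specification, not from any malice in the agent.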
 
  • Like
Likes Charles Kottler
  • #66
Filip Larsen said:
Concrete Problems in AI Safety (https://arxiv.org/abs/1606.06565) is an interesting read, and illustrates well how numerous and non-obvious the issues of employing AI are even when only considering problems of current practical research.
This is a good read, but I fail to see the novelty. I read about those concerns years ago, and they are no longer in the "research" category. There are plenty of bots that already learn and already affect your life. The apps/bots on your phone that react to traffic jams have beyond-human sensory abilities (real-time global sensors) and beyond-human memory (collectively stored in the cloud). Those bots make mistakes all the time, and they learn that there is not always one optimal path, that dominating the road is not the issue, and that actual collaboration gives better results than sending everybody onto the same "shortcut" jam.
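
As a rough, hypothetical illustration of that last point (the latency function and all numbers below are invented, not taken from any real routing app): once travel time on a road grows with its load, herding everyone onto the "shortcut" is worse than splitting the traffic.

```python
# Toy congestion model: travel time grows once a road's load exceeds capacity.
def travel_time(load, free_flow, capacity):
    return free_flow * (1.0 + max(0.0, load - capacity) / capacity)

def average_time(cars_on_shortcut, total_cars=200):
    on_main = total_cars - cars_on_shortcut
    t_short = travel_time(cars_on_shortcut, free_flow=8.0, capacity=50.0)
    t_main = travel_time(on_main, free_flow=10.0, capacity=150.0)
    return (cars_on_shortcut * t_short + on_main * t_main) / total_cars

print(average_time(200))  # greedy: every driver takes the shortcut -> 32.0
print(average_time(60))   # cooperative split -> about 9.9 on average
```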

Filip Larsen said:
Anyone having trouble imagining what possibly could go wrong even with the "entry-level" AI of today might might be enlightened by a read-through.
This paper does not help. Glorified vacuum cleaners aren't risky. There must be statistics somewhere on domestic accidents. I'll bet that death by a good old classic electric vacuum cleaner is already a thing. I'll also bet that a plain old brush has even worse statistics.
I am quite confident that a smart vacuum cleaner will be less harmful, unless you consider that losing the knowledge of how to clean by yourself is itself harmful (I repeat: only a precise frame of reference allows you to evaluate risk, and there are many).
But surely a vacuum cleaner deciding "by itself" to suck up and vaporize all your paperwork at work is a risk? Yes, it is: a lesser risk than putting all your data in a "dumb" cloud, but still a risk. But then, I'll remind you of the subject of this thread: "an AI that will destroy us". Do you see the discrepancy?

We all agree that "new things", that is, "human inventions", have a downside. Mutating viruses do too. Is intelligence threatening? No way. Stupidity is threatening.
One of the references, [27], is more than dubious. It is not a scientific publication; it is a "blockbuster" best seller whose reception is quite telling. And Harris pushes the same fear wagon, recycling those hypotheses/fictions without even bothering to justify them. The premise of https://www.amazon.com/dp/1501227742/?tag=pfamazon01-20 is just factually wrong:
premise said:
As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence
Gorillas do not depend on humans any more than humans depend on viruses, fossil fuels, nuclear bombs, or cats. Nuisance is not correlated with intelligence.

A more scientific reference ([167]) is quite interesting. I'll quote two small passages.
MACHINE INTELLIGENCE RESEARCH INSTITUTE said:
Introduction, first sentence: "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it"
In the conclusion: "Nature is, not cruel, but indifferent; a neutrality which often seems indistinguishable from outright hostility."

But I can actually play doomsday as well as anyone here. We are already well past the point where such risks are taken with all our mighty tools. Let's focus on the (un)likely AI ones:
- Machines autonomously do high-frequency trading at ridiculously enormous volumes per millisecond. They could theoretically vaporize billions of monetary signs. So what? Nobody is actually harmed, and the real things are still there in stock and work as usual.
- Machines decide to virtually nuke all electronic knowledge (besides themselves, or including themselves). They have cunningly waited until every paper version of that knowledge has long been burned or recycled into an iPhone3.14159... Format /Internet /ALL
So what? Everybody is harmed (besides gorillas)... or are we? Didn't the machines realize the human race has become totally enslaved to them? Isn't a good slap in the face unavoidable at some point or another?

But still, some people think that tools that have been built to nuke the entire planet are less dangerous than tools built to win Jeopardy?
 
Last edited by a moderator:
  • #67
Which side should I join? The AI defenders or the attackers?
*let me build AI software to tell me the best choice*
But on the point at hand: I never listen to people who cry doomsday and destruction... Everything that is built by humans is built to serve humans. Even if things do get complicated, it doesn't mean that a human can't handle them...
After all, machines don't have emotions... and so there is no sense of ambition in them... Their goals are pre-set by humans, and the machine is just learning the optimal way to reach them (something that might take a lot of time, effort and thinking for a human being).
 
  • #68
Well to bring it all home, I have heard very smart computer scientists (both those working directly in machine learning applications and outside of that domain) argue heatedly both ways--that a "singularity" is ridiculous and that AI will never measure up to something capable of bringing a "singularity" about (or anything more extraordinary than the pace of innovation we currently enjoy) and vice versa, that an AI could do those things. The problem is, no one really knows exactly how intelligence or imagination works, so we're all just essentially speculating on what the capabilities of an AI would truly be (though watching some of DeepMind's work with those Atari video games was very interesting and somewhat goosebump-inducing as the AI quickly reached "superhuman" levels of performance).

The horizon of possible intelligence or imagination in a thinking agent is unknown, so maybe an AI could only ever do things on a much expanded time-scale at human-level intelligence (were a general AI ever developed--and I don't see why that in itself would be ultimately impossible), or maybe it would somehow--and at some point--expand incomprehensibly, go beyond the current horizon of human intelligence/imagination (as Harris seems to believe is possible), and do things like derive some of the deep laws of physics from simple observations of its environment and go from there. Again, it's all just speculation at this early time.

Still, I think it's a problem worth thinking about--governments and companies working towards a fully autonomous, generalized intelligent agent should definitely not be irresponsible and leave things to chance just because [insert argument that introduction and operation of AGI won't potentially dramatically harm the human race in some way for reason X]. It's like climate change: even if the Earth's climate is generally robust, I think it is better for me to hedge my bets and try to do as little harm as possible given the data climate scientists have been delivering. Sure, I could disbelieve them all I want (like a lot of people weirdly do), but if there is even a sliver of truth to the potential for what adverse climate change may cause for the human race, then it would be irresponsible of me (and suicidal in the dumbest way, or maybe just homicidal, without ill intent, towards my grandchildren) to act in a way that further damages the climate. Not that I could personally damage or help the climate in any way, but say I developed a gas that accelerated the accumulation of greenhouse gases in a significant way and found a commercial application for this gas--knowing what I know, I as the developer would be at fault in that circumstance if I continued trying to push that gas commercially; any benefit from the gas would have to outweigh potential extinction or irreversible impact on the climate.
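
To put numbers on that hedging intuition (all figures below are made up purely for illustration; nobody knows the real probabilities or costs), the expected-cost comparison looks something like this:

```python
# Back-of-the-envelope precaution argument with purely hypothetical numbers:
# a small chance of a huge loss can still dominate the cost of being careful.

p_catastrophe = 0.01       # assumed probability the worriers are right
cost_catastrophe = 1e6     # assumed cost (arbitrary units) if they are
cost_precaution = 100      # assumed cost of taking safety work seriously

expected_cost_if_we_ignore = p_catastrophe * cost_catastrophe  # 10000.0
expected_cost_if_we_hedge = cost_precaution                    # 100

print(expected_cost_if_we_ignore, expected_cost_if_we_hedge)
```

Of course the whole argument hinges on the assumed probability not being effectively zero, which is exactly what the two camps disagree about.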

If there are enough smart experts (forget Sam Harris) in the field of AI whom I know to be rational and responsible and who have serious concerns about the development of AI for various reasons (some great reasons were already mentioned earlier in the thread, particularly by @Filip Larsen), then even if there is a group of equally smart experts who disagree, it would be in my best interest to hedge my bets and take the possibility of harm seriously. That's not to say that I won't try to discern the answer for myself, but I think it's best to be careful even if I feel otherwise confident that things will be fine.

In this case, Sam Harris's explanation for why people should be worried about AI seems fairly straightforward--if you accept strong AI as something that is possible within the laws of physics (and you should because humans exist), and that humans will at some point create one (which of course may not happen), and that it will be unhampered by the limitations of biology like slow thinking, poor fidelity of memory and a lack of an ability to become a high-level expert in multiple hard scientific fields and sub-fields, then yes, in my opinion that seems like something to take seriously and approach with all due caution (regarding things like control and value alignment, it's something that I agree should be worked out to some extent before the so called AGI is "turned on").

To reiterate though, most of the talk on such fantastical things is quite speculative, and that's why arguments like those in this thread always end up taking place--no one objectively knows what will happen, but everyone thinks they do.

 
Last edited by a moderator:
  • #69
AaronK said:
but say I developed a gas that accelerated the accumulation of greenhouse gases in a significant way and found a commercial application for this gas--knowing what I know, I as the developer would be at fault in that circumstance if I continued trying to push that gas commercially; any benefit from the gas would have to outweigh potential extinction or irreversible impact on the climate.

But that is not the problem at stake, quite the opposite.

What if you thought of a way to create a gas that could have many commercial applications, but you're still not sure how to do it? You also have no clue whether it would have any impact on the accumulation of greenhouse gases, but some say it might. Again, the gas doesn't exist, so nobody knows.

Would you prefer not taking any chances and stop the research on how to produce that gas? Or would you do the research and see where it goes?
 
  • #70
This whole discussion reminds me of an old theological piece of sophistry: "Can God create an object so heavy that He cannot lift it?".
 
Last edited:
