Filip Larsen said:
1) I have already enumerated several of those risks in this thread, like humanity volunteering most control of their life to "magic" technology.
But you still haven't provided us with any clues as to why that is a risk. As far as normal people are concerned (those not having an intimate relationship with Maxwell's equations or quantum field theory, which is 99.99999% of humanity, including me), a simple telephone is "magic". A cell phone even more so: there isn't even a cable!
If (and that is a big "if", not supported by science in any way whatsoever) a super AI pops into existence and, as far as we are concerned, we call it Gandalf because it does "magic", what is the risk? Please explain. What is good, what is not? Who dies, who does not?
Filip Larsen said:
As long as these risks are not ruled out as "high cost" risks, I do not really have to enumerate more to illustrate that there are "high cost" risks.
But there is no risk. I mean, not because AI doesn't exist, nor because progress is not an exponential quantity. The reason there is no risk is that you have NOT explained any plausible risk.
"Politics" is nowadays where we "surrender" most of our decision making. Is it good, is it bad? What "risk" is there? What do we gain, what do we lose?
All of this has been explored in so many different ways in so many fantasy books (Asimov comes to mind). None of it is science. That does not mean it is not interesting. The more "intelligent" of those novels are not black and white.
Filip Larsen said:
But please feel free to show how you would make risk mitigation or reduction of each of them because I am not able to find ways to eliminate those risks.
I am not sure what "risk mitigation" means. But as a computer "scientist", I know that computers aren't there to harm us; most often it is the other way around (we give them bugs and viruses, and force them to do stupid things like playing chess or displaying cats in high definition).
Filip Larsen said:
2) Requiring everyone else to prove that your new technology is dangerous, instead of requiring you to prove it's safe, is no longer a valid strategy for a company.
I cannot even begin to follow you. Am I forced to buy some of your insurance and build an underground bunker because someone on the internet is claiming that doom is coming? I don't mean real doom, like climate change, but some AI gone berserk? Are you kidding me?
Filip Larsen said:
Using my example from earlier, you can compare my concern with the concern you would have for your safety if someone planned to build a fusion reactor very near your home, yet they claim that you have to prove that their design will be dangerous.
That's a non sequitur. A fusion reactor may kill me; we know very precisely how, with some real probability attached.
I then weigh that risk against the benefit I get from it. That's what I call intelligence: balance, and constant evaluation.
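Just to make the kind of balance I mean concrete, here is a minimal sketch in Python with purely made-up numbers (the probability, cost and benefit are all hypothetical, taken from no real reactor data whatsoever):

```python
# Minimal sketch of weighing a risk against a benefit, with made-up numbers:
# expected loss = probability of the bad event * cost if it happens,
# compared against the benefit the technology provides over the same period.

p_failure = 1e-6          # hypothetical yearly probability of a serious accident
cost_of_failure = 1e9     # hypothetical cost of that accident (arbitrary units)
yearly_benefit = 1e6      # hypothetical value of the energy produced (same units)

expected_yearly_loss = p_failure * cost_of_failure

if yearly_benefit > expected_yearly_loss:
    print(f"Accept: benefit {yearly_benefit:.0f} outweighs expected loss {expected_yearly_loss:.0f}")
else:
    print(f"Reject: expected loss {expected_yearly_loss:.0f} outweighs benefit {yearly_benefit:.0f}")
```

The point is that this arithmetic is only possible once someone states a concrete failure mode with a probability and a cost attached, which is exactly what is missing for the "AI gone berserk" scenario.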
Filip Larsen said:
Energy consumption so far seems to set a limit on how localized an AI with human-sized intelligence can be, due to the current estimate of how many PFlops it would take on conventional computers.
First, Flops are not intelligence. If stupid programs run on a computer, then more Flops will just lead to more stupidity.
Secondly, neither Flops nor computer design are ever-increasing quantities. We are still recycling 1970s tech, because it is still just about move, store and add, sorry.
Filip Larsen said:
You can simply calculate how many computers it would take and how much power, and conclude that any exponential growth in intelligence would hit the ceiling very fast. However, two observations seem to indicate that this is only a "soft" limit and that the ceiling may in fact be much higher.
Indeed. But again, those limits are not soft at all, as far as Planck is concerned. And again, quantity and quality are not the same thing.
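To put rough numbers on the calculation you mention (every figure here is a ballpark assumption, not a measurement): if we take the often-quoted guess of roughly 1 EFlops per human-brain-equivalent and something like 20 GFlops per watt for conventional hardware, then exponential growth in "brain-equivalents" runs into gigawatt-scale power bills after only a handful of doublings:

```python
# Back-of-envelope sketch: power needed for N human-brain-equivalents on
# conventional hardware. All numbers are rough assumptions for illustration.

BRAIN_EQUIV_FLOPS = 1e18      # assumed ~1 EFlops per human-brain-equivalent
FLOPS_PER_WATT = 20e9         # assumed ~20 GFlops/W for conventional computers
BRAIN_WATTS = 50              # biological brain, the figure quoted above

for doublings in range(0, 11):
    n = 2 ** doublings
    watts = n * BRAIN_EQUIV_FLOPS / FLOPS_PER_WATT
    print(f"{n:5d} brain-equivalents: {watts / 1e6:10.1f} MW "
          f"(vs {n * BRAIN_WATTS / 1e3:.2f} kW biologically)")
```

Compare the 50 MW that comes out for a single brain-equivalent with the 50 W you quote for a biological brain, and the gap is about six orders of magnitude.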
Filip Larsen said:
Firstly, there already exists "technology" that is much more energy efficient. The human brain only uses around 50W to do what it does, and there is no indication that there should be any problem getting to that level of efficiency in an artificial neural computer either. IBM's TrueNorth chip is already a step down that road.
But that's MY point! A very good article. Actually, geneticists are much closer to building a super brain than IBM is. So what? What are the risks, and where is the exponential "singularity"? Are you saying that such a brain will want to be bigger and bigger until it has absorbed every atom on Earth, then the universe?
I am sorry, but I would like to know what scientific basis this prediction rests on. The only things that do that (by accident, because any program CAN go berserk) are called cancers. They kill their host. We are not hosting computers. Computers are hosting programs.
Filip Larsen said:
Secondly, there is plenty of room to scale out in. Currently our computing infrastructure is increasing at an incredible speed, making processing of ever-increasing data sets cheaper and faster and putting access to EFlops and beyond on the near horizon.
That's just false. Power consumption of data centers is already an issue. And intelligence-wise, those data centers have an IQ below that of a snail.
You could also add up 3 billion average "analog" people like me, and it would still not get us anywhere close to Einstein's intelligence.
Filip Larsen said:
If you don't believe you can add any more meaningful information or thoughts, then sure. But I would still like to discuss technical arguments with those that still care about this issue.
Oh, but I agree; the problem is that I would like the arguments to be rooted in science, not in fantasy (not that you do that, but Sam Harris does, and this thread is a perfect place to debunk them).
We seem to agree that computing power (which is not correlated with intelligence at all) is limited by physics.
That is a start. No singularity anywhere soon.