Filip Larsen said:
To establish a risk you commonly establish likelihood and significant cost. Based on how things have developed over the last decade or two, combined with the golden promise of AI, I find both high likelihood and severe costs. Below I have tried to list what I observe:
Thank you for taking the time to put this list together. But here I'll put the emphasis on promises. None of what you said is irrational, except that speaking about fantasies is not science; it is science fiction. I don't mean that as a derogatory term. I like doing it as much as any other geek out there.
To break the loop (we are indeed running in circles): I have already accepted (for the sake of the argument) that we know what an AI is and what it does. Let's call it "Gandalf"; it does "magic" (any kind of magic, unspecified, and un-specifiable by definition). But then we also have to agree that an AI is not something that wants to kill you.
Filip Larsen said:
There is a large drive to increase the complexity of our technology so we can solve our problems cheaper, faster and with more features.
That's incorrect. The complexity of things is a great hindrance to their usage, because for the user, complex equals complicated and unreliable. When a fridge requires you to enter a password before opening, nobody will find that acceptable (especially when an error 404 occurs). Yet users will oblige, because someone has sold it to them using the mythical words "progress" and "revolution".
So no engineer in his right mind would want to increase complexity. And yet ... we do.
We do that because we want our solutions to be more expensive and less reliable. That's how actual business works, and that's the reason your phone costs an order of magnitude more than it did 20 years ago, and why its life span has shrunk to such a ridiculously small number. I am not even going to factor in the usage cost, nor the quality of the communication (calling a friend two blocks away sometimes sounds like he or she is on another continent).
The thing we actually increase is profit (and even that's not possible in the long run; scientifically minded people know this quantity is zero on average). You cannot profit from something efficient and cheap ... by definition.
So I agree with you that there is a "drive" to make things worse. More or less everybody on this thread agrees with that, except that half would recoil (<- understatement) at the idea of calling it "worse", because we have literally been brainwashed into calling it "progress".
OK, why not? But then, even this kind of "progress" has limits.
There is a second split between opinions on the matter: is it good or bad (in an "ethical" sense)? As if there were some kind of bible that Harris was referring to that could sort this out. There is none. Things happen; that's the gist of it. I don't have to fear anything. Nobody has to. We can simply assess the situation, choosing some frame of reference (I'm afraid that excludes the "species" opinion; only individuals have opinions), and have total chaos.
Nature will sort it out. It already does. Humanity is already doomed, except that "it" will probably survive (and change), so what is the problem, exactly?
You are concerned that we may lose control. I totally agree, because we lost control a while ago, maybe when "we" tamed fire, or more probably when we invented sedentary living (the Western meme). But this statement of mine is informed by a particular subset of scientific observations.
I could just as well play devil's advocate and change gears (frames of reference), and pretend we are doing fine and being very wise and efficient (because really, being able to "text" while chasing Pokémon, while riding a bike, while listening to Justin Bieber, is efficient ... right?).
Filip Larsen said:
There is a large shift in acceptance of continuous change, both by the consumer and by "management". Change is used both to fix unintended consequences in our technology (consumer computers today require continuous updates) and to improve functionality in a changing environment. The change cycle is often seen sending ripples out through our interconnected technology, creating the need for even more fixes.
I agree, but none of that is life-threatening, or risky. It is business as usual. What would be life-threatening for most people (because of this acceptance thing) is to just stop doing it. Just try selling someone a car "for life". A cheap one, a reliable one. And observe the reaction...
Now, if you could make infinite changes in a sustainable way, would there still be a problem? That an AI would predict everything for you and condemn you to infinite boredom? Don't you actually think that a genuine singularity/AI would understand that and leave us alone ... playing with our "toys"?
Filip Larsen said:
All these observations are made with the AI technology we have up until now. If I combine the above observations with the golden promise of AI, I only see that we are driving even faster towards more complexity, faster changes, and more acceptance that our technology is working-as-intended.
Technology NEVER works as intended. From knives to atomic models, you'll always have people not using them "correctly".
AI doesn't exist; Alexa (excellent video!) is a glorified parrot (albeit much less smart).
I am not concerned about a program. Programs don't exist in reality; they run inside memory. If some "decide" to shut down the grid (it probably happens all the time already), this is not a problem.
We could learn a lot about living off the grid, especially for our medical care. This tendency is already on the rise.
Filip Larsen said:
Especially the ever-increasing features-first-fix-later approach everyone seems to converge on appears to me, as a software engineer with knowledge about non-linear systems and chaotic behavior, as a recipe for, well, chaos (i.e. that our systems are exhibiting chaotic behavior).
I have the very same feeling. Except I also know that the more expensive a service is, the more dispensable it is. I am paid way too much to play "Russian roulette" with user data. But none of that is harmful. The ones that are will be cleansed by evolution (as usual).
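Your "recipe for chaos" point can actually be made concrete with a toy model. Here is a minimal sketch (my illustration, not yours; the logistic map with r = 4 is just the classic textbook example of a chaotic non-linear system, not a model of any real software) showing how two almost identical initial states diverge completely:

Code:
# Minimal sketch: sensitivity to initial conditions in the logistic map,
# x_{n+1} = r * x * (1 - x). Illustrative only -- a textbook toy model,
# not a model of any real software system.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2000001  # two nearly identical starting states
for n in range(1, 41):
    a, b = logistic(a), logistic(b)
    if n % 10 == 0:
        print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.6f}")

After a few dozen iterations the two trajectories are completely uncorrelated, which is exactly the sense in which "fix-later" ripples through an interconnected system become unpredictable.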
Filip Larsen said:
Without AI or similar technology we would simply at some point have to give up adding more complexity because it would be obvious to all that we were unable to control our systems, or we would at least have to apply change at a much slower pace allowing us to distill some of the complexity into simpler subsystems before continuing.
Not going to happen. We will continue to chase our tails by introducing layer of complexity upon layer of complexity. That's how every business works. Computer science is no different; it may even be the most stubborn in indulging in that "nonsense".
AI would be a way to get rid of "computer scientists", and that's one of the many reasons it will never be allowed to come into existence.
Filip Larsen said:
So, from all this I then question how we are going to continuously manage the risk of such an interconnected system, and who is going to do it?
By relinquishing the illusion of control. By not listening to our guts.
Listen, in the US (as far as I know), it is not even possible to "manage the risk" of some categories of tools (let's say those of a gun-like complexion).
I would say that on my fear list, my PS4 is on the very last line. My cat is way above it (I'll reconsider once my PS4 can open the fridge and eat my life-sustaining protein).
Filip Larsen said:
So, I guess that if you expect constant evaluation of the technology for nuclear reactors, you will also expect this constant evaluation for other technology that has the potential to harm you?
"Don't panic" comes to mind. As soon as I can, I'll get rid of nuclear reactors. Then maybe of Swiss Army knives (which are probably more lethal). Then cats! Those treacherous little ba$tards!
Filip Larsen said:
What if you are in doubt about whether some other new technology (say, a new material) is harmful or not? Would you rather chance it, or would you rather be safe? Do you make a deliberate, careful choice in this, or do you choose by "gut feeling"?
I do as you do. I evaluate and push one way or another. Individually. I'll establish my priorities. And I'll start by denouncing any professional fearmonger like Harris, who occupies a stage he has no right to be using (by debunking his arguments).
There is plenty of harmful technology, and none of it is virtual/electronic. Geneticists working for horrible people with horrible intentions (yes, Monsanto, I am looking at you) are building things so dangerous (and so easily qualified as singularity-compliant) that even Los Alamos will pass for a friendly picnic by comparison.
People like me are building programs that serve as "weapons" for banks and finance. They destroy lives for the bottom line.
None of those programs are intelligent, and none of them is indispensable. Stopping using them would cost us nothing (it'll cost me my job, but I'll manage).
Filip Larsen said:
This is what I hear you say: "Intelligence is just based on atoms, and if atoms are stupid then more atoms will just be more stupid."
That's not what I meant. Other people here have said that "surviving" is a proof of intelligence. By that account, viruses are the smartest; they'll outlive us all.
I meant that there is no correlation between quantity and quality. You and I are also aware that "more chips per die" is not synonymous with more speed/power.
There are thousands of solutions occupying niches, and none of them is better than the others.
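To make the "more chips per die" point concrete: Amdahl's law is one standard way to formalize it (my choice of illustration, not something from this thread). If only a fraction p of a workload can run in parallel, throwing cores at it saturates quickly. A minimal sketch, with p = 0.9 as an arbitrary illustration value:

Code:
# Minimal sketch of Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work and n the core count.
# p = 0.9 is an arbitrary illustration value, not a measurement.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.9, n):5.2f}x speedup")

Even with 1024 cores the speedup stays below 1 / (1 - 0.9) = 10x; the quantity saturates, and the quality doesn't follow.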
Filip Larsen said:
Flops is just a measure of computing speed. What I am expressing is that the total capacity for computing speed is rising rapidly, both in the supercomputer segment (mostly driven by research needs) and in the cloud segment (mostly driven by commercial needs).
I know that as a civilization we are addicted to speed. But those numbers are totally misleading; the reality about speed is here.
Clock speed topped out at around 3 GHz ten years ago. Drive speed too, even if the advent of SSDs has boosted things a little.
Nothing grows forever. Nothing ever grew for more than a few years in a row. That's math.
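"Nothing grows forever" is essentially the difference between an exponential curve and a logistic (S-shaped) one. Here is a minimal sketch of that contrast (the logistic model is one standard way to express saturation; r, K and x0 below are arbitrary illustration values, not data):

Code:
# Minimal sketch: exponential growth vs. logistic (saturating) growth.
# Logistic model: x(t) = K / (1 + (K/x0 - 1) * exp(-r*t)), which solves
# dx/dt = r*x*(1 - x/K). r, K, x0 are arbitrary illustration values.
import math

r, K, x0 = 0.5, 100.0, 1.0
for t in range(0, 25, 4):
    exponential = x0 * math.exp(r * t)
    logistic = K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:10.1f}  logistic={logistic:6.1f}")

The exponential curve explodes; the logistic one flattens out near the ceiling K. Real growth curves end up looking like the second kind.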
Filip Larsen said:
It is not unreasonable to expect that some level of general intelligence (the ability to adapt and solve new problems) requires a certain amount of calculation (very likely not a linear relationship). And for "real-time intelligence" this level of computation corresponds to a level of computing speed.
I accept this premise (although I may try to convince you otherwise in another thread).
What I (and many other people on this thread) don't accept as a premise is that it is a risk. In fact, it would be the first time in human history that we invented something intelligent. Why on Earth should I be worried?
What is false is the "ever-growing" part. What is doubly false is that computers will "upgrade themselves". Harris doesn't know that, and this is baseless.
Filip Larsen said:
Also, more use of dedicated renewable energy sources will allow datacenters to increase their electrical consumption with less dependency on a regulated energy sector. In all, there is no indication that the global computing capacity will not continue to increase significantly in the near future.
Actually, energy will soon become a matter of national interest, and the first things we will dispense with are all these wasteful terawatt cat-centers.
All the alarm bells are ringing and all the lights are blinking red; it's more or less game over already.
Intelligence is not risky. Continuing to believe in the mythological "ever-increasing" meme is.