Filip Larsen
Gold Member
There are a lot of different possible scenarios that lead to conditions most would agree should be avoided at all costs. All of those scenarios depend on a loss of control over some period of time, but, contrary to what several in this thread seem to argue, there is no clear point along the path to the bad scenario where we actually choose to stop. For instance, if two armed superpowers compete in an AI race and both reach ASI level, then it is almost a given that they will be forced to apply ASI to their military capabilities in order not to lose out. The argument is that at any given point there is a large probability that those in control will want to continue, because for them the path still leads towards a benefit ("not losing the war"), and the worst-case scenario (where no human is really in control) is still believed to be theoretical or preventable further down the path. Note that the human decision mechanisms in such scenarios are (as far as I can see) almost identical to the mechanisms that led to the nuclear arms race, so we can take it as a historical fact that humans are in all likelihood prone to choose "insane" paths when the conditions are right (this is meant to address the counter-argument that "clearly no one will be so insane as to give AI military control, so therefore there can be no AI military doomsday scenario"). But this is just one type of scenario.
As could be expected from previous discussions, this thread goes in a lot of different directions and often gets hung up on some very small detail whose relevance is hard to judge, or sticks to one very specific scenario while ignoring others. Regarding scenarios with a severely bad outcome for the majority of humans, they all (as far as I am aware) hinge on 1) the emergence of scalable ASI and 2) the gradual voluntary loss of control by the majority of humans because ASI simply does everything better. Now 1) may prove to be impossible for some yet unknown reason, but right now we are not aware of any reason why ASI should not be possible at some point in the future, and given the current research effort we cannot expect ASI research to stop by itself (the benefits are simply too alluring for us humans). That leaves 2), the loss of control, or more accurately, the loss of power of the people.
So to avoid anything bad we "just" have to ensure people remain in power. On paper, a simple and sane way to avoid most of the severe scenarios is to do what we already know works fairly well in human world affairs, namely to ensure the majority of humans remain truthfully informed and in enough control that they can, well in advance, move towards blocking off paths to bad scenarios. In practice this may prove more difficult with ASI because of how hard it is to discern, well in advance, paths towards beneficial scenarios from bad ones. And on top of that, addressing my main current concern, some of the select few in current political and technological power are actively working towards eroding the level of power the people have over AI, with the risk that over time the majority will not be able to form any coherent consensus, and even if they do, they may not have any real options for coordinated control or even for opting out themselves (relevant for scenarios where the majority of humans are by that point on universal income and all production is dirt cheap because of ASI).
And to steer a bit towards the thread topic of AI hype, maybe we can all agree that constructive discussion of both the benefits and the potential risks of AI suffers from the high level of hype, much of which hinges on the possibility of ASI. It may thus add to constructive discussion if we separate those cases. For instance, if the invention of ASI is a precondition for a specific scenario (as it is for most of the worst-case scenarios), then arguing against the existence of ASI when discussing such a scenario is not very helpful for anyone. I personally find discussions about whether or not ASI can exist interesting and extremely relevant, but that is a somewhat separate discussion from the potential consequences of ASI.