Filip Larsen
russ_watters said:
It's very much like an earth-sized asteroid in that way. You're of course free to live in fear of it, but it's pointless to try to defend against it.

If we were really talking about asteroids or other background/existing high-risk scenarios where we do not start out being in control, yes, but here we are talking about AI technology, where we humans pace new technology forward presumably with the expectation that we should remain in control of any introduced risks. And in this regard my question (fueled by current world trends) is how we stay in control if one of the risk-increasing factors seems to be loss of effective risk control, i.e. the issue that at all times along the path towards some of the worst-case scenarios, those with the actual power to mitigate risks are never going to mitigate this particular risk, for reasons that seem acceptable to them at that point. I accept that there are people here who, for reasons I will probably never fully get, find such questions silly, irrelevant, or outside what they feel they can constructively participate in, which is all fine, but the question still stands and is to me as relevant as ever.
I have no illusion that a discussion on PF is going to change the world, but I still have the naive hope that we can have a constructive discussion about it. The reason for this, I think, is that I, for one, would really like to hear a good technical argument for why I don't have to worry about the worst-case scenarios, but so far all I have heard are the usual risk brush-off arguments along the lines of "the scenarios are all wrong and will never happen" or "it's too complex to think about, anything can happen, so ignore it until we have a clear and present danger". If people are aware of a scientific/technical reason why a class of scenarios, or even a specific scenario, will not happen, or why the consequences are guaranteed to be much less severe, then I would love to hear it.
