Every time I dip my toes in a PF General Discussions thread, I wind up regretting it.
The bolded part is the flaw - the self-contradiction - in the pessimistic view; the "new machinery" is the new industry. But here's where the Monthly Labor Review (MLR) can provide some needed historical perspective. In the 50th anniversary issue in 1965, Lloyd Ulman included this quote from a former Secretary of Labor: "In the long run, new types of industries have always absorbed the workers displaced by machinery, but of late, we have been developing new machinery at a faster rate than we have been developing new industries… At the same time we must ask ourselves, is automatic machinery, driven by limitless power, going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress?" This was from the Secretary of Labor in 1927, "Puddler" Jim J. Davis.
In short, there has long been a national worry about technology-driven unemployment, and that fear seems to spike either in recessions or in periods of particularly robust innovation. But to date that fear has always been misplaced. And despite the claims of current techno-dystopians, these claims will be wrong going forward. We can rest easy that in 25 years the unemployment and labor force participation rates will be similar to today's. [emphasis mine]
russ_watters said: The only thing actually being proposed is fear.
anorlunda said: Sorry, being an expert who creates software has no relation to being qualified to predict social impacts.
No, it's related to predictions of what AI will be able to do and how much importance it will take in people's lives, even though all of that is speculation at this point. The fear comes from the fact that it seems only negative outcomes (as far as the destruction of the civilized world as we know it) are possible.

gleem said: Referring to the Asilomar Principles?
Although an important problem, I don't think this is related to AI specifically. You can create a simple model with algorithmic bias and solve it with a pencil and a piece of paper. It is more linked to people's ignorance about the limits of mathematical models.

gleem said: Even now AI-related problems are being identified with significant social/legal implications. Algorithmic bias.
https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
jack action said: The problem is that we have so little knowledge about this AI future that no one can either prove or disprove that these negative outcomes can actually happen.

jack action said: Although an important problem, I don't think this is related to AI specifically.
No, those are just generic guiding principles. They have little relevance to the subject of the thread, which is doom-and-gloom predictions and legislation to prevent them. I took engineering ethics in college - that isn't what this is about.
No, why should I do that? Have you read and evaluated the list of the signers of the Asilomar Principles, and in particular the researchers?
1oldman2 said: "We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.
gleem said: And what about those cleaning robots? And all without negative sociological or economic effect. (I think)