russ_watters
Mentor
Jarvis323 said:
I guess we need to define "differently". I argued before that different things are already regulated differently, and AI will bring new things with unique risks. It will also span every sector and include products, services, and more. I doubt any of us has the knowledge to speak broadly about all the forms of regulation that exist (you probably could not read all of those documents in your lifetime), or about all possible applications of AI and their risks.

In the end, I think we are having a sort of philosophical debate like the one in QM. My view is: whatever your interpretation, in the end we need to do the math and get the results, and AI presents us with a lot of new math.

Yeah, I think this framing is pointlessly broad. Nobody would dispute that a new product, or a potentially dangerous feature of an existing product, may need new regulations. They frequently do. I'd call that "more of the same". But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, such as the manufacturer no longer being liable for damage the AI causes. If we ever get there, that might reduce their legal disclaimer to a single line: "This device is sentient, and as such the manufacturer holds no liability for its actions." That would be very Different.