Regulating AI: What's the Definition?

  • Thread starter: anorlunda
  • Tags: AI

Summary
The UK government has proposed new regulations for artificial intelligence that require AI systems to identify a legal person responsible for any issues that arise. This "pro-innovation" framework will be managed by existing regulators rather than a central authority, contrasting with EU plans. A significant point of discussion is the need for a clear definition of AI to effectively enforce these regulations, with some advocating for a broad interpretation. Concerns also arise regarding the transparency and explainability of AI systems, as understanding their decision-making processes can be complex. Ultimately, the regulation aims to ensure accountability and safety in AI deployment while navigating the challenges of legal liability.
  • #91
Jarvis323 said:
I guess we need to define "different" differently. I argued before that different things are already regulated differently, and AI will bring new things with unique risks. It will also span every sector and include products, services, and more. I doubt any of us has the knowledge to speak broadly about all forms of regulation that exist (you probably could not read all of those documents in your lifetime) or about all possible applications of AI and their risks.

In the end, I think we are having a sort of philosophical debate like the one in QM. My view is: whatever your interpretation, in the end we need to do the math and get the results, and AI presents us with a lot of new math.
Yeah, I think this framing is pointlessly broad. Nobody would argue that a new product or potentially dangerous feature of an existing product doesn't potentially need new regulations. They frequently do. I'd call that "more of the same". But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, like the manufacturer is no longer liable for damage the AI creates. If we ever get there, that might reduce their legal disclaimer to a single line: "This device is sentient and as such the manufacturer holds no liability for its actions." That would be very Different.
 
  • #92
russ_watters said:
Yeah, I think this framing is pointlessly broad. Nobody would argue that a new product or potentially dangerous feature of an existing product doesn't potentially need new regulations. They frequently do. I'd call that "more of the same". But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, like the manufacturer is no longer liable for damage the AI creates. If we ever get there, that might reduce their legal disclaimer to a single line: "This device is sentient and as such the manufacturer holds no liability for its actions." That would be very Different.
Breeders/manufacturers aren't liable for dogs biting?
 
  • #93
Bystander said:
Breeders/manufacturers aren't liable for dogs biting?
Well, no, but I don't see the relevance here. Dogs aren't sentient in the sense I think we're discussing.
 
  • #94
russ_watters said:
But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, like the manufacturer is no longer liable for damage the AI creates.
https://www.channelnewsasia.com/commentary/ai-legal-liability-boeing-tesla-uber-car-crash-2828911
I'm not familiar with this site but the story seems to follow along the lines you mention. Thought I'd throw it on the pile.

"Data gathered from 16 crashes raised concerns over the possibility that Tesla’s artificial intelligence (AI) may be programmed to quit when a crash is imminent. This way, the car’s driver, not the manufacturer, would be legally liable at the moment of impact."

 
  • #95
So a car can point itself at a bunch of kids, accelerate to a zillion miles an hour in Crazy Mode (or whatever it is called now), and a microsecond before impact disengage and tell the driver "You handle this!"
 
  • #96
Oldman too said:
https://www.channelnewsasia.com/commentary/ai-legal-liability-boeing-tesla-uber-car-crash-2828911
I'm not familiar with this site but the story seems to follow along the lines you mention. Thought I'd throw it on the pile.

"Data gathered from 16 crashes raised concerns over the possibility that Tesla’s artificial intelligence (AI) may be programmed to quit when a crash is imminent. This way, the car’s driver, not the manufacturer, would be legally liable at the moment of impact."
That's not what I meant. What I meant was because the car is sentient, it has to hire its own lawyer.
Vanadium 50 said:
So a car can point itself at a bunch of kids, accelerate to a zillion miles an hour in Crazy Mode (or whatever it is called now), and a microsecond before impact disengage and tell the driver "You handle this!"
They're trying that argument, but I don't think it'll work. What might work, though, is "even though we're calling this 'autopilot' in advertisements, it clearly says in the manual that it isn't autopilot, so the driver is always responsible for control even while 'autopilot' is active."
 
  • #97
russ_watters said:
What I meant was because the car is sentient, it has to hire its own lawyer.
Ah, I see. That would be a major shift in perspective; it has all the makings of a lawyers' feast.
 
  • #98
Vanadium 50 said:
So a car can point itself at a bunch of kids, accelerate to a zillion miles an hour in Crazy Mode (or whatever it is called now), and a microsecond before impact disengage and tell the driver "You handle this!"
It does seem like a pretty shaky defense argument for the manufacturer.
 
  • #99
Jarvis323 said:
I realize that there are people of the opinion that such additional regulations are necessary. I disagree and think that, in some cases, the proposed rules would be more harmful than simply applying existing standards. From the articles:

“draw a clear distinction between features which just assist drivers, such as adaptive cruise control, and those that are self-driving”

This one seems particularly dangerous to me. Manufacturers will understand this line, but consumers will not. Manufacturers will naturally build systems that stay just barely on the “assist drivers” side of the line while giving consumers the impression (in all but the fine print) that the systems are on the self-driving side. Then they would use such regulations as a liability shield. IMO, it is preferable to leave this line undefined or nonexistent and require the company to argue in court why a particular feature should be considered an assist. This is particularly dangerous because drivers do not yet have enough experience to understand where this line falls.

“introduce a new system of legal accountability once a vehicle is authorised by a regulatory agency as having self-driving features, and a self-driving feature is engaged, including that the person in the driving seat would no longer be a driver but a ‘user-in-charge’, and responsibility would then rest with the Authorised Self-Driving Entity (ASDE)”

This is unnecessary. Legal accountability for manufacturing and design flaws already resides with the manufacturer. Existing standards cover this, and calling it a “new system” is incorrect and ill-advised.

“mandate the accessibility of data to understand fault and liability following a collision”

Such data can already be obtained by subpoena under existing rules (see the sketch at the end of this post for the kind of record involved).

So again, I do not see the need for these new rules. If existing rules are simply applied to AI it seems that all of the issues are already addressed, and in some cases the new proposals may actually be dangerous.
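
To make the data question concrete, here is a minimal sketch of the kind of event-data-recorder (EDR) output at issue. All field names and values below are invented; real recorders follow standards such as 49 CFR Part 563 in the US, with different fields:

```python
# Hypothetical sketch of the kind of EDR record a subpoena might retrieve.
# Every field name and value is invented for illustration.
import json

edr_record = {
    "event": "collision_detected",
    "pre_impact_samples": [
        # seconds before impact, speed, automation state
        {"t_minus_s": 5.0, "speed_mph": 62, "automation": "engaged"},
        {"t_minus_s": 1.0, "speed_mph": 61, "automation": "engaged"},
        {"t_minus_s": 0.5, "speed_mph": 61, "automation": "disengaged"},  # the disputed hand-off
        {"t_minus_s": 0.0, "speed_mph": 60, "automation": "disengaged"},
    ],
    "driver_inputs": {"brake_pedal_pressed": False, "steering_torque_nm": 0.0},
}

# Whether this record must be produced proactively (the proposed mandate) or
# only on subpoena (the existing rule) is the policy question; either way,
# the record itself is what answers "who was in control, and when?"
print(json.dumps(edr_record, indent=2))
```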
 
  • #100
FYI, a resource: Artificial Intelligence Policy: A Primer and Roadmap
https://lawreview.law.ucdavis.edu/issues/51/2/symposium/51-2_Calo.pdf

AI is simply a technology, a process; like any piece of software, its applications have broad implications for society. So, of course, there is some need for regulation based on the application, especially where it involves impact on privacy, political systems, various economies, . . .
 
