Regulating AI: What's the Definition?

  • Thread starter anorlunda
  • Start date
  • Tags
    Ai
  • Featured
In summary, the government has proposed new regulations for artificial intelligence that would require developers and users to identify a legal person responsible for any problems caused by AI. The proposed regime will be operated by existing regulators rather than a dedicated central body. These proposals were published as the Data Protection and Digital Information Bill is introduced to parliament. There is debate about the definition of AI and its transparency and explainability, as well as who should be held responsible for AI-related incidents. Examples, such as a chess playing robot breaking a child's finger, highlight the need for regulation. There are also concerns about liability in cases where AI is involved in accidents. However, some argue that anthropomorphizing AI is not helpful and that manufacturers should not be held liable for
  • #71
russ_watters said:
Are you trying to say that the current definition/status of AI is too broad/soft, so current experiences are not relevant?
The AI applications which brought up the issue are about on the fly decisions in a complex (hard or can't be algorithmized) environment (often messed up with human or other irregular interaction/intervention), like: piloting cars.

Above a complexity once you can no longer properly formulate the question then you can't design a test to thoroughly validate your product. Software 'industry' long knows that: even if things started from the most strict requirements and guarantees, that's long past for most. Most of the things sold comes with no warranties. None at all. (And this area is still considered deterministic :doh: )

So, as about complex software: either you drop the idea of 'self' driving cars (products with similar level of complexity/generality), or you drop the hard requirements.
Given that by hard requirements no human would ever qualify to drive a car I assume sooner or later there will be some kind of compromise between hard requirements and real life statistics.
Of course, different compromises for different businesses. I'm perfectly happy to have medical instruments/applications having some AIs 'the hard way'.

russ_watters said:
And under a narrower/harder definition things might be different? If so, sure, but there will be a clear-cut marker even if not a clear-cut threshold for when the shift in liability happens: legal personhood.
Although I expect some accidents/cases to happen in the following decades which retrospectively may be characterized as preliminary conscience or something like that from the far future (let's give that much for Sci-Fi o0) ) I think it'll be just statistics and (mandatory) insurances (as in monetary sense) in this round.
 
Last edited:
Computer science news on Phys.org
  • #72
Rive said:
So, as about complex software: either you drop the idea of 'self' driving cars (products with similar level of complexity/generality), or you drop the hard requirements.
Given that by hard requirements no human would ever qualify to drive a car I assume sooner or later there will be some kind of compromise between hard requirements and real life statistics.
Of course, different compromises for different businesses. I'm perfectly happy to have medical instruments/applications having some AIs 'the hard way'.
So the question is why does AI require any special rules in this regard?

Current products of all kinds generally do not meet the hard requirements, nor are they expected to. Existing liability and insurance practices compensate injured consumers and financially incentivize manufacturers to prioritize safety. Manufacturers already evaluate real life statistics. Different types of businesses already have different regulatory environments and requirements.

So I simply fail to see where AI needs special rules. All of the points you bring up are already addressed under the current system. AI simply needs to be held to the same standards as other devices.
 
  • Like
Likes Oldman too and russ_watters
  • #73
Rive said:
The AI applications which brought up the issue are about on the fly decisions in a complex (hard or can't be algorithmized) environment (often messed up with human or other irregular interaction/intervention), like: piloting cars.

Above a complexity once you can no longer properly formulate the question then you can't design a test to thoroughly validate your product...

So, as about complex software: either you drop the idea of 'self' driving cars (products with similar level of complexity/generality), or you drop the hard requirements.

Given that by hard requirements no human would ever qualify to drive a car I assume sooner or later there will be some kind of compromise between hard requirements and real life statistics.
I find it very hard to extract a point from your posts. They are very verbose, with a lot of discussion and not much argument. The problem and thesis should each be able to be expressed in single sentences.

But now I'm gathering that my prior attempt at understanding your position was wrong. It now seems like you are saying that above a certain level of complexity (and autonomy/freewill) it becomes impossible to explain or test the "thought process" of AI. And therefore no liability can be traced back to the vendor. I agree with the first part, but think it is irrelevant to the second.

I'm back to agreeing with @Dale and find it weird to suggest that something that is already happening isn't going to work. And looking forward, even if there is a complaint about the complexity or application (neither of which I'm clear on from what Dale is saying about his company), I don't think it matters. Right up to the point where legal personhood for AI gets put on the table, nothing changes in terms of liability/responsibility. It's either the manufacturer or the user, until its the AI. "Nobody because we don't know why the AI made that decision so we can't say it's the fault of the manufacturer" isn't an option. It always is the responsibility of the manufacturer. It has to be. And "I made an algorithm I can't explain/test" doesn't detach the liability -- It's pretty much an admission of guilt!
 
  • Like
Likes Dale
  • #74
russ_watters said:
I find it very hard to extract a point from your posts. They are very verbose, with a lot of discussion and not much argument. The problem and thesis should each be able to be expressed in single sentences.

But now I'm gathering that my prior attempt at understanding your position was wrong. It now seems like you are saying that above a certain level of complexity (and autonomy/freewill) it becomes impossible to explain or test the "thought process" of AI. And therefore no liability can be traced back to the vendor. I agree with the first part, but think it is irrelevant to the second.

I'm back to agreeing with @Dale and find it weird to suggest that something that is already happening isn't going to work. And looking forward, even if there is a complaint about the complexity or application (neither of which I'm clear on from what Dale is saying about his company), I don't think it matters. Right up to the point where legal personhood for AI gets put on the table, nothing changes in terms of liability/responsibility. It's either the manufacturer or the user, until its the AI. "Nobody because we don't know why the AI made that decision so we can't say it's the fault of the manufacturer" isn't an option. It always is the responsibility of the manufacturer. It has to be. And "I made an algorithm I can't explain/test" doesn't detach the liability -- It's pretty much an admission of guilt!

There are two regulatory approaches. The first is you require testing to try to prevent harm before it happens. The second is you don't require testing, but once harm is done, the party that is harmed can sue and then hopefully the company at fault and other related companies adjust their safety assurance policy after having new information to inform their cost benefit analysis. The first perhaps includes the second as well.

For a car accident, which can cause significant material damage, injury, or death, it is a pretty clear case. Those are obvious risks we take seriously, and inevitably there will be safety regulations and testing requirements pertaining to the specifics of designs, including the AI operation, just as a there are with crash testing, airbags, etc. Due to the complexity, accidents will still happen, and people can take the company to court if they have a case. The complexity of the interactions between persons and the AI might complicate that, because blame can be shifted away from the AI and onto another driver, bad luck (e.g. deer in the road), maybe on the condition of the vehicle depending on how it was maintained, etc. Anticipating these complications, imposing testing to prevent them, and deciding in advance what the rules are in specific cases of failure makes a lot of sense. This isn't really fundamentally new regulation, it is just additional regulation pertaining specifically to new things. How we regulate different things already depends on what those things are.

In other cases it is new territory. For example, what if your digital assistant recommends you drink a bottle of bleach to prevent Covid-19? What if your digital assistant calls you a racial slur and says you deserve to die? Or what if it is advocating genocide like Bing had been essentially doing for years? It is legal for a person to espouse Neo-Nazi ideology (in some countries) as we have free speech. But how about a digital assistant that millions or billions of people use? Is there a law against that and should there be? Do we wait until people get around to suing the company to prevent it? Do they have a case? Why wasn't Microsoft sued for Bing's actions and why did those problems persist so long? Did Bing help create a new generation of Neo-Nazi's? How can you tell? Does it matter legally?

And if liability were so straightforward with digital products, then why do almost all of us have to agree to lengthy terms and conditions which we don't read. How many of you that have IPads have read the 500+ page terms and conditions you agreed to?

7.2 YOU EXPRESSLY ACKNOWLEDGE AND AGREE THAT, TO THE EXTENT PERMITTED BY
APPLICABLE LAW, USE OF THE APPLE SOFTWARE AND ANY SERVICES PERFORMED BY OR
ACCESSED THROUGH THE APPLE SOFTWARE IS AT YOUR SOLE RISK AND THAT THE ENTIRE RISK
AS TO SATISFACTORY QUALITY, PERFORMANCE, ACCURACY AND EFFORT IS WITH YOU.

7.3 TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE APPLE SOFTWARE AND
SERVICES ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT
WARRANTY OF ANY KIND ...

https://www.apple.com/legal/sla/docs/iOS15_iPadOS15.pdf

The truth is that this is messy, impacts are high and lasting, and timeliness matters. Waiting for multi-billion dollar companies to be defeated in court over serious and anticipated problems we haven't encountered yet in human civilization can be wasteful and damaging. The problems with Bing could have been fixed in 5 minutes if a simple test was done, or if there were simple regulations requiring them to prevent the problem in the first place. But it lasted years.

In some speculatively plausible cases, as time moves forward, risks could even involve human extinction. Good luck suing Elon Musk if he creates a gray goo meant to terraform Mars that malfunctions and consumes our biosphere.

Then you have even fuzzier cases, like social media algorithms causing widespread depression, and causing spikes in suicide rates. Is anyone to blame? If we know the algorithm is causing it, should we require it be fixed, or is there any liability? Suppose the AI has learned that depression correlates with impulsive purchasing and so it seeks to cause depression on purpose?

How about in the even fuzzier case where social media algorithms act as echo chambers, feed you things that press your buttons, and overall bring out the worst in people to drive engagement. These issues will have major impacts on society and our future but what can we do about it? Maybe we could try building some kind of regulatory framework for mitigating psychological and sociological impacts, but at that point it looks like a path from a scifi story that we've been warned of.

Anyways, my concerns with AI and regulation are mostly about the power of information/disinformation and how it is wielded with AI. These are serious and hard problems. There must be some way to prevent some of the huge negative impacts we can already anticipate.
 
Last edited:
  • Like
Likes Rive
  • #75
Jarvis323 said:
Did Bing help create a new generation of Neo-Nazi's?
Aren't people responsible for their own actions?
Like in free will.

Jarvis323 said:
serious and anticipated problems we haven't encountered yet in human civilization can be wasteful and damaging
The car took over from the horse.
Compare the accidents from use of the horse for transportation and work - from skittish horses kicking people, broken backs from being thrown off from a horse's back, horses rampaging wildly in fright down a street pulling a cart or buggy.
The car eliminated most of those events, and a new set of 'unforeseen' accidents arose.
Much like you have reiterated with social media with its unforeseen problems.
 
  • Like
Likes russ_watters
  • #76
Dale said:
So the question is why does AI require any special rules in this regard?
I've just noticed that this whole discussion is quite like that never finished debate way back between the philosophy of big iron and the PC world during the first boom of the latter.
 
  • #77
Jarvis323 said:
There are two regulatory approaches
And both approaches are already in use. Why should AI be exempt from either approach (or are you saying that AI necessitates a third approach)?

Jarvis323 said:
The complexity of the interactions between persons and the AI might complicate that, because blame can be shifted away from the AI
Sure, that is part of any product safety lawsuit and should be.

Jarvis323 said:
For example, what if your digital assistant recommends you drink a bottle of bleach to prevent Covid-19?
What if your non-AI based software recommends you drink bleach? Does the fact that the software is or is not based on AI change any of the issues at hand? No.

Note here that the type of product matters just as the type of person matters. If your secretary tells you to drink bleach it is free speech for which you can fire them. If your doctor tells you to drink bleach it is malpractice for which you can sue them. Similarly, you would expect that a digital “secretary” would be held to a different standard than a digital “doctor”, regardless of whether they are based on AI or traditional algorithms.

So again, I fail to see the need for AI-specific regulations here.

Jarvis323 said:
Or what if it is advocating genocide like Bing had been essentially doing for years?
Again, if I write non-AI software to advocate genocide is it acceptable? It seems like your focus on AI is misplaced here too. I am not convinced that promotion of misinformation or malfeasance constitutes “harm”, but if we as a society decide that it does then wouldn’t we want to regulate against such harm whether it arrives via an AI or via some other source?

I am just not seeing anything in the arguments presented by you or anyone else that really seems to support the need to regulate AI differently from other products.
 
  • #78
I don't really think more explanation will make any difference at this point, but decent part of the last few pro-regulation comments were about users (not owners) paying up the insurance (or taking responsibility in some other form) for the (intellectual) property of somebody else installed/used through their property (car, computer, smartgadget, online account*, whatever).

* well, for that kind of thing the user won't even have any real property to talk about...
 
Last edited:
  • #79
Dale said:
I am just not seeing anything in the arguments presented by you or anyone else that really seems to support the need to regulate AI differently from other products.

I guess we need to define differently. I argued before that different things are already regulated differently, and AI will bring new things with unique risks. It will also span every sector and include products, services, and more. I doubt any of us have the knowledge to be able to speak broadly about all forms of regulation that exist (you would probably not be able to read all of those documents in your lifetime), or all possible applications of AI and their risks.

In the end, I think we are having a sort philosophical debate like the one in QM. My view is, whatever your interpretation, in the end we need to do the math and get the results, and AI presents us with a lot of new math.
 
Last edited:
  • #80
Jarvis323 said:
AI will bring new things with unique risks. It will also span every sector and include products, services, and more.
Certainly that is possible, in the future, but it isn’t the case today as far as I can see. Nobody has brought up a single concrete case where existing standards are actually insufficient.

Jarvis323 said:
I think we are having a sort philosophical debate
Philosophically I am opposed to regulations that are not demonstrably necessary.
 
  • Like
Likes russ_watters
  • #81
Dale said:
Nobody has brought up a single concrete case where existing standards are actually insufficient.
I would disagree this that. But I would add that nobody has come up with a convincing argument that existing standards are sufficient either.
 
  • #82
Jarvis323 said:
I would disagree this that.
So which concrete case was brought up where existing standards were insufficient for AI? I missed it.
 
  • #83
Dale said:
So which concrete case was brought up where existing standards were insufficient? I missed it.
It depends on your opinion about sufficiency. You can go back and look at some of the examples we've discussed: search engines and recommendation systems, data driven systems of all kinds which discriminate based on race, digital assistants, social media, self-driving cars, self replicating machines for terraforming planets.
 
  • #84
Jarvis323 said:
It depends on your opinion about sufficiency
How about your opinion? Which concrete example are existing standards specifically insufficient for AI?

Jarvis323 said:
self replicating machines for terraforming planets
Let’s please stick with products that actually exist or are in development today. Not hypotheticals, but concrete current examples.
 
  • Like
Likes Vanadium 50
  • #85
Dale said:
How about your opinion? Which concrete example are existing standards specifically insufficient for AI?

For all of the ones I listed I think existing regulations are not sufficient.

Dale said:
Let’s please stick with products that actually exist or are in development today. Not hypotheticals, concrete actual examples.
This is an example where regulation is crucial to be implemented before the harm is caused.
 
  • #86
I would add that I disagree with you about the case for the biomedical products. Although I can't say that I know what new different standards for regulation have already been implemented and how much is future work. This article does a good job explaining.

The FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies. Under the FDA’s current approach to software modifications, the FDA anticipates that many of these artificial intelligence and machine learning-driven software changes to a device may need a premarket review.On April 2, 2019, the FDA published a discussion paper “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback” that describes the FDA’s foundation for a potential approach to premarket review for artificial intelligence and machine learning-driven software modifications.

fig1.png


fig2.png


https://www.fda.gov/medical-devices...-and-machine-learning-software-medical-device

And the same problem (adaptive updates/learning) applies for a range of other products.
 
Last edited:
  • #87
Jarvis323 said:
For all of the ones I listed I think existing regulations are not sufficient.
OK, so let’s start with self driving cars. In what specific way do you think existing product safety standards and the usual legal process are insufficient if they are applied to AI the same as to other components of the car?
 
  • #88
Jarvis323 said:
On April 2, 2019, the FDA published a discussion paper “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback” that describes the FDA’s foundation for a potential approach to premarket review for artificial intelligence and machine learning-driven software modifications.
I have seen a bunch of papers like this over the last year as part of my company's responsible AI initiative and its dealings with other organizations. The FDA is attempting to describe how to implement something that is frequently referred to as ML Ops which differs greatly from the usual CI/CD processes in non-ML software. They understand how they've previously approved non-ML software and medical devices but now they need to update their process for ML. The paper is basically saying two things - here's what we think needs to be done and what did we miss or don't understand. The "Request for Feedback" in the title and list of questions at the end of the document are especially telling.
 
Last edited:
  • Like
Likes Oldman too
  • #89
Vanadium 50 said:
There is a lot about "unexplainable". In many - possibly most - cases, the actual determination is deterministic. The probability of a loan default is a_1 * income + a_2 * average number of hamburgers eaten in a year * a_3 * number of letters of your favorite color + ... The AI training process produces the a's, and that's a black box, but the actual calculation in use is transparent.
I've done a few recent image recognition projects where the calculation in use was not transparent, because the ML model updated itself with new data and there was no mechanism for it to explain or export the process it used to arrive at the black box answer. Submitting the same data often did not arrive at the same answer, especially if you compared between early versions of the model and later versions of the model.
 
  • #91
Jarvis323 said:
I guess we need to define differently. I argued before that different things are already regulated differently, and AI will bring new things with unique risks. It will also span every sector and include products, services, and more. I doubt any of us have the knowledge to be able to speak broadly about all forms of regulation that exist (you would probably not be able to read all of those documents in your lifetime), or all possible applications of AI and their risks.

In the end, I think we are having a sort philosophical debate like the one in QM. My view is, whatever your interpretation, in the end we need to do the math and get the results, and AI presents us with a lot of new math.
Yeah, I think this framing is pointlessly broad. Nobody would argue that a new product or potentially dangerous feature of an existing product doesn't potentially need new regulations. They frequently do. I'd call that "more of the same". But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, like the manufacturer is no longer liable for damage the AI creates. If we ever get there, that might reduce their legal disclaimer to a single line: "This device is sentient and as such the manufacturer holds no liability for its actions." That would be very Different.
 
  • #92
russ_watters said:
Yeah, I think this framing is pointlessly broad. Nobody would argue that a new product or potentially dangerous feature of an existing product doesn't potentially need new regulations. They frequently do. I'd call that "more of the same". But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, like the manufacturer is no longer liable for damage the AI creates. If we ever get there, that might reduce their legal disclaimer to a single line: "This device is sentient and as such the manufacturer holds no liability for its actions." That would be very Different.
Breeders/manufacturers aren't liable for dogs biting?
 
  • #93
Bystander said:
Breeders/manufacturers aren't liable for dogs biting?
Well, no, but I don't see the relevance here. Dogs aren't sentient in the sense I think we're discussing.
 
  • #94
russ_watters said:
But AI is supposed to be "Different" with a capital D, and the types of regulations I thought we were talking about were fundamentally Different ones, like the manufacturer is no longer liable for damage the AI creates.
https://www.channelnewsasia.com/commentary/ai-legal-liability-boeing-tesla-uber-car-crash-2828911
I'm not familiar with this site but the story seems to follow along the lines you mention. Thought I'd throw it on the pile.

"Data gathered from 16 crashes raised concerns over the possibility that Tesla’s artificial intelligence (AI) may be programmed to quit when a crash is imminent. This way, the car’s driver, not the manufacturer, would be legally liable at the moment of impact."

 
  • #95
So a car can point itself at a bunch of kids, accelerate to a zillion miles an hour in Crazy Mode (or whatever it is called now), and a microsecond before impact disengage and tell the driver "You handle this!"
 
  • #96
Oldman too said:
https://www.channelnewsasia.com/commentary/ai-legal-liability-boeing-tesla-uber-car-crash-2828911
I'm not familiar with this site but the story seems to follow along the lines you mention. Thought I'd throw it on the pile.

"Data gathered from 16 crashes raised concerns over the possibility that Tesla’s artificial intelligence (AI) may be programmed to quit when a crash is imminent. This way, the car’s driver, not the manufacturer, would be legally liable at the moment of impact."
That's not what I meant. What I meant was because the car is sentient, it has to hire its own lawyer.
Vanadium 50 said:
So a car can point itself at a bunch of kids, accelerate to a zillion miles an hour in Crazy Mode (or whatever it is called now), and a microsecond before impact disengage and tell the driver "You handle this!"
They're trying that argument, but I don't think it'll work. What might work though is "even though we're calling this 'autopilot' in advertisements it clearly says in the manual that it isn't autopilot, so the driver is always responsible for control even while 'autopilot' is active."
 
  • Informative
Likes Oldman too
  • #97
russ_watters said:
What I meant was because the car is sentient, it has to hire its own lawyer.
Ah, I see. That would be a major shift in perspective, it has all the makings for a lawyers feast.
 
  • Like
Likes russ_watters
  • #98
Vanadium 50 said:
So a car can point itself at a bunch of kids, accelerate to a zillion miles an hour in Crazy Mode (or whatever it is called now), and a microsecond before impact disengage and tell the driver "You handle this!"
It does seem like a pretty shaky defense argument for the manufacturer.
 
  • #99
Jarvis323 said:
I realize that there are people of the opinion that such additional regulations are necessary. I disagree and think that the justifications are, in some cases, more harmful than simply applying existing standards. From the articles:

“ draw a clear distinction between features which just assist drivers, such as adaptive cruise control, and those that are self-driving”

This one seems particularly dangerous to me. Manufacturers will understand this line but consumers will not. Manufacturers will naturally build systems that stay just barely on the “assist drivers” side of the line and give the impression to consumers (in all but fine print) that the systems are on the self-driving side. Then they would use such regulations as a liability shield. IMO, it is preferable to leave this line undefined or non-existent and require the company to argue in-court why a particular feature should be considered as an assist. This is particularly dangerous because there is not enough experience by drivers to understand where drivers will draw this line.

“ introduce a new system of legal accountability once a vehicle is authorised by a regulatory agency as having self-driving features, and a self-driving feature is engaged, including that the person in the driving seat would no longer be a driver but a ‘user-in-charge’ and responsibility when then rest with the Authorised Self-Driving Entity (ASDE)”

This is unnecessary. The legal accountability for manufacturing and design flaws already resides with the manufacturer. Existing standards cover this, and calling it a “new system” is incorrect and ill advised.

“mandate the accessibility of data to understand fault and liability following a collision”

Such data can already be had by subpoena under existing rules.

So again, I do not see the need for these new rules. If existing rules are simply applied to AI it seems that all of the issues are already addressed, and in some cases the new proposals may actually be dangerous.
 
  • #100
FYI - a resource - Artificial Intelligence Policy: A Primer and Roadmap
https://lawreview.law.ucdavis.edu/issues/51/2/symposium/51-2_Calo.pdf

AI is simply a technology, a process, and its applications, like any piece of software, have broad implications with respect to its impact on society. So, of course, there is some need of regulation based on the application, especially where is involves impact on privacy, political systems, various economies, . . .
 
Last edited:

Similar threads

Replies
10
Views
2K
Replies
1
Views
1K
Replies
7
Views
5K
  • Sci-Fi Writing and World Building
Replies
7
Views
2K
  • Science Fiction and Fantasy Media
Replies
2
Views
3K
Replies
8
Views
4K
  • Biology and Medical
Replies
2
Views
3K
  • General Discussion
Replies
29
Views
9K
  • General Discussion
2
Replies
38
Views
5K
  • General Discussion
2
Replies
40
Views
9K
Back
Top