Regulating AI: What's the Definition?

  • Thread starter: anorlunda
  • Tags: AI
Summary
The UK government has proposed new regulations for artificial intelligence that require AI systems to identify a legal person responsible for any issues that arise. This "pro-innovation" framework will be managed by existing regulators rather than a central authority, contrasting with EU plans. A significant point of discussion is the need for a clear definition of AI to effectively enforce these regulations, with some advocating for a broad interpretation. Concerns also arise regarding the transparency and explainability of AI systems, as understanding their decision-making processes can be complex. Ultimately, the regulation aims to ensure accountability and safety in AI deployment while navigating the challenges of legal liability.
  • #61
anorlunda said:
The problem is when your training data set is big enough to cover 100% of the population, and the result still turns out to correlate to race.
Yes. Data bias can take many forms. While you could try to remove race to avoid some of this, the algorithms are very good at finding other ways of creating this situation. After all, their purpose is to learn to classify on their own. In this case, the AI could potentially pick up societal biases through other inputs that we might not even dream are related to a particular race or group of people. If there is systemic racism or bias, the algorithms will tend to pick up on it.
 
  • #62
anorlunda said:
The problem is when your training data set is big enough to cover 100% of the population, and the result still turns out to correlate to race.
This is a certainty.

It's true trivially - a correlation of exactly zero is a set of measure zero.

But it's also true because of correlations we already know about. Mortgage default is correlated with income (or debt-to-income, if you prefer), unemployment rate, etc. These variables are also correlated with race. So of course race will show up as a factor.

The question is "what are we going to do about it?". The answer seems to be "blame the mathematics" because AI (or really this is more ML) is producing an answer we don't like.

One answer that our society has come up with is "well, let's just feed in all the information we have, except race." But of course the AI finds proxies for race elsewhere in the data - in the books we read, the food we eat, the movies we watch, and as mentioned earlier, our ZIP codes (and an even better proxy is our parents' ZIP codes). Of course it does. That's its job.

More generally, if we plot correlations of a zillion variables against a zillion other variables, we are going to find some we don't like. What are we going to do about that?
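To make the proxy point above concrete, here is a minimal sketch (toy synthetic data with numpy and scikit-learn; the features, numbers, and thresholds are invented for illustration, not taken from any real lending model) of how a withheld attribute can still be recovered from the remaining features:

# Toy illustration of "proxy" leakage: even after dropping a protected
# attribute, the remaining features may still predict it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never shown to the lending model).
group = rng.integers(0, 2, size=n)

# Features that happen to be correlated with it, e.g. income (in $1000s)
# and a coarse ZIP-code bucket.
income = rng.normal(50 + 10 * group, 15, size=n)
zip_bucket = (rng.random(n) < 0.3 + 0.4 * group).astype(int)
X = np.column_stack([income, zip_bucket])

# Can we reconstruct the withheld attribute from the remaining features?
X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)
probe = LogisticRegression().fit(X_tr, g_tr)
auc = roc_auc_score(g_te, probe.predict_proba(X_te)[:, 1])
print(f"AUC for recovering the 'removed' attribute from proxies: {auc:.2f}")
# An AUC well above 0.5 means the attribute is still effectively present
# in the data -- exactly the proxy effect described above.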
 
  • Likes: Borg
  • #63
Rive said:
I don't think that that would ever work. Validating complex software is such a nightmare as it is, even without any 'learning' involved (validation is supposed to be a more or less programmer-independent process, exactly to correct/prevent errors of judgement made during programming), that at an industrial level it just won't do.
I don't see a problem there. Can you clarify/elaborate?
It's just that the AI under discussion is about interacting with unclear factors (human beings).
Kind of like omitting from the jury's decision about you shooting somebody the fact of whether or not you were at gunpoint?
I don't understand what point you are trying to make. Can you clarify/elaborate on what I'm omitting/why it matters?
 
  • #64
Rive said:
Based on the approach you wrote about, I have a feeling that either the AI you are talking about and the AI which is 'in the air' are slightly different things, or yours is working with some really constrained datasets.
AI is a very broad class of technologies, and we use many different ones for different purposes. My company’s experience using AI is highly relevant to a discussion regarding actual companies using actual AI technology in actual safety-critical products within actual regulatory environments. Our current real-world experience directly contradicts your claim that holding AI products to the same product safety standards as every other engineer and engineering company would never work. AI products should not get a pass on safety and in our industry they do not. They are held to the same standards as our non-AI software, and so far it works.
 
  • Likes: russ_watters
  • #65
Dale said:
Our current real-world experience directly contradicts your claim that holding AI products to the same product safety standards as every other engineer and engineering company would never work.
Sorry, but the only thing I will answer with is exactly those lines you quoted from me.
Rive said:
Based on the approach you wrote about, I have a feeling that either the AI you are talking about and the AI which is 'in the air' are slightly different things, or yours is working with some really constrained datasets.

I have great respect for that kind of mindset by the way: but it has some really severe limitations and impacts, quite incompatible with the current hustle and bustle around AI.
 
  • #66
Rive said:
I have great respect for that kind of mindset by the way: but it has some really severe limitations and impacts, quite incompatible with the current hustle and bustle around AI.
Are you trying to say that the current definition/status of AI is too broad/soft, so current experiences are not relevant? And under a narrower/harder definition things might be different? If so, sure, but there will be a clear-cut marker even if not a clear-cut threshold for when the shift in liability happens: legal personhood.
 
  • #67
Rive said:
the only thing I will answer with is exactly those lines you quoted from me
Not exactly a productive conversational approach, but whatever. You are just hoping to exclude a clear counterexample to your argument with no sound basis for doing so.
 
  • #68
Dale said:
They are held to the same standards as our non-AI software, and so far it works.
I agree, but the topic in this thread is special regulations that treat AI differently than non-AI stuff. AI is irrelevant in negligence tort claims.
 
  • #69
anorlunda said:
I agree, but the topic in this thread is special regulations that treat AI differently than non-AI stuff. AI is irrelevant in negligence tort claims.
Right, my position is that such regulations seem unnecessary to me. At least in my field, the existing regulations seem sufficient.
 
  • Likes: anorlunda
  • #70
Vanadium 50 said:
One answer that our society has come up with is "well, let's just feed in all the information we have, except race." But of course the AI finds proxies for race elsewhere in the data - in the books we read, the food we eat, the movies we watch, and as mentioned earlier, our ZIP codes (and an even better proxy is our parents' ZIP codes). Of course it does. That's its job.

You can also choose your data+features so that the model becomes biased against a group intentionally.

But you can also choose your data+features so that the model is fair, as I explained in post 54. The only problem is that you need enough data, and you would have to restrict the number of features you use so that you are able to sufficiently stratify.

So the question is, should we regulate how companies choose their training data? Should they be required to be transparent about their training data, training pipeline and validation? Should their data+features need to pass a test that proves fairness?
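As a rough sketch of what such a fairness test could look like in code (purely synthetic predictions and an invented threshold; a real audit would need proper stratification, confidence intervals, and far more data):

# Minimal per-group audit: compare approval rates and true-positive rates
# across groups on held-out data, then apply a pass/fail threshold.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group approval rate and true-positive rate."""
    out = {}
    for g in np.unique(group):
        m = group == g
        pos = m & (y_true == 1)
        out[int(g)] = {
            "approval_rate": float(y_pred[m].mean()),
            "true_positive_rate": float(y_pred[pos].mean()) if pos.any() else float("nan"),
        }
    return out

# Pretend these came from a held-out test set and a candidate model;
# the predictions are deliberately skewed to show a failing audit.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 5_000)
group = rng.integers(0, 2, 5_000)
y_pred = (rng.random(5_000) < 0.45 + 0.10 * group).astype(int)

rates = group_rates(y_true, y_pred, group)
gap = abs(rates[0]["approval_rate"] - rates[1]["approval_rate"])
print(rates)
print(f"Demographic-parity gap: {gap:.3f}")
# A regulator-style rule might be: fail the model if the gap exceeds an
# agreed threshold -- the threshold itself is a policy choice, not a math one.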
 
Last edited:
  • #71
russ_watters said:
Are you trying to say that the current definition/status of AI is too broad/soft, so current experiences are not relevant?
The AI applications which brought up the issue are about on-the-fly decisions in a complex environment (one that is hard or impossible to algorithmize, and often messed up by human or other irregular interaction/intervention), such as driving cars.

Above a certain complexity, once you can no longer properly formulate the question, you can't design a test to thoroughly validate your product. The software 'industry' has known that for a long time: even if things started from the strictest requirements and guarantees, that's long past for most. Most of the things sold come with no warranties. None at all. (And this area is still considered deterministic :doh: )

So, regarding complex software: either you drop the idea of 'self'-driving cars (and products with a similar level of complexity/generality), or you drop the hard requirements.
Given that by hard requirements no human would ever qualify to drive a car, I assume sooner or later there will be some kind of compromise between hard requirements and real-life statistics.
Of course, different compromises for different businesses. I'm perfectly happy to have medical instruments/applications get their AIs 'the hard way'.

russ_watters said:
And under a narrower/harder definition things might be different? If so, sure, but there will be a clear-cut marker even if not a clear-cut threshold for when the shift in liability happens: legal personhood.
Although I expect some accidents/cases in the coming decades which, from the perspective of the far future, may retrospectively be characterized as preliminary consciousness or something like that (let's give that much to sci-fi o0) ), I think it'll just be statistics and (mandatory) insurance (in the monetary sense) this round.
 
Last edited:
  • #72
Rive said:
So, regarding complex software: either you drop the idea of 'self'-driving cars (and products with a similar level of complexity/generality), or you drop the hard requirements.
Given that by hard requirements no human would ever qualify to drive a car, I assume sooner or later there will be some kind of compromise between hard requirements and real-life statistics.
Of course, different compromises for different businesses. I'm perfectly happy to have medical instruments/applications get their AIs 'the hard way'.
So the question is why does AI require any special rules in this regard?

Current products of all kinds generally do not meet the hard requirements, nor are they expected to. Existing liability and insurance practices compensate injured consumers and financially incentivize manufacturers to prioritize safety. Manufacturers already evaluate real life statistics. Different types of businesses already have different regulatory environments and requirements.

So I simply fail to see where AI needs special rules. All of the points you bring up are already addressed under the current system. AI simply needs to be held to the same standards as other devices.
 
  • Likes: Oldman too and russ_watters
  • #73
Rive said:
The AI applications which brought up the issue are about on-the-fly decisions in a complex environment (one that is hard or impossible to algorithmize, and often messed up by human or other irregular interaction/intervention), such as driving cars.

Above a certain complexity, once you can no longer properly formulate the question, you can't design a test to thoroughly validate your product...

So, regarding complex software: either you drop the idea of 'self'-driving cars (and products with a similar level of complexity/generality), or you drop the hard requirements.

Given that by hard requirements no human would ever qualify to drive a car, I assume sooner or later there will be some kind of compromise between hard requirements and real-life statistics.
I find it very hard to extract a point from your posts. They are very verbose, with a lot of discussion and not much argument. The problem and thesis should each be able to be expressed in single sentences.

But now I'm gathering that my prior attempt at understanding your position was wrong. It now seems like you are saying that above a certain level of complexity (and autonomy/free will) it becomes impossible to explain or test the "thought process" of AI. And therefore no liability can be traced back to the vendor. I agree with the first part, but think it is irrelevant to the second.

I'm back to agreeing with @Dale and find it weird to suggest that something that is already happening isn't going to work. And looking forward, even if there is a complaint about the complexity or application (neither of which I'm clear on from what Dale is saying about his company), I don't think it matters. Right up to the point where legal personhood for AI gets put on the table, nothing changes in terms of liability/responsibility. It's either the manufacturer or the user, until it's the AI. "Nobody, because we don't know why the AI made that decision, so we can't say it's the fault of the manufacturer" isn't an option. It is always the responsibility of the manufacturer. It has to be. And "I made an algorithm I can't explain/test" doesn't detach the liability -- it's pretty much an admission of guilt!
 
  • Likes: Dale
  • #74
russ_watters said:
I find it very hard to extract a point from your posts. They are very verbose, with a lot of discussion and not much argument. The problem and thesis should each be able to be expressed in single sentences.

But now I'm gathering that my prior attempt at understanding your position was wrong. It now seems like you are saying that above a certain level of complexity (and autonomy/free will) it becomes impossible to explain or test the "thought process" of AI. And therefore no liability can be traced back to the vendor. I agree with the first part, but think it is irrelevant to the second.

I'm back to agreeing with @Dale and find it weird to suggest that something that is already happening isn't going to work. And looking forward, even if there is a complaint about the complexity or application (neither of which I'm clear on from what Dale is saying about his company), I don't think it matters. Right up to the point where legal personhood for AI gets put on the table, nothing changes in terms of liability/responsibility. It's either the manufacturer or the user, until it's the AI. "Nobody, because we don't know why the AI made that decision, so we can't say it's the fault of the manufacturer" isn't an option. It is always the responsibility of the manufacturer. It has to be. And "I made an algorithm I can't explain/test" doesn't detach the liability -- it's pretty much an admission of guilt!

There are two regulatory approaches. The first is that you require testing to try to prevent harm before it happens. The second is that you don't require testing, but once harm is done, the party that is harmed can sue; then, hopefully, the company at fault and other related companies adjust their safety-assurance policies once they have new information to inform their cost-benefit analysis. The first perhaps includes the second as well.

For a car accident, which can cause significant material damage, injury, or death, it is a pretty clear case. Those are obvious risks we take seriously, and inevitably there will be safety regulations and testing requirements pertaining to the specifics of designs, including the AI operation, just as there are with crash testing, airbags, etc. Due to the complexity, accidents will still happen, and people can take the company to court if they have a case. The complexity of the interactions between persons and the AI might complicate that, because blame can be shifted away from the AI and onto another driver, bad luck (e.g. a deer in the road), or onto the condition of the vehicle depending on how it was maintained, etc. Anticipating these complications, imposing testing to prevent them, and deciding in advance what the rules are in specific cases of failure makes a lot of sense. This isn't really fundamentally new regulation; it is just additional regulation pertaining specifically to new things. How we regulate different things already depends on what those things are.

In other cases it is new territory. For example, what if your digital assistant recommends you drink a bottle of bleach to prevent Covid-19? What if your digital assistant calls you a racial slur and says you deserve to die? Or what if it is advocating genocide like Bing had been essentially doing for years? It is legal for a person to espouse Neo-Nazi ideology (in some countries) as we have free speech. But how about a digital assistant that millions or billions of people use? Is there a law against that, and should there be? Do we wait until people get around to suing the company to prevent it? Do they have a case? Why wasn't Microsoft sued for Bing's actions, and why did those problems persist so long? Did Bing help create a new generation of Neo-Nazis? How can you tell? Does it matter legally?

And if liability were so straightforward with digital products, then why do almost all of us have to agree to lengthy terms and conditions which we don't read? How many of you who have iPads have read the 500+ page terms and conditions you agreed to?

7.2 YOU EXPRESSLY ACKNOWLEDGE AND AGREE THAT, TO THE EXTENT PERMITTED BY
APPLICABLE LAW, USE OF THE APPLE SOFTWARE AND ANY SERVICES PERFORMED BY OR
ACCESSED THROUGH THE APPLE SOFTWARE IS AT YOUR SOLE RISK AND THAT THE ENTIRE RISK
AS TO SATISFACTORY QUALITY, PERFORMANCE, ACCURACY AND EFFORT IS WITH YOU.

7.3 TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE APPLE SOFTWARE AND
SERVICES ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT
WARRANTY OF ANY KIND ...

https://www.apple.com/legal/sla/docs/iOS15_iPadOS15.pdf

The truth is that this is messy, impacts are high and lasting, and timeliness matters. Waiting for multi-billion-dollar companies to be defeated in court over serious and anticipated problems we haven't encountered yet in human civilization can be wasteful and damaging. The problems with Bing could have been fixed in 5 minutes if a simple test had been done, or if there were simple regulations requiring them to prevent the problem in the first place. But they lasted years.

In some speculatively plausible cases, as time moves forward, risks could even involve human extinction. Good luck suing Elon Musk if he creates a gray goo meant to terraform Mars that malfunctions and consumes our biosphere.

Then you have even fuzzier cases, like social media algorithms causing widespread depression and spikes in suicide rates. Is anyone to blame? If we know the algorithm is causing it, should we require that it be fixed, or is there any liability? Suppose the AI has learned that depression correlates with impulsive purchasing, and so it seeks to cause depression on purpose?

How about the even fuzzier case where social media algorithms act as echo chambers, feed you things that press your buttons, and overall bring out the worst in people to drive engagement? These issues will have major impacts on society and our future, but what can we do about it? Maybe we could try building some kind of regulatory framework for mitigating psychological and sociological impacts, but at that point it looks like a path from a sci-fi story that we've been warned of.

Anyways, my concerns with AI and regulation are mostly about the power of information/disinformation and how it is wielded with AI. These are serious and hard problems. There must be some way to prevent some of the huge negative impacts we can already anticipate.
 
Last edited:
  • Likes: Rive
  • #75
Jarvis323 said:
Did Bing help create a new generation of Neo-Nazis?
Aren't people responsible for their own actions?
Like in free will.

Jarvis323 said:
serious and anticipated problems we haven't encountered yet in human civilization can be wasteful and damaging
The car took over from the horse.
Compare the accidents from use of the horse for transportation and work - skittish horses kicking people, broken backs from being thrown from a horse's back, horses rampaging wildly in fright down a street while pulling a cart or buggy.
The car eliminated most of those events, and a new set of 'unforeseen' accidents arose.
Much like you have reiterated with social media with its unforeseen problems.
 
  • Likes: russ_watters
  • #76
Dale said:
So the question is why does AI require any special rules in this regard?
I've just noticed that this whole discussion is quite like that never-finished debate from way back between the philosophy of big iron and the PC world, during the first boom of the latter.
 
  • #77
Jarvis323 said:
There are two regulatory approaches
And both approaches are already in use. Why should AI be exempt from either approach (or are you saying that AI necessitates a third approach)?

Jarvis323 said:
The complexity of the interactions between persons and the AI might complicate that, because blame can be shifted away from the AI
Sure, that is part of any product safety lawsuit and should be.

Jarvis323 said:
For example, what if your digital assistant recommends you drink a bottle of bleach to prevent Covid-19?
What if your non-AI based software recommends you drink bleach? Does the fact that the software is or is not based on AI change any of the issues at hand? No.

Note here that the type of product matters just as the type of person matters. If your secretary tells you to drink bleach it is free speech for which you can fire them. If your doctor tells you to drink bleach it is malpractice for which you can sue them. Similarly, you would expect that a digital “secretary” would be held to a different standard than a digital “doctor”, regardless of whether they are based on AI or traditional algorithms.

So again, I fail to see the need for AI-specific regulations here.

Jarvis323 said:
Or what if it is advocating genocide like Bing had been essentially doing for years?
Again, if I write non-AI software to advocate genocide is it acceptable? It seems like your focus on AI is misplaced here too. I am not convinced that promotion of misinformation or malfeasance constitutes “harm”, but if we as a society decide that it does then wouldn’t we want to regulate against such harm whether it arrives via an AI or via some other source?

I am just not seeing anything in the arguments presented by you or anyone else that really seems to support the need to regulate AI differently from other products.
 
  • #78
I don't really think more explanation will make any difference at this point, but a decent part of the last few pro-regulation comments were about users (not owners) paying for insurance (or taking responsibility in some other form) for the (intellectual) property of somebody else, installed/used through their own property (car, computer, smart gadget, online account*, whatever).

* well, for that kind of thing the user won't even have any real property to talk about...
 
Last edited:
  • #79
Dale said:
I am just not seeing anything in the arguments presented by you or anyone else that really seems to support the need to regulate AI differently from other products.

I guess we need to define 'differently'. I argued before that different things are already regulated differently, and AI will bring new things with unique risks. It will also span every sector and include products, services, and more. I doubt any of us has the knowledge to speak broadly about all forms of regulation that exist (you would probably not be able to read all of those documents in your lifetime), or about all possible applications of AI and their risks.

In the end, I think we are having a sort of philosophical debate like the one in QM. My view is that, whatever your interpretation, in the end we need to do the math and get the results, and AI presents us with a lot of new math.
 
Last edited:
  • #80
Jarvis323 said:
AI will bring new things with unique risks. It will also span every sector and include products, services, and more.
Certainly that is possible, in the future, but it isn’t the case today as far as I can see. Nobody has brought up a single concrete case where existing standards are actually insufficient.

Jarvis323 said:
I think we are having a sort of philosophical debate
Philosophically I am opposed to regulations that are not demonstrably necessary.
 
  • Likes: russ_watters
  • #81
Dale said:
Nobody has brought up a single concrete case where existing standards are actually insufficient.
I would disagree with that. But I would add that nobody has come up with a convincing argument that existing standards are sufficient either.
 
  • #82
Jarvis323 said:
I would disagree with that.
So which concrete case was brought up where existing standards were insufficient for AI? I missed it.
 
  • #83
Dale said:
So which concrete case was brought up where existing standards were insufficient? I missed it.
It depends on your opinion about sufficiency. You can go back and look at some of the examples we've discussed: search engines and recommendation systems, data-driven systems of all kinds which discriminate based on race, digital assistants, social media, self-driving cars, self-replicating machines for terraforming planets.
 
  • #84
Jarvis323 said:
It depends on your opinion about sufficiency
How about your opinion? For which concrete example are existing standards specifically insufficient for AI?

Jarvis323 said:
self-replicating machines for terraforming planets
Let’s please stick with products that actually exist or are in development today. Not hypotheticals, but concrete current examples.
 
  • Likes: Vanadium 50
  • #85
Dale said:
How about your opinion? For which concrete example are existing standards specifically insufficient for AI?

For all of the ones I listed I think existing regulations are not sufficient.

Dale said:
Let’s please stick with products that actually exist or are in development today. Not hypotheticals, but concrete current examples.
This is an example where it is crucial that regulation be implemented before the harm is caused.
 
  • #86
I would add that I disagree with you about the case for biomedical products, although I can't say that I know which new or different regulatory standards have already been implemented and how much is future work. This article does a good job of explaining.

The FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies. Under the FDA’s current approach to software modifications, the FDA anticipates that many of these artificial intelligence and machine learning-driven software changes to a device may need a premarket review. On April 2, 2019, the FDA published a discussion paper “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback” that describes the FDA’s foundation for a potential approach to premarket review for artificial intelligence and machine learning-driven software modifications.

https://www.fda.gov/medical-devices...-and-machine-learning-software-medical-device

And the same problem (adaptive updates/learning) applies to a range of other products.
 
Last edited:
  • #87
Jarvis323 said:
For all of the ones I listed I think existing regulations are not sufficient.
OK, so let’s start with self driving cars. In what specific way do you think existing product safety standards and the usual legal process are insufficient if they are applied to AI the same as to other components of the car?
 
  • #88
Jarvis323 said:
On April 2, 2019, the FDA published a discussion paper “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback” that describes the FDA’s foundation for a potential approach to premarket review for artificial intelligence and machine learning-driven software modifications.
I have seen a bunch of papers like this over the last year as part of my company's responsible AI initiative and its dealings with other organizations. The FDA is attempting to describe how to implement something that is frequently referred to as ML Ops, which differs greatly from the usual CI/CD processes in non-ML software. They understand how they've previously approved non-ML software and medical devices, but now they need to update their process for ML. The paper is basically saying two things: here's what we think needs to be done, and what did we miss or not understand? The "Request for Feedback" in the title and the list of questions at the end of the document are especially telling.
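For what it's worth, the "predetermined change control" idea the FDA paper discusses can be pictured as an automated release gate in an ML Ops pipeline. A minimal sketch (the function name, thresholds, and scores below are invented for illustration; real acceptance criteria would be device-specific and documented in the premarket submission):

# A retrained model is approved only if it clears an absolute performance
# floor and does not regress against the currently deployed model on a
# frozen, versioned reference test set.
from dataclasses import dataclass

@dataclass
class GateResult:
    passed: bool
    old_score: float
    new_score: float
    reason: str

def release_gate(old_score: float, new_score: float,
                 min_score: float = 0.90, max_regression: float = 0.01) -> GateResult:
    if new_score < min_score:
        return GateResult(False, old_score, new_score, "below absolute floor")
    if new_score < old_score - max_regression:
        return GateResult(False, old_score, new_score, "regression vs. deployed model")
    return GateResult(True, old_score, new_score, "ok")

# Scores here stand in for metrics computed elsewhere on the locked test set.
print(release_gate(old_score=0.947, new_score=0.951))
print(release_gate(old_score=0.947, new_score=0.921))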
 
Last edited:
  • Likes: Oldman too
  • #89
Vanadium 50 said:
There is a lot about "unexplainable". In many - possibly most - cases, the actual determination is deterministic. The probability of a loan default is a_1 * income + a_2 * average number of hamburgers eaten in a year + a_3 * number of letters of your favorite color + ... The AI training process produces the a's, and that's a black box, but the actual calculation in use is transparent.
I've done a few recent image recognition projects where the calculation in use was not transparent, because the ML model updated itself with new data and there was no mechanism for it to explain or export the process it used to arrive at the black-box answer. Submitting the same data often did not produce the same answer, especially if you compared early versions of the model with later versions.
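A minimal sketch of that reproducibility problem (toy data, with scikit-learn's partial_fit standing in for whatever online-update mechanism a real product uses; assumes a recent scikit-learn where the logistic loss is spelled "log_loss"):

# The same input gets different answers from successive "versions" of a
# model that keeps updating itself on incoming data. Without a snapshot of
# each version, earlier answers cannot be reproduced after the fact.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

probe = np.array([[0.1, -0.2]])   # the identical input, submitted repeatedly
history = []

for step in range(5):
    # A new batch arrives and the deployed model updates itself in place.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
    model.partial_fit(X, y, classes=classes)
    history.append(round(float(model.predict_proba(probe)[0, 1]), 3))

print("P(class 1) for the identical input, across model versions:", history)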
 
