Regulating AI: What's the Definition?

  • Thread starter: anorlunda
  • Tags: AI

Summary
The UK government has proposed new regulations for artificial intelligence that require AI systems to identify a legal person responsible for any issues that arise. This "pro-innovation" framework will be managed by existing regulators rather than a central authority, contrasting with EU plans. A significant point of discussion is the need for a clear definition of AI to effectively enforce these regulations, with some advocating for a broad interpretation. Concerns also arise regarding the transparency and explainability of AI systems, as understanding their decision-making processes can be complex. Ultimately, the regulation aims to ensure accountability and safety in AI deployment while navigating the challenges of legal liability.
  • #31
Office_Shredder said:
If AI cars are safer, insurance companies can conclude that on their own and charge less, hence encouraging people to switch. Why are you making this more complicated than it already is? We literally need to do nothing.
I agree. In my opinion the issue is that most software engineers and software engineering companies, unlike all other engineers, are not used to thinking about and prioritizing safety. All we need to do is hold them to the same product safety standards as every other engineer and engineering company.
 
  • Like
Likes Grinkle and Oldman too
  • #32
Oldman too said:
https://deepai.org/publication/a-legal-definition-of-ai

[Recommendation.
Policy makers should not use the term "artificial intelligence" for regulatory purposes. There is no definition of AI which meets the requirements for legal definitions. Instead, policy makers should adopt a risk-based approach: (1) they should decide which specific risk they want to address, (2) identify which property of the system is responsible for that risk and (3) precisely define that property. In other words, the starting point should be the underlying risk, not the term AI.]

I agree that the term AI is hard to pin down as a catch-all for the systems we need to regulate. Furthermore, the national initiative is to accelerate the integration of AI into all private and public sectors. That is a broad set of cases, each with different risks and contexts, and regulatory changes that pertain to a particular context need to be integrated into the existing regulatory frameworks. In some cases, there is no established regulatory framework, since the AI technology is operating in new territory.

The current legal challenge, in my opinion, is that liability depends on knowledge and reason, yet deep learning is not explainable. This can make identifying what went wrong, and the persons responsible, difficult to impossible. To hold someone responsible, you would need to show that they either broke the law or at least had access to a quantitative analysis of the risks. In many cases, quantitative analysis of the risks is not required. With no such requirement, there can be a disincentive even to do a risk assessment, because knowing the risk makes you more liable. The solution seems obvious: require rigorous testing to determine the risks, as is already done in cases where risk can only be determined through testing.
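As a deliberately minimal example of what quantitative testing can buy you, the statistical "rule of three" says that after n independent trials with zero failures, an approximate 95% upper confidence bound on the per-trial failure probability is 3/n. A sketch in Python (my illustration, not taken from any specific regulation):

```python
def rule_of_three_upper(n_trials):
    """Approximate 95% upper confidence bound on the per-trial failure
    probability, given n_trials independent trials with zero failures."""
    if n_trials <= 0:
        raise ValueError("need at least one trial")
    return 3.0 / n_trials

# 3000 failure-free test runs bound the per-run failure risk at about
# 0.1% -- a statement about what was tested, and nothing stronger.
bound = rule_of_three_upper(3000)
```

Even a crude bound like this is the kind of documented risk number that liability arguments can actually be built on.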

The problem is that rigorous testing requirements can be argued to stifle innovation, and that AI systems can evolve constantly, even on the fly. Would an approval process need to be redone with each update to a deep learning system? Does it depend on context? For medical diagnosis? For a lawn mower? For a self-driving car? For a search engine? For a digital assistant? For a weapon? For a stock market trading system?
 
Last edited:
  • #33
Vanadium 50 said:
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year-old"?

* Or a dog, or a flatworm
It is interesting. People have an insanely inflated idea of what AI is capable of. We take a teenager of average intelligence, with average sensory and motor skills, give them on the order of a hundred hours of experience, and they can drive a car reasonably safely. We give the best AI the best sensors and the best actuators, along with on the order of a million hours of experience, and it still struggles with that same task. The human brain is an amazing learning machine that we don’t yet understand and cannot yet artificially replicate or exceed.
 
Last edited:
  • Like
Likes russ_watters, BWV, Borg and 1 other person
  • #34
Vanadium 50 said:
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year-old"?

* Or a dog, or a flatworm
What if it is legal for the AI to drive your car, and the AI is a statistically better/safer driver than a human being?
 
  • #35
Dale said:
It becomes simply a manufactured device doing what the manufacturer designed it to do. Thus the manufacturer has liability for any harm caused by its design.

It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to? Or you can chalk it up to bad luck.
 
  • Wow
  • Skeptical
Likes russ_watters and phinds
  • #36
Jarvis323 said:
It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to? Or you can chalk it up to bad luck.
You should watch "Airplane Disasters" on TV. The number of plane accidents that remain unexplained after the investigators finish is somewhere between few and none. There is no reason why the same could not be applied to AI things, so I seriously doubt your contention that an accident is "not explainable".
 
  • #37
phinds said:
You should watch "Airplane Disasters" on TV. The number of plane accidents that remain unexplained after the investigators finish is somewhere between few and none. There is no reason why the same could not be applied to AI things, so I seriously doubt your contention that an accident is "not explainable".

Deep learning models can have billions of interdependent parameters. Those parameters are set by the back-propagation algorithm as it learns from data. The intractability of understanding a multi-billion-parameter non-linear model aside, the back-propagation process that parameterizes it has been shown to be chaotic. Say such a system makes a decision that results in an accident. How do you propose to explain it? What kind of human choices do you suppose can be held accountable? Would you subpoena all of the training data? Then would you comb through the data, compare it with the multi-billion-parameter model, and conclude that something in the data was the problem, and that someone should have known the emergent model would make a particular decision under a particular set of conditions after the training process?
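To make the sensitivity concrete, here is a toy sketch (mine, with an invented one-parameter "model", not a claim about any production system): gradient descent on a simple double-well loss lands in entirely different minima from nearly identical starting points. Scaled up to billions of parameters, this is the flavor of sensitivity that makes post-hoc explanation so hard.

```python
def train(w, lr=0.1, steps=200):
    """Gradient descent on the toy double-well loss f(w) = w**4 - w**2."""
    for _ in range(steps):
        w -= lr * (4 * w**3 - 2 * w)  # subtract lr * f'(w)
    return w

# Two nearly identical starting points end in opposite minima.
a = train(+0.01)  # converges near +sqrt(1/2)
b = train(-0.01)  # converges near -sqrt(1/2)
```

Which minimum you get, and hence which "decision rule" the trained model embodies, hinges on details no one chose deliberately.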
 
Last edited:
  • #38
Jarvis323 said:
What if it is legal for the AI to drive your car, and the AI is a statistically better/safer driver than a human being?
We'll burn that bridge when we come to it.
 
  • Haha
  • Like
Likes Dale and Borg
  • #39
The article linked in #1 about the EU proposal says, "Make sure that AI is appropriately transparent and explainable"

I think the requirement is there because of potential race bias, not because of accidents.

White person A is approved for credit, but black person B is denied. Why?
A test on 10,000 people shows that whites are approved more often. Transparency and explainability are needed to prove the absence of racism. (Think of the bank red-lining scandals.)

It is also my understanding that no neural net is explainable in that sense. That makes an explainability requirement a very big deal. Scrubbing training data to exclude racial bias still would not provide explanations of the results.

Consider the facial authorization application. An n×n pixel image is input and compared to an n×n reference image. There are two outputs: 1) yes, the two images are of the same person, or 2) no, not the same person. How could it be modified to provide an explanation of the result? Why no for person X? Why are there different success rates for different races or genders or ages? Neural networks are unable to answer those why questions.
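One thing you can still do, without ever answering the why, is audit the error rates from the outside. A minimal sketch (hypothetical data layout and function name, Python):

```python
from collections import defaultdict

def error_rates_by_group(results):
    """results: (group, truly_same_person, model_said_same) triples.
    Returns the fraction of wrong answers per group -- an audit you can
    run even when the network itself can't say *why* it erred."""
    wrong, total = defaultdict(int), defaultdict(int)
    for group, truth, said in results:
        total[group] += 1
        if truth != said:
            wrong[group] += 1
    return {g: wrong[g] / total[g] for g in total}
```

Regulators could demand this kind of disparity measurement even if full explainability stays out of reach.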
 
  • Like
Likes Jarvis323
  • #40
anorlunda said:
The article linked in #1 about the EU proposal says, "Make sure that AI is appropriately transparent and explainable"

I think the requirement is there because of potential race bias, not because of accidents.

White person A is approved for credit, but black person B is denied. Why?
A test on 10,000 people shows that whites are approved more often. Transparency and explainability are needed to prove the absence of racism. (Think of the bank red-lining scandals.)

It is also my understanding that no neural net is explainable in that sense. That makes an explainability requirement a very big deal. Scrubbing training data to exclude racial bias still would not provide explanations of the results.

Consider the facial authorization application. An n×n pixel image is input and compared to an n×n reference image. There are two outputs: 1) yes, the two images are of the same person, or 2) no, not the same person. How could it be modified to provide an explanation of the result? Why no for person X? Why are there different success rates for different races or genders or ages? Neural networks are unable to answer those why questions.

I agree this is a big issue. I don't think explainability applies only to this kind of case though. AI systems are going to control our systems in every sector. In all cases where something could go wrong, too many to list, explainability matters.
 
Last edited:
  • #41
Jarvis323 said:
It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event.
In traditional products it is generally not sufficient for a company to not know that a product is dangerous. They need to know that it is safe.

Jarvis323 said:
The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to?
There are not just regulations; there are laws. Product safety issues are not just regulatory issues; they are also criminal negligence and liability issues. That the risks were unknown is generally taken to mean that the company’s product safety testing protocols were insufficient. In consumer safety, ignorance is not typically a sound defense.

Although there are criminal statutes, by far the bigger legal threat to traditional engineering companies is civil litigation. Such litigation depends much more on the harm received than on any knowledge by the company of the risk.

Jarvis323 said:
Or you can chalk it up to bad luck.
Humans generally prefer to fix blame, including humans serving on juries.
 
  • #42
Jarvis323 said:
AI systems can evolve constantly, even on the fly.
I honestly believe your point there is one of the biggest factors to consider, especially in the long term. Rather than a "fix" for the problem, there will need to be some form of constant monitoring and "upgrading" of the fix. By that, I mean that regulating AI, especially the legal aspects, will have to be a dynamic process, always adapting to new situations. (It's unfortunate that the best candidate for that job is AI; how's that for irony?) After chasing through the links cited in the deepai piece, it's obvious this is a can of worms without a Gordian-knot solution. As an afterthought, it does guarantee this thread a long life, though.
Jarvis323 said:
Would an approval process need to be redone with each update to a deep learning system? Does it depend on context? For medical diagnosis? For a lawn mower? For a self driving car? For a search engine? For a digital assistant? For a weapon? For a stock market trading system?
Very good questions, this is another facet of the can of worms issue. I'm more qualified as a spectator in this thread than anything so I'm better off following the conversation than giving serious answers.
 
  • #43
Jarvis323 said:
Would an approval process need to be redone with each update to a deep learning system?
I hadn’t seen this comment before.

Just so that you can understand my perspective, let me explain my relevant background. I currently work for a global medical device manufacturer, one that has one of the largest number of patents in applying AI technologies to medical products. I am not currently in one of our R&D groups, but I was for 14 years and I worked on more than one AI-related project.

Indeed, the regulatory process needs to be redone with each update to any of our AI systems. Our AI technologies are regulated the same as all of our other technologies. As a result our development is slow, incremental, careful, and thoroughly tested, as it should be, for AI just like for any other technology we incorporate into our medical devices.
 
  • Like
  • Informative
Likes Grinkle, Oldman too and Jarvis323
  • #44
Dale said:
I hadn’t seen this comment before.

Just so that you can understand my perspective, let me explain my relevant background. I currently work for a global medical device manufacturer, one that has one of the largest number of patents in applying AI technologies to medical products. I am not currently in one of our R&D groups, but I was for 14 years and I worked on more than one AI-related project.

Indeed, the regulatory process needs to be redone with each update to any of our AI systems. Our AI technologies are regulated the same as all of our other technologies. As a result our development is slow, incremental, careful, and thoroughly tested, as it should be, for AI just like for any other technology we incorporate into our medical devices.
I think that in the case of medical devices, and also self driving cars, the issue is somewhat straightforward because the risks are taken seriously, and there are existing regulatory frameworks which can be mostly relied on already. But other sectors can be much fuzzier.
 
  • #45
Jarvis323 said:
It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to? Or you can chalk it up to bad luck.
Yes, that's how liability works; no, it can't be chalked up to bad luck. And yes, the flaw in the design is obvious: the AI made a bad decision, therefore the AI is faulty. That's the entire point of AI: it makes its own decisions.

I agree with Dale here, which is why I think this entire discussion is much ado about nothing. At some point maybe AI will be granted personhood. Until then there is no implication for product liability law in AI.
 
  • Like
Likes Dale
  • #46
anorlunda said:
think the requirement is there because of potential race bias
That's kind of the point.

Say you are a mortgage lender. You can't legally use race in the decision process - but from a purely dollar basis, you'd like to. Race and default rates are correlated, but one can't legally use this correlation. So, if you are a zillion-dollar bank, what do you do?

You build a model where the only inputs are perfectly legal, and you don't try to infer race directly - but if the model comes up with an output that has a race-based correlation, well, what's a banker to do? He has plausible deniability.

In our privacy-free world, getting this information is easier than it should be: phone records, groceries, magazine subscriptions, other purchases...
 
  • Informative
Likes Oldman too
  • #47
russ_watters said:
Yes, that's how liability works; no, it can't be chalked up to bad luck. And yes, the flaw in the design is obvious: the AI made a bad decision, therefore the AI is faulty. That's the entire point of AI: it makes its own decisions.

I agree with Dale here, which is why I think this entire discussion is much ado about nothing. At some point maybe AI will be granted personhood. Until then there is no implication for product liability law in AI.

At the least, AI being unexplainable makes product liability law very messy and complex. People will try to shift blame, and to get to the bottom of who is to blame, they will try to determine what went wrong. Then you have the consequence that such cases could be very costly and drawn out. That can strain the legal system, as well as create situations where one side is forced to forfeit because it can't afford to proceed.
 
  • #48
Vanadium 50 said:
That's kind of the point.

Say you are a mortgage lender. You can't legally use race in the decision process - but from a purely dollar basis, you'd like to.
What if the mortgage lender doesn't want to use race in the decision process but the AI decides it's a good metric anyway?
 
  • #49
Jarvis323 said:
At the least, AI being unexplainable makes product liability law very messy and complex. Because, people will try to shift blame, and to get to the bottom of who is to blame, people will try to determine what went wrong.
I don't think that's true. You don't have to unravel every bit of the decision making process in order to judge whether the decision was faulty. Why the AI made its wrong decision doesn't matter. Liability doesn't hinge on whether the decision was accidental or on purpose, it just depends on whether the decision was bad/good.
 
  • #50
  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability

Above are the principles guiding their regulation proposal.

https://www.gov.uk/government/news/...tion-and-boost-public-trust-in-the-technology

The first 2 principles require testing/approval/certification. Sure, this is already done in some cases where a self-learning AI replaces a person or operates a device that used to be operated by an explicit algorithm. But not all.

The 3rd is arguably impossible in an absolute sense, but can be strived for. Testing requirements can help here as well.

The 4th, fairness, we have discussed. It can also be helped by testing.

The 5th and 6th (identify a legal person responsible and clarify routes to redress or contestability) are crucial if you want to avoid messy and costly legal battles.
 
  • #51
Jarvis323 said:
people will try to shift blame
Of course they will try. They would have some pretty terrible lawyers if they didn’t even try to shift blame.

But it is very difficult for a manufacturer who has produced a product that did serious harm to actually convince a jury that they are not to blame. It can happen on occasion, but that is very much the exception rather than the rule. This is why companies are usually the ones that offer a settlement and plaintiffs are usually the ones that threaten to go to trial.
 
  • #52
russ_watters said:
What if the mortgage lender doesn't want to use race in the decision process but the AI decides it's a good metric anyway?
The AI doesn't know about race. Ironically, if you want a race-neutral process, you need to feed that into the AI training: you can't be race-blind and race-neutral.
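A toy illustration of why race-blind is not race-neutral (all data invented): a model that sees only neighborhood still produces race-correlated approvals whenever neighborhood is a proxy for race.

```python
from collections import defaultdict

# Invented toy data: (neighborhood, race, repaid_previous_loan).
rows = [
    ("north", "A", True), ("north", "A", True), ("north", "B", True),
    ("south", "B", False), ("south", "B", False), ("south", "A", False),
]

# "Race-blind" model: approve purely on each neighborhood's repayment rate.
repaid = defaultdict(list)
for hood, _race, ok in rows:
    repaid[hood].append(ok)
approve = {h: sum(v) / len(v) > 0.5 for h, v in repaid.items()}

# Yet approvals still correlate with race, because neighborhood is a proxy.
by_race = defaultdict(list)
for hood, race, _ok in rows:
    by_race[race].append(approve[hood])
rates = {r: sum(v) / len(v) for r, v in by_race.items()}
```

The model never saw the race column, yet group A ends up approved more often than group B; removing the input does not remove the correlation.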
 
  • Like
Likes russ_watters and Dale
  • #53
There is a lot here about "unexplainable". In many - possibly most - cases, the actual determination is deterministic. The probability of a loan default is a_1 * income + a_2 * (average number of hamburgers eaten in a year) + a_3 * (number of letters in your favorite color) + ... The AI training process produces the a's, and that's a black box, but the actual calculation in use is transparent.

There are many good reasons to do this.
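Something like this (coefficient values invented purely for illustration): the training that produced the a's may be a black box, but the deployed scoring rule is a formula anyone can audit.

```python
# Coefficients as they might emerge from an opaque training run
# (values invented for illustration only):
coeffs = {"income": -0.00003, "hamburgers_per_year": 0.002, "name_length": 0.05}
intercept = 1.0

def default_score(applicant):
    """The deployed rule: a plain linear formula anyone can inspect."""
    return intercept + sum(coeffs[k] * applicant[k] for k in coeffs)
```

An auditor can read off exactly how much each input moves the score, even with no visibility into how the coefficients were fit.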
 
  • Informative
  • Like
Likes russ_watters and Oldman too
  • #54
Vanadium 50 said:
You build a model where the only inputs are perfectly legal, and you don't try to infer race directly - but if the model comes up with an output that has a race-based correlation, well, what's a banker to do? He has plausible deniability.
It doesn't just apply to race. You could be a person with a 23-character name, whose birthday is on April 1st, who watches basketball, and who has a green fence. And the AI could determine this makes you more likely to default, because people with those features happen by chance to have defaulted more often than normal, and the AI just sees correlation, not causality.

A solution involves having sufficiently balanced and stratified training data. If you have stratified over all features well enough, then race shouldn't be predictable from the features in the training data. You could verify that. However, the combinatorial explosion of feature combinations makes this difficult to do perfectly if you use a lot of features. The rarer your set of features, the more prone you are to be the victim of a selection bias that leaks into the model.

If you can train a model with properly stratified data, and you can verify that the feature set is not a predictor of race within the training data, then the next step is to eliminate race as a feature in the model; theoretically, it should then be unlikely to have a racial bias. The next thing you do is test it.
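A minimal sketch of that verification step (hypothetical helper, Python): compare how well a single feature predicts the protected attribute against the majority-class baseline; a large gap means the feature leaks the attribute, i.e. the data is not well stratified.

```python
from collections import Counter, defaultdict

def attribute_leakage(rows, feature_idx, attr_idx):
    """Accuracy of guessing the protected attribute from one feature,
    versus the majority-class baseline. A large gap means the feature
    leaks the attribute -- the data is not well stratified over it."""
    by_value, overall = defaultdict(Counter), Counter()
    for row in rows:
        by_value[row[feature_idx]][row[attr_idx]] += 1
        overall[row[attr_idx]] += 1
    n = len(rows)
    baseline = max(overall.values()) / n
    guided = sum(max(c.values()) for c in by_value.values()) / n
    return baseline, guided
```

Run over every feature (and, combinatorics permitting, feature pairs), this gives a concrete, testable stratification check rather than a hope.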
 
Last edited:
  • #55
Dale said:
All we need to do is hold them to the same product safety standards as every other engineer and engineering company.
I don't think that would ever work. Validating complex software is enough of a nightmare as it is, even without any 'learning' involved (learning being, by design, a kind of programmer-independent process, meant precisely to correct or prevent errors of judgment in programming). At an industrial level, it just won't happen.

Vanadium 50 said:
"I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year old."
Involving 'smart' is a trap here. I don't need a 'smart car'. I need one which can drive better than I do. I don't care if it's dumb as a flatworm.

Dale said:
The human brain is an amazing learning machine that we don’t yet understand and cannot yet artificially replicate or exceed.
The human brain has a specific advantage, which is choice. I won't drive out in a snowstorm, for example (I prefer to imagine that not many people would). That alone prevents plenty of accidents from happening.
In this regard, AIs will be at quite a disadvantage, I think 🤔

russ_watters said:
Why the AI made its wrong decision doesn't matter. Liability doesn't hinge on whether the decision was accidental or on purpose, it just depends on whether the decision was bad/good.
It's just the AI in discussion is about interacting with unclear factors (human beings).
Kind of like a jury deciding a shooting case while omitting the fact of whether or not you were at gunpoint?
 
  • #56
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.

View attachment 304739

On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?

And then there was Cornelis Drebbel and his temperature-controlled oven, "one of the first manmade feedback mechanisms in history". (This was around 1620).

 
  • #57
Rive said:
Kind of like a jury deciding a shooting case while omitting the fact of whether or not you were at gunpoint?
The gun in this case is the data and how the AI was trained. There are a host of questions that could be asked when investigating an algorithm. Just a few that come to mind at 5am:
  • What data was used? Was the data biased? If you have an AI that tells you where to send police cars to deter crime and it's basing its decision on where the most arrests occur, it will be a self-reinforcing algorithm since people tend to get arrested where the police are. Similarly, an algorithm will tell you that people default on mortgages more often in poorer neighborhoods. If your only data for black people is from a ghetto, it will decide that black people are a risk. But what about adding a zip code and leaving out race? You could still have problems with this.
  • What algorithm was used? What other algorithms were tried and what were their results? This could be helpful for the defense if they could show that they tried multiple paths to build the best possible algorithm.
  • Additionally, there are two major types of algorithms in machine learning - supervised and unsupervised. In supervised learning, you have data that is labeled with the correct classification (i.e. this is a picture of a cat). In unsupervised learning, that isn't the case and the algorithms are learning on their own. Picture a video game where you click randomly on the screen and your only reward is whether you rescue the princess or die. Algorithms trained in this manner can learn how to play video games very well. However, the data is not only the game inputs that were fed to the algorithm but also its randomly selected initial attempts and statistical tracking of results while attempting to win over millions of games. There are then parameters for randomly choosing the statistically correct actions or retrying what it currently believes to be statistically wrong ones in order to explore. How do you reproduce the 'data' in this case?
  • What are the acceptable operating inputs for the data? In the case of a mortgage, a defendant could argue that their AI rejected an applicant because of missing or incorrect information on the application. Also, consider this case with respect to self-driving cars - Hacking Stop Signs. Who is to blame if someone defaces a stop sign so that a self-driving car doesn't recognize it? Humans would still recognize stop signs in the examples that are given, but there could just as easily be cases where the AI would still recognize the sign when humans wouldn't. The company would surely have a map with all of the stop signs in it, but that is a maintenance task that never finishes. There will always be real-world information that isn't up to date in the database.
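The self-reinforcing arrest loop in the first bullet can be sketched in a few lines (all numbers invented): two districts with equal true crime, a one-arrest difference in the historical record, and a dispatch rule that simply follows the record.

```python
# Two districts with *equal* true crime, a tiny initial imbalance in
# recorded arrests, and patrols dispatched by the record (toy numbers).
true_crime = [10, 10]
observed = [11, 10]  # historical arrest counts
for _year in range(5):
    # Patrols go to the district with more recorded arrests...
    target = 0 if observed[0] >= observed[1] else 1
    # ...and new arrests are recorded only where the patrols are.
    observed[target] += true_crime[target]
# The one-arrest gap snowballs with no underlying difference in crime.
```

The recorded data diverges from reality purely because the algorithm's outputs feed its own future inputs, which is exactly what an investigator would need to untangle.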
 
Last edited:
  • #58
Borg said:
If your only data for black people is from a ghetto, it will decide that black people are a risk.
The problem remains even if your training data set is big enough to cover 100% of the population and the result still turns out to correlate with race.

The critics and regulators will never be satisfied with unbiased inputs (the training data), they will judge compliance by the outcome (the AI's results).
 
  • Like
Likes Oldman too and russ_watters
  • #59
Rive said:
I don't think that would ever work.
It is already working in my company
 
  • #60
Dale said:
It is already working in my company
Based on the approach you wrote about, I have a feeling that either the AI you are talking about and the AI that is 'in the air' are slightly different things, or yours is working with some really constrained data sets.
 
Last edited:
