Regulating AI: What's the Definition?

  • Thread starter: anorlunda
  • Tags: AI
AI Thread Summary
The UK government has proposed new regulations for artificial intelligence that require AI systems to identify a legal person responsible for any issues that arise. This "pro-innovation" framework will be managed by existing regulators rather than a central authority, contrasting with EU plans. A significant point of discussion is the need for a clear definition of AI to effectively enforce these regulations, with some advocating for a broad interpretation. Concerns also arise regarding the transparency and explainability of AI systems, as understanding their decision-making processes can be complex. Ultimately, the regulation aims to ensure accountability and safety in AI deployment while navigating the challenges of legal liability.
anorlunda
https://www.lawgazette.co.uk/law/ar...es-to-require-human-liability/5113150.article
Artificial intelligence systems will have to identify a legal person to be held responsible for any problems under proposals for regulating AI unveiled by the government today.

The proposed 'pro innovation' regime will be operated by existing regulators rather than a dedicated central body along the lines of that being created by the EU, the government said.

The proposals were published as the Data Protection and Digital Information Bill, which sets out an independent data protection regime, is introduced to parliament. The measure will be debated after the summer recess.

It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.

[Image: James Watt's flyball governor]


On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?
 
Yeah, that whole thing is a bag of worms, and politicians are the last people we want making decisions about it (but unfortunately, they are the people who WILL be making decisions about it).
 
From the core principles section, you could interpret this to mean nothing more than that the company has to have a POC (point of contact) for each algorithm.

The core principles of AI regulation proposed today will require developers and users to:
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability
 
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.


On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?

As a philosophical question it might be tough. But it's not too much trouble to come up with a legal definition with a specific purpose.

A harder task is this:

Make sure that AI is appropriately transparent and explainable

Explaining the AI systems of interest is currently essentially impossible, depending on your definition of explainable. Thus 'explainable' would need to be defined in a very particular way as well, and it could easily be that the AI systems of interest, and artificial general intelligence in particular, will never meet the criteria.

As an example, suppose an AI system appears to deliberately crash an airplane. Then one has to try to figure out why it did that, which can be impossible. Did someone somehow cause it to happen on purpose? Did it make a mistake? What went into the decision and who is responsible? Because the decision is the result of billions of parameters in a non-linear mathematical model, which obtained their values through a chaotic learning process from exabytes of data, it is not easy to find someone to blame, because nobody could have ever predicted it would happen.
 
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.


On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?

If an AI is treated like any machine, where the owner/operator is responsible for any damages, why does the definition matter? If Google’s AI really did achieve sentience (which I doubt, but there was that recent news story about a fired engineer who claimed this), is the company any more or less liable for anything it does?
 
  • Like
Likes Grinkle, atyy, Dale and 3 others
jedishrfu said:
Here's an example where AI regulation would help. A chess-playing robot broke a child's finger. The child was too hasty in trying to make his next move and the robot grabbed his finger.

https://www.techspot.com/news/95405-watch-chess-playing-robot-grabs-child-opponent-finger.html

Why does more regulation help? The law would treat it like the child being injured playing with a toy or power tool: either the owner or the manufacturer is liable, depending on the context.
 
If it is legal for a person to be racist, why should it be illegal for a machine?

Gaa. It makes no sense to anthropomorphize a machine, no matter what the details. Maybe I should blame Isaac Asimov for conditioning us to think of smart machines being like people. That's fun in a story, but bunk in real life.
 
  • #10
A gray legal issue exists if your car hits something while driving on autopilot. I am not sure why any manufacturer would want to assume the potential liability for the wrecks of their vehicles.

Looks like Tesla found a novel solution:

On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.

https://abovethelaw.com/2022/06/whe...slas-artificial-intelligence-and-human-error/
 
  • Like
Likes russ_watters
  • #11
This, perhaps unsurprisingly, avoids an actual issue in favor of a clickbaity non-issue.

A human brain has 100 billion neurons. No supercomputer has even 10 million cores. More to the point, the human brain has a quadrillion synapses, while the number of interconnects in a supercomputer peaks in the tens of thousands.

You might be able to get a flatworm's level of intelligence onto today's hardware, maybe even a jellyfish's, but a single ant is out of the question.

You can, of course, simulate things - I can write a great AI to play chess, but I can't discuss "why did Bobby Fischer go nutso" with it afterwards. But we have been doing this for over 50 years. ELIZA is not really an unusually bad therapist.

The more immediate issue has to do with responsibility. When a self-driving car hits someone, who is responsible? The owner? The programmer? The company that hired the programmer? Does it matter if it hit one person to avoid hitting several? Does it matter if the number of highway deaths goes down despite a few well-publicized accidents?

But that's real work. If there's one thing legislators hate more than taking a stand on a real issue, it's doing actual work.
 
  • #12
Vanadium 50 said:
A human brain has 100 billion neurons. No supercomputer has even 10 million cores.

The largest version of GPT-3 has 175 billion parameters, and its training process involved ##3.64\times10^{23}## FLOPs.
 
  • #13
BWV said:
if an AI is treated like any machine, where the owner / operator is responsible for any damages, why does the definition matter? if Google’s AI really did achieve sentience (which I doubt, but there was that recent news story about a fired engineer who claimed this), is the company any more or less liable for anything it does?
I agree. I don't see a reason why the question matters until/unless a narrow/hard AI gets legal autonomy. We're a long way from that. Until then, it's just a machine with an owner.

Also, I don't think Elon's odds in court are going to be very good, using the public as guinea pigs for his R&D project (he calls it a beta test, but it's really not even that yet). He should be liable for his product's dangers.
 
  • #15
I think we could probably look at the AI liability issue in a similar way to liability for harm done by pharmaceuticals. In both cases we cannot completely determine the side effects, risks, complications, etc. analytically. Before a new drug is allowed to be released for use on the public, it has to be tested rigorously and approved, and the same should be true for AI systems with potential for large scale impact.
 
  • #16
Jarvis323 said:
Before a new drug is allowed to be released for use on the public, it has to be tested rigorously, and the same should be true for AI systems with potential for large scale impact.
That would be nice but who do you think is going to make that happen?
 
  • #17
phinds said:
That would be nice but who do you think is going to make that happen?
Maybe it can be automated?
 
  • #18
Jarvis323 said:
The largest version of GPT-3 has 175 billion artificial neurons
That's trading space for time. I can have a CPU simulate N neurons, at a cost of 1/N in speed. N has to be at least 10,000. But that just makes the interconnect problem even tougher, because I am putting 10,000 times as much traffic onto the interconnects. And as I said, that's more important.

Jarvis323 said:
training process involved 3 x 10^24 FLOPS.
At, say, 500 petaflops (the fastest available) that's about 70 days. Human beings learn on the order of seconds.
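Spelling out that estimate, using the FLOP count as quoted just above:
$$\frac{3\times10^{24}\ \text{FLOPs}}{5\times10^{17}\ \text{FLOP/s}} = 6\times10^{6}\ \text{s} \approx 70\ \text{days}.$$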
 
  • #19
While I was working on a data mining project for an insurance company, the issue was always racial bias, so any DM (data mining) models were replaced with SQL so that the company could demonstrate to regulators that there was no bias in how it sold its services.

The DM identified a group of customers, the attributes of that group were identified, and then an SQL query over those attributes was crafted to approximate what the DM found.
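As a rough illustration of that workflow (the table, column names and cutoffs below are invented for illustration, not the actual system), the segment discovered by the data mining model gets re-expressed as an explicit, auditable query:

[CODE=python]
# Hypothetical sketch: translate a segment found by a data-mining model into an
# explicit SQL query that regulators can read line by line. Columns and cutoffs
# are made up for illustration.

segment_rules = [
    ("years_as_customer", ">=", 5),
    ("annual_mileage",    "<",  12000),
    ("prior_claims",      "=",  0),
]

def rules_to_sql(table, rules):
    """Build a transparent WHERE clause from explicit attribute rules."""
    clauses = [f"{col} {op} {val}" for col, op, val in rules]
    return f"SELECT customer_id FROM {table} WHERE " + " AND ".join(clauses) + ";"

print(rules_to_sql("customers", segment_rules))
# SELECT customer_id FROM customers WHERE years_as_customer >= 5 AND annual_mileage < 12000 AND prior_claims = 0;
[/CODE]

Unlike the mined model, the resulting query can be inspected directly for prohibited criteria.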
 
  • #20
Vanadium 50 said:
That's trading space for time. I can have a CPU simulate N neurons, at a cost of 1/N in speed. N has to be at least 10,000. But that just makes the interconnect problem even tougher, because I am putting 10,000 times as much traffic onto the interconnects. And as I said, that's more important. At, say, 500 petaflops (the fastest available) that's about 70 days. Human beings learn on the order of seconds.
I think the main issue here is the difference between training time and operating time. Training time for these models is huge. GPT-3 took about 355 years' worth of Volta V100 GPU time to train. 364,000,000,000,000,000,000,000 is a lot of FLOPs. But that is a one-time preprocessing cost, which maps to a little under 5 million dollars' worth of compute time. Afterward, when it is in operation, it is much faster. It can run in real time without a large supercomputer.

EDIT: I mistakenly said GPT-3 has 175 billion neurons; actually, that is the number of parameters. And it is true that a real neuron is much more complex than an artificial one.
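For a rough sanity check of those figures, here is a minimal back-of-the-envelope sketch. The effective per-GPU throughput and the hourly price below are assumptions chosen for illustration, not numbers from this thread:

[CODE=python]
# Back-of-the-envelope: total training FLOPs -> GPU-years -> rough compute cost.

total_flops = 3.64e23          # training compute cited above
flops_per_gpu = 28e12          # assumed sustained V100 throughput (FLOP/s)
dollars_per_gpu_hour = 1.5     # assumed cloud price

gpu_seconds = total_flops / flops_per_gpu
gpu_years = gpu_seconds / (365 * 24 * 3600)
cost = gpu_seconds / 3600 * dollars_per_gpu_hour

print(f"{gpu_years:.0f} GPU-years, roughly ${cost/1e6:.1f}M of compute")
[/CODE]

With these assumptions it lands at roughly 400 GPU-years and a few million dollars; a slightly higher assumed throughput gives the ~355 GPU-years / just-under-$5M figures quoted above. Either way, the point stands: training is a one-time cost, and inference afterwards is vastly cheaper per query.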
 
  • #21
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition.
I don't think that 'one size fits all' would do for this issue. The fact that two main examples tend to come up in such threads tells a tale:
- industrial side: autopilot and such things where one standardized product can be the basis of evaluation
- personal side: where interaction with individuals is the main point, and so evaluation should be based on actual usage/instance of an AI

GPT is an interesting example, but what would make me even more interested is the (I hope it's coming soon) counterpart AI for GPT: one which is able to sift the gibberish and separate pearls from trash, rather than just creating more of it... I wonder how that would perform against social media bots - and politicians o0)
 
  • #22
How to compensate victims when technology fails? I think it should be built into the product price. Here are two examples. Let's visualize both as being provided by government, so that the tort system and punishment of evil corporations are removed from consideration.
  1. Automated vehicle fans claim that auto accident fatalities can be reduced from 50K to 10K per year. They are very far from proving that, but that is their aspiration. California says they need to triple capacity on freeways with no new roads. California would make it illegal to use human drivers on the freeways. Total deaths are reduced, but the remaining deaths that do occur are by definition caused by flaws or bugs or weaknesses in the self-driving vehicles. I propose a $50K surcharge on new vehicles to compensate the 10K victims. So if we sell ##10^6## new cars per year, that creates a fund of ##\$5\times10^{10}##, or $5 million per fatal victim.
  2. A pill taken yearly cures and prevents all cancers, but 0.1% of people taking the pill die immediately. 300 million people take the pill (40 million say no), 600K cancer deaths per year are avoided, 1.8 million non-fatal cancers are avoided, but 300K people are killed. The pills cost $10 each, but a surcharge of $500 per pill provides a fund to compensate each victim with $500K (the arithmetic is worked out below). Note that even a short delay for more testing and improvement will allow more people to die from cancer than could be saved from toxicity.
One can argue for larger or smaller per-victim amounts, or for more testing and improvement of the products to reduce deaths, but those do nothing to change the moral or legal issues. Make up any numbers you want; the moral issues remain invariant.
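Spelling out the surcharge arithmetic behind both examples:
$$\text{per-victim fund} = \frac{\text{surcharge} \times \text{units sold per year}}{\text{victims per year}}$$
For the cars: ##(\$5\times10^{4} \times 10^{6}) / 10^{4} = \$5\times10^{6}## per fatal victim. For the pill: ##(\$500 \times 3\times10^{8}) / (3\times10^{5}) = \$5\times10^{5}## per victim.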
 
  • #23
I've been looking over the case in the US. It looks like they've recently established an advisory committee, which has provided a document on guidance for AI regulation. It appears though that a primary concern is to ensure that regulation doesn't stifle innovation, and to make sure that the US leads the world in technological development.

The mission of the National AI Initiative is to ensure continued U.S. leadership in AI research and development, lead the world in the development and use of trustworthy AI in the public and private sectors, and prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society.

https://www.ai.gov/naiac/
https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf

The advisory board looks like a great team, but it is largely made up of industry executives, and the recommendations are basically to avoid regulation which hampers innovation and growth.

Of course tech companies want to advocate for their own interests, but in addition, the situation is framed within the context of a so-called "AI arms race" between the US and China. The term covers more than just weapons of war; it also takes in national security concerns and economic competition. The economic interests of companies, and the goal of winning the race against China, compete with ethical concerns and risk management. Some think this is a recipe for disaster.
 
  • #25
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI.
Does it matter in this context? If a legal person (e.g. you or your company) makes an AI, whether it is a deep learning algorithm or a flyball governor, then that person is held legally responsible for any negligence in its design. In this sense an AI is simply held to the same standard as any other product.
 
  • #26
Dale said:
Does it matter in this context?
I think it matters a lot. Let's say the government prosecutes me.

Prosecutor: "Your honor, defendant violated our AI regulations."
Defendant: "Prove that my product is an AI."

A narrow legal definition is rapidly outdated. A broad definition ropes in too many other things. The difficulty of proving the defendant has an AI is critically dependent on a very well written definition of AI. From what I've seen so far, they don't have one.
 
  • #27
anorlunda said:
Prosecutor: "Your honor, defendant violated our AI regulations."
Defendant: "Prove that my product is an AI."
As a legal strategy that is unlikely to be beneficial to a manufacturer of an AI that has caused harm to someone.

First, expert witness testimony could establish that it is an AI. This is exceptionally common.

Second, the liability issue with AI is that an AI potentially made its own decision and that the manufacturer should therefore not be liable for the AI’s decision. By arguing that the system is not an AI then that removes the “it made its own decision” argument anyway. It becomes simply a manufactured device doing what the manufacturer designed it to do. Thus the manufacturer has liability for any harm caused by its design.

So even if the defense wins this argument, they still bear legal responsibility for the harm their system caused.
 
  • #28
anorlunda said:
How to compensate victims when technology fails? I think it should be built into the product price. Here are two examples. Let's visualize both as being provided by government, so that the tort system and punishment of evil corporations are removed from consideration.
  1. Automated vehicle fans claim that auto accident fatalities can be reduced from 50K to 10K per year. They are very far from proving that, but that is their aspiration. California says they need to triple capacity on freeways with no new roads. California would make it illegal to use human drivers on the freeways. Total deaths are reduced, but the remaining deaths that do occur are by definition caused by flaws or bugs or weaknesses in the self-driving vehicles. I propose a $50K surcharge on new vehicles to compensate the 10K victims. So if we sell ##10^6## new cars per year, that creates a fund of ##\$5\times10^{10}##, or $5 million per fatal victim.
We already charge people money to compensate for the damage their cars cause. It's called insurance. If AI cars are safer, insurance companies can conclude that on their own and charge less, hence encouraging people to switch. Why are you making this more complicated than it already is? We literally need to do nothing.

anorlunda said:
  1. A pill taken yearly, cures and prevents all cancers, but 0.1% of people taking the pill die immediately. 300 million people take the pill (40 million say no), 600K cancer deaths per year are avoided, 1.8 million non-fatal cancers avoided, but 300K people are killed. The pills cost $10 each, but a surcharge of $500 per pill provides a fund to compensate each victim by $500K. Note that even a short delay for more testing and improvement will allow more people to die from cancer than could be saved from toxicity.

Let's take a step back here. Would it be better or worse for everyone if we all paid a yearly tax that compensated everyone who died of cancer with $500,000? If the answer is worse, then you're stapling a huge anchor to what would otherwise be an amazing medical invention. If the answer is better, why are we waiting for this pill to start advocating for it?
 
  • #29
Dale said:
Does it matter in this context? If a legal person (e.g. you or your company) makes an AI, whether it is a deep learning algorithm or a flyball governor
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, which is less smart than a five-year-old, drive my car"?

* Or a dog, or a flatworm
 
  • #30
https://deepai.org/publication/a-legal-definition-of-ai
I'm sure this isn't the last word on the subject; however, it is an impressive attempt at addressing the thread's title.

The discussion section is very insightful; you can enjoy that at your leisure. Here, I'll cut to the chase and just post the recommendation from that discussion.

[Recommendation.
Policy makers should not use the term "artificial intelligence" for regulatory purposes. There is no definition of AI which meets the requirements for legal definitions. Instead, policy makers should adopt a risk-based approach: (1) they should decide which specific risk they want to address, (2) identify which property of the system is responsible for that risk and (3) precisely define that property. In other words, the starting point should be the underlying risk, not the term AI.]
 
  • #31
Office_Shredder said:
If AI cars are safer, insurance companies can conclude that on their own and charge less, hence encouraging people to switch. Why are you making this more complicated than it already is? We literally need to do nothing.
I agree. In my opinion the issue is that most software engineers and software engineering companies, unlike all other engineers, are not used to thinking about and prioritizing safety. All we need to do is hold them to the same product safety standards as every other engineer and engineering company.
 
  • #32
Oldman too said:
https://deepai.org/publication/a-legal-definition-of-ai

[Recommendation.
Policy makers should not use the term "artificial intelligence" for regulatory purposes. There is no definition of AI which meets the requirements for legal definitions. Instead, policy makers should adopt a risk-based approach: (1) they should decide which specific risk they want to address, (2) identify which property of the system is responsible for that risk and (3) precisely define that property. In other words, the starting point should be the underlying risk, not the term AI.]

I agree that the term AI is hard to pin down as a catch-all for the systems we need to regulate. Furthermore, the national initiative is to accelerate the integration of AI into all private and public sectors. That is a broad set of cases, each with different risks and contexts, and regulatory changes that pertain to a particular context need to be integrated into the existing regulatory frameworks. In some cases, there is no established regulatory framework, since the AI technology is operating in new territory.

The current legal challenge, in my opinion, is that liability depends on knowledge and reason, yet deep learning is not explainable. This can make identifying what went wrong, and the persons responsible, difficult or impossible. In order to hold someone responsible, they would need to have either broken the law or at least had access to a quantitative analysis of the risks. In many cases, quantitative analysis of the risks is not required. With no such requirement, there can be a disincentive to even do a risk assessment, because knowing the risk makes you more liable. The solution seems obvious: require rigorous testing to determine the risks, as is already done in cases where risk can only be determined through testing.

The problem is that rigorous testing requirements can be argued to stifle innovation, and AI systems can evolve constantly, even on the fly. Would an approval process need to be redone with each update to a deep learning system? Does it depend on context? For medical diagnosis? For a lawn mower? For a self-driving car? For a search engine? For a digital assistant? For a weapon? For a stock market trading system?
 
  • #33
Vanadium 50 said:
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year old."

* Or a dog, or a flatworm
It is interesting. People have this insanely inflated idea of what AI is capable of. We take a teenage human of average intelligence, with average sensory and motor skills, give them on the order of a hundred hours of experience, and they can drive a car reasonably safely. We give the best AI the best sensors and the best actuators, and on the order of a million hours of experience, and it still struggles with that same task. The human brain is an amazing learning machine that we don't yet understand and cannot yet artificially replicate or exceed.
 
  • #34
Vanadium 50 said:
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year old."

* Or a dog, or a flatworm
What if it is legal for the AI to drive your car, and the AI is a statistically better/safer driver than a human being?
 
  • #35
Dale said:
It becomes simply a manufactured device doing what the manufacturer designed it to do. Thus the manufacturer has liability for any harm caused by its design.

It doesn't really matter if we classify the decision as an AI decision; what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to know? Or you can chalk it up to bad luck.
 
  • #36
Jarvis323 said:
It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to? Or you can chalk it up to bad luck.
You should watch "Airplane Disasters" on TV. The number of plane accidents that remain unexplained after the investigators finish is somewhere between few and none. There is no reason why the same could not be applied to AI things, so I seriously doubt your contention that an accident is "not explainable".
 
  • #37
phinds said:
You should watch "Airplane Disasters" on TV. The number of plane accidents that remain unexplained after the investigators finish is somewhere between few and none. There is no reason why the same could not be applied to AI things, so I seriously doubt your contention that an accident is "not explainable".

Deep learning models can have billions of interdependent parameters. Those parameters are set by the back-propagation algorithm as it learns from data. The intractability of understanding a multi-billion-parameter non-linear model aside, the back-propagation process which parameterizes it has been shown to be chaotic. Say such a system makes a decision that results in an accident. How do you propose to try to explain it? What kind of human choices do you suppose can be held accountable? Would you subpoena all of the training data? Then would you comb through the data, compare it with the multi-billion-parameter model, and come to the conclusion that something in the data was the problem, and that someone should have known the emergent model would make a particular decision under a particular set of conditions after the training process?
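As a toy illustration of that sensitivity (a minimal NumPy sketch on made-up data, nothing like a real safety-critical system): two training runs that start from identical data and identical initial weights, and differ only in the order the examples are presented, generally end up with different parameters, and the only record of "why" is the training history itself.

[CODE=python]
import numpy as np

def train(order_seed):
    data_rng = np.random.default_rng(0)        # identical data and initial weights every run
    X = data_rng.normal(size=(200, 4))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)  # toy non-linear labelling rule
    W1 = data_rng.normal(scale=0.5, size=(4, 8))
    W2 = data_rng.normal(scale=0.5, size=(8, 1))
    order_rng = np.random.default_rng(order_seed)   # the ONLY difference between runs
    for _ in range(200):
        for i in order_rng.permutation(len(X)):
            x = X[i:i + 1]
            h = np.tanh(x @ W1)                      # forward pass, hidden layer
            p = 1.0 / (1.0 + np.exp(-(h @ W2)))      # predicted probability
            g = p - y[i]                             # dLoss/dlogit for cross-entropy
            grad_W2 = h.T * g                        # back-propagated gradients
            grad_W1 = x.T @ ((g * W2.T) * (1.0 - h ** 2))
            W2 -= 0.1 * grad_W2
            W1 -= 0.1 * grad_W1
    return W1, W2

W1a, _ = train(order_seed=1)
W1b, _ = train(order_seed=2)
print("mean |weight difference| between runs:", float(np.abs(W1a - W1b).mean()))
[/CODE]

Scale that from a few dozen weights to billions, and from a small synthetic dataset to exabytes of scraped data, and it becomes clear why tracing a single bad output back to an identifiable human decision is so hard.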
 
  • #38
Jarvis323 said:
What if it is legal for the AI to drive your car, and the AI is a statistically better/safer driver than a human being?
We'll burn that bridge when we come to it.
 
  • #39
The article linked in #1 about the UK proposal says, "Make sure that AI is appropriately transparent and explainable".

I think the requirement is there because of potential race bias, not because of accidents.

White person A is approved for credit, but black person B is denied. Why?
A test on 10,000 people shows that whites are approved more often. Transparency and explainability are needed to prove the absence of racism. (Think of the bank redlining scandals.)

It is also my understanding that no neural net is explainable in that sense. That makes an explainability requirement a very big deal. Scrubbing training data to exclude racial bias still would not provide explanations of the results.

Consider a facial authorization application. An ##n\times n## pixel image is input and compared to an ##n\times n## reference image. There are two outputs: 1) yes, the two images are of the same person, or 2) no, they are not the same person. How could it be modified to provide an explanation of the result? Why "no" for person X? Why are there different success rates for different races, genders, or ages? Neural networks are unable to answer those "why" questions.
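To make that concrete, a typical verification pipeline boils down to something like the sketch below. The embedding function here is only a stand-in for a real face-recognition network, and the image size and threshold are made up; the point is that the system's entire output is a thresholded similarity score.

[CODE=python]
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep face-embedding network: in a real system this is a
    learned model with millions of parameters, not a fixed formula."""
    v = image.astype(float).flatten()
    return v / (np.linalg.norm(v) + 1e-9)

def same_person(img_a, img_b, threshold=0.8) -> bool:
    # The only thing the system "knows" is a similarity score vs. a threshold.
    similarity = float(embed(img_a) @ embed(img_b))
    return similarity >= threshold

rng = np.random.default_rng(0)
probe = rng.integers(0, 256, size=(112, 112))      # hypothetical n x n probe image
reference = rng.integers(0, 256, size=(112, 112))  # hypothetical n x n reference image
print(same_person(probe, reference))
[/CODE]

No step in that pipeline produces a reason, which is why "why was it a no for person X?" has no ready answer, and why differing error rates across groups can only be observed statistically rather than read off from the model.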
 
  • #40
anorlunda said:
The article linked in #1 about the UK proposal says, "Make sure that AI is appropriately transparent and explainable"

I think the requirement is there because of potential race bias, not because of accidents.

White person A is approved for credit, but black person B is denied. Why?
A test on 10000 people shows that whites are approved more often. Transparent and explainable are needed to prove absence of racism. (Think of the bank red lining scandals.)

It is also my understanding that no neural net is explainable in that sense. That makes an explainability requirement a very big deal. Scrubbing training data to exclude racial bias still would not provide explanations of the results.

Consider the facial authorization application. A nxn pixel image is input, it is compared to a nxn reference image. There are two outputs, 1) Yes the two images are of the same person, or 2) No, not the same person. How could it be modified to provide an explanation of the result? Why no for person X? Why are there different success rates for different races or genders or ages? Neural networks are unable to answer those why questions.

I agree this is a big issue. I don't think explainability applies only to this kind of case though. AI systems are going to control our systems in every sector. In all cases where something could go wrong, too many to list, explainability matters.
 
  • #41
Jarvis323 said:
It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event.
In traditional products it is generally not sufficient for a company to not know that a product is dangerous. They need to know that it is safe.

Jarvis323 said:
The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to?
There are not just regulations; there are laws. Product safety issues are not just regulatory issues; they are also criminal negligence and liability issues. That the risks were unknown is generally taken to mean that the company's product safety testing protocols were insufficient. In consumer safety, ignorance is not typically a sound defense.

Although there are criminal statutes, by far the bigger legal threat to traditional engineering companies is civil litigation. Such litigation depends much more on the harm received than on any knowledge by the company of the risk.

Jarvis323 said:
Or you can chalk it up to bad luck.
Humans generally prefer to fix blame, including humans serving on juries.
 
  • #42
Jarvis323 said:
AI systems can evolve constantly, even on the fly.
I honestly believe your point there is one of the biggest factors to consider, especially in the long term. Rather than a "fix" for the problem, there will need to be some form of constant monitoring and "upgrading" of the fix. By that, I'm thinking that regulating AI, especially its legal aspects, will have to be a dynamic process, always adapting to new situations. (It's unfortunate that the best candidate for that job is AI; how's that for irony?) After chasing through the links cited in the openai piece, it's obvious this is a can of worms without a Gordian-knot solution. As an afterthought, it does guarantee this thread a long life, though.
Jarvis323 said:
Would an approval process need to be redone with each update to a deep learning system? Does it depend on context? For medical diagnosis? For a lawn mower? For a self driving car? For a search engine? For a digital assistant? For a weapon? For a stock market trading system?
Very good questions; this is another facet of the can-of-worms issue. I'm more qualified as a spectator in this thread than anything else, so I'm better off following the conversation than giving serious answers.
 
  • #43
Jarvis323 said:
Would an approval process need to be redone with each update to a deep learning system?
I hadn’t seen this comment before.

Just so that you can understand my perspective, let me explain my relevant background. I currently work for a global medical device manufacturer, one that has one of the largest number of patents in applying AI technologies to medical products. I am not currently in one of our R&D groups, but I was for 14 years and I worked on more than one AI-related project.

Indeed, the regulatory process needs to be redone with each update to any of our AI systems. Our AI technologies are regulated the same as all of our other technologies. As a result our development is slow, incremental, careful, and thoroughly tested, as it should be, for AI just like for any other technology we incorporate into our medical devices.
 
  • #44
Dale said:
I hadn’t seen this comment before.

Just so that you can understand my perspective, let me explain my relevant background. I currently work for a global medical device manufacturer, one that has one of the largest number of patents in applying AI technologies to medical products. I am not currently in one of our R&D groups, but I was for 14 years and I worked on more than one AI-related project.

Indeed, the regulatory process needs to be redone with each update to any of our AI systems. Our AI technologies are regulated the same as all of our other technologies. As a result our development is slow, incremental, careful, and thoroughly tested, as it should be, for AI just like for any other technology we incorporate into our medical devices.
I think that in the case of medical devices, and also self-driving cars, the issue is somewhat straightforward because the risks are taken seriously and there are existing regulatory frameworks which can mostly be relied on already. But other sectors can be much fuzzier.
 
  • #45
Jarvis323 said:
It doesn't really matter if we classify the decision as an AI decision, what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made any decision which can be blamed for the event. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to? Or you can chalk it up to bad luck.
Yes, that's how liability works; no, it can't be chalked up to bad luck. And yes, the flaw in the design is obvious: the AI made a bad decision, therefore the AI is faulty. That's the entire point of AI: it makes its own decisions.

I agree with Dale here, which is why I think this entire discussion is much ado about nothing. At some point maybe AI will be granted personhood. Until then, there is no implication for product liability law in AI.
 
  • #46
anorlunda said:
I think the requirement is there because of potential race bias
That's kind of the point.

Say you are a mortgage lender. You can't legally use race in the decision process - but from a purely dollar basis, you'd like to. Race and default rates are correlated, but one can't legally use this correlation. So, if you're a zillion-dollar bank, what do you do?

You build a model where the only inputs are perfectly legal, and you don't try to infer race directly - but if the model comes up with an output that has a race-based correlation, well, what's a banker to do? He has plausible deniability.

In our privacy-free world, getting this information is easier than it should be: phone records, groceries, magazine subscriptions, other purchases...
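A minimal sketch of that "plausible deniability" mechanism on synthetic data (every feature name and number below is invented for illustration): the protected attribute drives default risk in the toy population, the model never sees it, yet approval rates still split along it because a perfectly "legal" input carries it in.

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy population: the protected attribute is never given to the model, but it
# drives default risk (the correlation described above) and leaks into a
# "legal" input such as a neighborhood-derived score.
group = rng.integers(0, 2, size=n)
neighborhood = 0.8 * group + rng.normal(scale=0.5, size=n)   # legal proxy input
income = rng.normal(loc=55, scale=10, size=n)                # independent of group

default_prob = 1 / (1 + np.exp(-(1.5 * group - 0.05 * (income - 55))))
defaulted = (rng.random(n) < default_prob).astype(float)

# Plain logistic regression trained only on the "perfectly legal" inputs.
X = np.column_stack([np.ones(n), neighborhood, (income - 55) / 10])
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - defaulted) / n        # gradient step on log-loss

approved = 1 / (1 + np.exp(-X @ w)) < 0.6      # approve low predicted risk
for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.2f}")
[/CODE]

Race never appears in the inputs, yet the printed approval rates split along the protected attribute; the banker in the scenario can truthfully say the model only looked at "legal" variables.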
 
  • #47
russ_watters said:
Yes that's how liability works/no it can't be chalked-up to bad luck. And yes, the flaw in the design is obvious: the AI made a bad decision therefore the AI is faulty. That's the entire point of AI: it makes its own decisions.

I agree with Dale here, which is why I think this entire discussion is much ado about nothing. At some point maybe AI will be granted personhood. Until then there is no implication for product liability law in AI.

At the least, AI being unexplainable makes product liability law very messy and complex, because people will try to shift blame, and to get to the bottom of who is to blame, people will try to determine what went wrong. Such cases could then be very costly and drawn out. And that can strain the legal system, as well as make for situations where one side is forced to forfeit because they can't afford to proceed.
 
  • #48
Vanadium 50 said:
That's kind of the point.

Say you are a mortgage lender. You can't legally use race in the decision process - but from a purely dollar basis, you'd like to.
What if the mortgage lender doesn't want to use race in the decision process but the AI decides it's a good metric anyway?
 
  • #49
Jarvis323 said:
At the least, AI being unexplainable makes product liability law very messy and complex. Because, people will try to shift blame, and to get to the bottom of who is to blame, people will try to determine what went wrong.
I don't think that's true. You don't have to unravel every bit of the decision-making process in order to judge whether the decision was faulty. Why the AI made its wrong decision doesn't matter. Liability doesn't hinge on whether the decision was accidental or on purpose; it just depends on whether the decision was good or bad.
 
  • #50
  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability

Above are the principles guiding their regulation proposal.

https://www.gov.uk/government/news/...tion-and-boost-public-trust-in-the-technology

The first two principles require testing/approval/certification. Sure, this is already done in some cases, where a self-learning AI replaces a person or operates a device which used to be operated by an explicit algorithm. But not in all.

The third is arguably impossible in an absolute sense, but it can be striven for. Testing requirements can help here as well.

The fourth, fairness, we have discussed. It can also be helped by testing.

The fifth and sixth (identify a legal person responsible, and clarify routes to redress or contestability) are crucial if you want to avoid messy and costly legal battles.
 