Regulating AI: What's the Definition?

In summary, the UK government has proposed new regulations for artificial intelligence that would require developers and users to identify a legal person responsible for any problems caused by AI. The proposed regime will be operated by existing regulators rather than a dedicated central body. The proposals were published as the Data Protection and Digital Information Bill was introduced to parliament. There is debate about the definition of AI, its transparency and explainability, and who should be held responsible for AI-related incidents. Examples, such as a chess-playing robot breaking a child's finger, highlight the need for regulation, and there are concerns about liability when AI is involved in accidents. However, some argue that anthropomorphizing AI is not helpful and that manufacturers should not be held liable for decisions an AI makes on its own.
  • #1
anorlunda
https://www.lawgazette.co.uk/law/ar...es-to-require-human-liability/5113150.article
Artificial intelligence systems will have to identify a legal person to be held responsible for any problems under proposals for regulating AI unveiled by the government today.

The proposed 'pro innovation' regime will be operated by existing regulators rather than a dedicated central body along the lines of that being created by the EU, the government said.

The proposals were published as the Data Protection and Digital Information Bill, which sets out an independent data protection regime, is introduced to parliament. The measure will be debated after the summer recess.

It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.

[Attached image: James Watt's flyball governor]

On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?
 
  • #2
Yeah, that whole thing is a can of worms, and politicians are the last people we want making decisions about it (but unfortunately, they are the people who WILL be making decisions about it).
 
  • #3
From the core principles section, you could interpret this to mean nothing more than that the company has to have a point of contact (POC) for each algorithm.

The core principles of AI regulation proposed today will require developers and users to:
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability
 
  • #4
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.


On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?

As a philosophical question it might be tough. But it's not too much trouble to come up with a legal definition for a specific purpose.

A harder task is this:

Make sure that AI is appropriately transparent and explainable

Depending on your definition of explainable, explaining the AI systems of interest is currently essentially impossible. Thus 'explainable' would need to be defined in a very particular way as well, and it could easily be that the AI systems of interest, and artificial general intelligence in general, will never meet the criteria.

As an example, suppose an AI system appears to deliberately crash an airplane. Then one has to try to figure out why it did that, which can be impossible. Did someone somehow cause it to happen on purpose? Did it make a mistake? What went into the decision and who is responsible? Because the decision is the result of billions of parameters in a non-linear mathematical model, which obtained their values through a chaotic learning process from exabytes of data, it is not easy to find someone to blame, because nobody could have ever predicted it would happen.
 
  • #6
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI. It figured out by itself how to move the throttle, and it displaced human workers who could have done the same thing manually.


On an abstract level, if we have a black box with which we communicate, what is the test we can use to prove that the content of the box is an AI?

If an AI is treated like any other machine, where the owner/operator is responsible for any damages, why does the definition matter? If Google's AI really did achieve sentience (which I doubt, but there was that recent news story about a fired engineer who claimed this), is the company any more or less liable for anything it does?
 
  • #7
jedishrfu said:
Here's an example where AI regulation would help. A chess-playing robot broke a child's finger. The child was too hasty in trying to make his next move and the robot grabbed his finger.

https://www.techspot.com/news/95405-watch-chess-playing-robot-grabs-child-opponent-finger.html

Why does more regulation help? The law would treat it as if the child was injured playing with a toy or a power tool: either the owner or the manufacturer is liable, depending on the context.
 
  • #8
If it is legal for a person to be racist, why should it be illegal for a machine?

Gaa. It makes no sense to anthropomorphize a machine, no matter what the details. Maybe I should blame Isaac Asimov for conditioning us to think of smart machines being like people. That's fun in a story, but bunk in real life.
 
  • #10
A gray legal issue exists if your car hits something while driving on Autopilot. I'm not sure why any manufacturer would want to assume the potential liability for the wrecks of its vehicles.

Looks like Tesla found a novel solution:

On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.

https://abovethelaw.com/2022/06/whe...slas-artificial-intelligence-and-human-error/
 
  • #11
This, perhaps unsurprisingly, avoids an actual issue in favor of a clickbaity non-issue.

A human brain has 100 billion neurons. No supercomputer has even 10 million cores. More to the point, the human brain has a quadrillion synapses, and the number of interconnects tops out in the tens of thousands.

You might be able to get a flatworm's level of intelligence onto today's hardware, maybe even a jellyfish's, but a single ant is out of the question.

You can, of course, simulate things. I can write a great AI to play chess, but I can't discuss "why did Bobby Fischer go nutso" with it afterwards. But we have been doing this for over 50 years; ELIZA is not really an unusually bad therapist.

The more immediate issue has to do with responsibility. When a self-driving car hits someone, who is responsible? The owner? The programmer? The company that hired the programmer? Does it matter if it hit one person to avoid hitting several? Does it matter if the number of highway deaths goes down despite a few well-publicized accidents?

But that's real work. If there's one thing legislators hate more than taking a stand on a real issue, it's doing actual work.
 
  • #12
Vanadium 50 said:
A human brain has 100 billion neurons. No supercomputer has even 10 million cores.

The largest version of GPT-3 has 175 billion parameters, and its training process involved ##3.64\times10^{23}## FLOPs.
 
  • #13
BWV said:
If an AI is treated like any other machine, where the owner/operator is responsible for any damages, why does the definition matter? If Google's AI really did achieve sentience (which I doubt, but there was that recent news story about a fired engineer who claimed this), is the company any more or less liable for anything it does?
I agree. I don't see a reason why the question matters until/unless a narrow/hard AI gets legal autonomy. We're a long way from that. Until then, it's just a machine with an owner.

Also, I don't think Elon's odds in court are going to be very good, using the public as guinea pigs for his R&D project (he calls it a beta test, but it's really not even that yet). He should be liable for his product's dangers.
 
  • #15
I think we could probably look at the AI liability issue in a similar way to liability for harm done by pharmaceuticals. In both cases we cannot completely determine the side effects, risks, complications, etc. analytically. Before a new drug is allowed to be released for use on the public, it has to be tested rigorously and approved, and the same should be true for AI systems with potential for large scale impact.
 
  • #16
Jarvis323 said:
Before a new drug is allowed to be released for use on the public, it has to be tested rigorously, and the same should be true for AI systems with potential for large scale impact.
That would be nice but who do you think is going to make that happen?
 
  • #17
phinds said:
That would be nice but who do you think is going to make that happen?
Maybe it can be automated?
 
  • #18
Jarvis323 said:
The largest version of GPT-3 has 175 billion artificial neurons
That's trading space for time. I can have a CPU simulate N neurons, at a cost of 1/N in speed; N has to be at least 10,000. But that just makes the interconnect problem even tougher, because I am putting 10,000 times as much traffic onto the interconnects. And as I said, that's more important.

Jarvis323 said:
training process involved 3 x 10^24 FLOPS.
At, say, 500 petaflops - the fastest available - that's 70 days. Human beings learn on the order of seconds.
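A minimal sketch of that arithmetic, assuming the 3 x 10^24 figure quoted above and a machine that sustains 500 petaFLOP/s for the entire run with no communication or I/O overhead (both of those are assumptions, not measured values):

```python
# Back-of-the-envelope training-time estimate for the figures above.
# Assumptions: a sustained 500 petaFLOP/s and zero overhead.

total_train_flops = 3e24        # total training operations, as quoted above
sustained_flops_per_s = 500e15  # 500 petaFLOP/s

seconds = total_train_flops / sustained_flops_per_s
print(f"{seconds / 86_400:.0f} days")   # ~69 days, i.e. the "70 days" above
```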
 
  • #19
While I was working on a data-mining project for an insurance company, the recurring issue was racial bias, so any data-mining (DM) models were replaced with SQL so that the company could demonstrate to regulators that there was no bias in how it sold its services.

The DM model identified a group of customers, their attributes were extracted, and then an SQL query over those attributes was crafted to approximate what the DM model had found.
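As a rough illustration of that workflow (a sketch only; the table name, attribute names, and thresholds below are hypothetical, not from the actual project), the mined segment is re-expressed as an explicit SQL predicate that a regulator can read directly:

```python
# Hypothetical sketch: translate a mined customer segment into explicit SQL.
# Attribute names and thresholds are invented for illustration only.

# Simple, human-readable thresholds that the black-box DM model was found
# to rely on for this segment.
segment_rules = [
    ("years_as_customer", ">=", 3),
    ("prior_claims", "<=", 1),
    ("vehicle_age_years", "<", 10),
]

def rules_to_sql(table: str, rules) -> str:
    """Build a transparent SQL query approximating the mined segment."""
    where = " AND ".join(f"{col} {op} {val}" for col, op, val in rules)
    return f"SELECT customer_id FROM {table} WHERE {where};"

print(rules_to_sql("customers", segment_rules))
# e.g. SELECT customer_id FROM customers WHERE years_as_customer >= 3 AND prior_claims <= 1 AND vehicle_age_years < 10;
```

The point of the exercise is auditability: every attribute that drives the selection is visible in the query, unlike in the original model.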
 
  • #20
Vanadium 50 said:
That's trading space for time. I can have a CPU simulate N neurons, at a cost of 1/N in speed; N has to be at least 10,000. But that just makes the interconnect problem even tougher, because I am putting 10,000 times as much traffic onto the interconnects. And as I said, that's more important. At, say, 500 petaflops - the fastest available - that's 70 days. Human beings learn on the order of seconds.
I think the main issue here is the difference between training time and operating time. Training time for these models is huge. GPT-3 took about 355 years' worth of Volta V100 GPU time to train. 364,000,000,000,000,000,000,000 is a lot of FLOPs. But that is a one-time preprocessing cost, which maps to a little under 5 million dollars' worth of compute time. Afterward, when it is in operation, it is much faster. It can run in real time without a large supercomputer.

EDIT: I mistakenly said GPT-3 has 175 billion neurons; actually, that is the number of parameters. And it is true that a real neuron is much more complex than an artificial one.
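A rough sanity check of those figures; the effective per-GPU throughput and the price per GPU-hour below are assumed round numbers chosen to be plausible, not values from the quoted sources:

```python
# Rough reconstruction of the "355 V100-years, ~$5M" estimate above.
# ASSUMPTIONS (not from the quoted sources): effective V100 throughput and
# the cloud GPU-hour price are plausible round numbers, not measured values.

total_flops = 3.64e23          # total training operations for GPT-3, as quoted
v100_effective = 32e12         # assumed sustained FLOP/s per V100 (mixed precision)
price_per_gpu_hour = 1.50      # assumed $/GPU-hour

gpu_seconds = total_flops / v100_effective
gpu_hours = gpu_seconds / 3600
gpu_years = gpu_hours / (24 * 365)

print(f"~{gpu_years:.0f} GPU-years, ~${gpu_hours * price_per_gpu_hour / 1e6:.1f}M")
# -> roughly 360 GPU-years and a few million dollars, in line with the post
```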
 
  • #21
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition.
I don't think 'one size fits all' will do for this issue. The fact that two main kinds of example tend to come up in such threads tells a tale:
- industrial side: autopilot and similar things, where one standardized product can be the basis of evaluation
- personal side: where interaction with individuals is the main point, so evaluation should be based on the actual usage/instance of an AI

GPT is an interesting example, but what would make me even more interested is the counterpart AI for GPT (I hope it's coming soon): one able to filter gibberish, sorting the pearls from the trash, rather than just creating more of it... I wonder how that would perform against social media bots - and politicians o0)
 
  • #22
How to compensate victims when technology fails? I think it should be built into the product price. Here are two examples. Let's visualize both as being provided by government, so that the tort system and punishment of evil corporations are removed from consideration.
  1. Automated vehicle fans claim that auto accident fatalities can be reduced from 50K to 10K per year. They are very far from proving that, but that is their aspiration. California says they need to triple capacity on freeways with no new roads. California would make it illegal to use human drivers on the freeways. Total deaths are reduced, but the remaining deaths that do occur are by definition caused by flaws or bugs or weaknesses in the self-driving vehicles. I propose a $50K surcharge on new vehicles to compensate the 10K victims. So if we sell 10^6 new cars per year, that creates a fund of $5x10^10, or $5 million per fatal victim.
  2. A pill, taken yearly, cures and prevents all cancers, but 0.1% of people taking the pill die immediately. 300 million people take the pill (40 million say no); 600K cancer deaths per year are avoided and 1.8 million non-fatal cancers are avoided, but 300K people are killed. The pills cost $10 each, but a surcharge of $500 per pill provides a fund to compensate each victim by $500K. Note that even a short delay for more testing and improvement would allow more people to die from cancer than could be saved from the toxicity.
One can argue for larger or smaller per-victim amounts, or for more testing and improvement of the products to reduce deaths, but those do nothing to change the moral or legal issues. Make up any numbers you want; the moral issues remain invariant (the arithmetic is sketched below).
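A sketch of that arithmetic, using only the numbers stated in the two examples above (bookkeeping only, not a policy endorsement):

```python
# Per-victim compensation implied by the two hypothetical examples above.

# Example 1: self-driving-only freeways
surcharge_per_car = 50_000         # $ surcharge on each new vehicle
cars_sold_per_year = 1_000_000     # 10^6 new cars per year
fatalities_per_year = 10_000       # remaining deaths attributed to the AVs
fund_1 = surcharge_per_car * cars_sold_per_year            # $5e10
print(f"Example 1: ${fund_1 / fatalities_per_year:,.0f} per victim")   # $5,000,000

# Example 2: yearly anti-cancer pill
surcharge_per_pill = 500           # $ surcharge per pill
pills_per_year = 300_000_000       # people taking the pill each year
deaths_from_pill = 300_000         # 0.1% immediate fatality rate
fund_2 = surcharge_per_pill * pills_per_year               # $1.5e11
print(f"Example 2: ${fund_2 / deaths_from_pill:,.0f} per victim")      # $500,000
```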
 
  • #23
I've been looking over the situation in the US. It looks like they've recently established an advisory committee, which has provided a guidance document on AI regulation. It appears, though, that a primary concern is to ensure that regulation doesn't stifle innovation, and to make sure that the US leads the world in technological development.

The mission of the National AI Initiative is to ensure continued U.S. leadership in AI research and development, lead the world in the development and use of trustworthy AI in the public and private sectors, and prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society.

https://www.ai.gov/naiac/
https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf

The advisory board looks like a great team, but it is largely made up of industry executives, and the recommendations are basically to avoid regulation that hampers innovation and growth.

Of course tech companies want to advocate their own interests, but in addition, the situation is framed within the context of a so-called "AI arms race" between the US and China. The term applies to more than just weapons of war; it also covers national security concerns and economic competition. The economic interests of companies, and the goal of winning the race against China, compete with ethical concerns and risk management. Some think this is a recipe for disaster.
 
  • #25
anorlunda said:
It seems to me that one needs a highly specific definition of what AI is before enforcing such a law. I prefer a very broad definition. I would include James Watt's flyball governor from 1788 as an AI.
Does it matter in this context? If a legal person (e.g. you or your company) makes an AI, whether it is a deep learning algorithm or a flyball governor, then that person is held legally responsible for any negligence in its design. In this sense an AI is simply held to the same standard as any other product.
 
  • #26
Dale said:
Does it matter in this context?
I think it matters a lot. Let's say the government prosecutes me.

Prosecutor: "Your honor, defendant violated our AI regulations."
Defendant: "Prove that my product is an AI."

A narrow legal definition rapidly becomes outdated. A broad definition ropes in too many other things. The difficulty of proving that the defendant has an AI depends critically on a very well-written definition of AI. From what I've seen so far, they don't have one.
 
  • #27
anorlunda said:
Prosecutor: "Your honor, defendant violated our AI regulations."
Defendant: "Prove that my product is an AI."
As a legal strategy, that is unlikely to be beneficial to the manufacturer of an AI that has caused harm to someone.

First, expert witness testimony could establish that it is an AI. This is exceptionally common.

Second, the liability issue with AI is that an AI potentially made its own decision and that the manufacturer should therefore not be liable for the AI's decision. Arguing that the system is not an AI removes the "it made its own decision" argument anyway. It becomes simply a manufactured device doing what the manufacturer designed it to do. Thus the manufacturer has liability for any harm caused by its design.

So even if the defense wins this argument, they still bear legal responsibility for the harm their system caused.
 
  • #28
anorlunda said:
How to compensate victims when technology fails? I think it should be built into the product price. Here are two examples. Let's visualize both as being provided by government, so that the tort system and punishment of evil corporations are removed from consideration.
  1. Automated vehicle fans claim that auto accident fatalities can be reduced from 50K to 10K per year. They are very far from proving that, but that is their aspiration. California says they need to triple capacity on freeways with no new roads. California would make it illegal to use human drivers on the freeways. Total deaths are reduced, but the remaining deaths that do occur are by definition caused by flaws or bugs or weaknesses in the self-driving vehicles. I propose a $50K surcharge on new vehicles to compensate the 10K victims. So if we sell 10^6 new cars per year, that creates a fund of $5x10^10, or $5 million per fatal victim.
We already charge people money to compensate for the damage their cars cause; it's called insurance. If AI cars are safer, insurance companies can conclude that on their own and charge less, encouraging people to switch. Why are you making this more complicated than it already is? We literally need to do nothing.

anorlunda said:
  2. A pill, taken yearly, cures and prevents all cancers, but 0.1% of people taking the pill die immediately. 300 million people take the pill (40 million say no); 600K cancer deaths per year are avoided and 1.8 million non-fatal cancers are avoided, but 300K people are killed. The pills cost $10 each, but a surcharge of $500 per pill provides a fund to compensate each victim by $500K. Note that even a short delay for more testing and improvement would allow more people to die from cancer than could be saved from the toxicity.

Let's take a step back here. Would it be better or worse for everyone if we all paid a yearly tax that compensated everyone who died of cancer with $500,000? If the answer is worse, then you're stapling a huge anchor to what would otherwise be an amazing medical invention. If the answer is better, why are we waiting for this pill before advocating for it?
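To put a rough number on that question, using only figures already in this thread (600K cancer deaths per year, $500K per death, and the roughly 340 million people implied by the pill example), a back-of-the-envelope sketch:

```python
# Rough cost of compensating every cancer death at $500K, using the
# figures already used in this thread (not official statistics).

cancer_deaths_per_year = 600_000
compensation_per_death = 500_000          # $
population = 340_000_000                  # 300M pill-takers + 40M decliners

annual_cost = cancer_deaths_per_year * compensation_per_death   # $3e11
print(f"${annual_cost / 1e9:.0f}B per year, about ${annual_cost / population:,.0f} per person")
# -> $300B per year, about $882 per person
```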
 
  • #29
Dale said:
Does it matter in this context? If a legal person (e.g. you or your company) makes an AI, whether it is a deep learning algorithm or a flyball governor
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year-old, drive my car"?

* Or a dog, or a flatworm
 
  • #30
https://deepai.org/publication/a-legal-definition-of-ai
I'm sure this isn't the last word on the subject; however, it is an impressive attempt at addressing the thread's title question.

The discussion section is very insightful; you can enjoy that at your leisure. Here I'll cut to the chase and just post the recommendation from that discussion.

[Recommendation.
Policy makers should not use the term "artificial intelligence" for regulatory purposes. There is no definition of AI which meets the requirements for legal definitions. Instead, policy makers should adopt a risk-based approach: (1) they should decide which specific risk they want to address, (2) identify which property of the system is responsible for that risk and (3) precisely define that property. In other words, the starting point should be the underlying risk, not the term AI.]
 
  • #31
Office_Shredder said:
If AI cars are safer, insurance companies can conclude that on their own and charge less, encouraging people to switch. Why are you making this more complicated than it already is? We literally need to do nothing.
I agree. In my opinion the issue is that most software engineers and software engineering companies, unlike all other engineers, are not used to thinking about and prioritizing safety. All we need to do is hold them to the same product safety standards as every other engineer and engineering company.
 
  • #32
Oldman too said:
https://deepai.org/publication/a-legal-definition-of-ai

[Recommendation.
Policy makers should not use the term "artificial intelligence" for regulatory purposes. There is no definition of AI which meets the requirements for legal definitions. Instead, policy makers should adopt a risk-based approach: (1) they should decide which specific risk they want to address, (2) identify which property of the system is responsible for that risk and (3) precisely define that property. In other words, the starting point should be the underlying risk, not the term AI.]

I agree that the term AI is hard to pin down as a catch-all for the systems we need to regulate. Furthermore, the national initiative is to accelerate the integration of AI into all private and public sectors. That is a broad set of cases, each with different risks and contexts, and regulatory changes that pertain to a particular context need to be integrated into the existing regulatory frameworks. In some cases, there is no established regulatory framework, since the AI technology is operating in new territory.

The current legal challenge, in my opinion, is that liability depends on knowledge and reason, yet deep learning is not explainable. This can make identifying what went wrong, and the persons responsible, difficult or impossible. In order to hold someone responsible, they would need to have either broken the law or at least had access to a quantitative analysis of the risks. In many cases, a quantitative analysis of the risks is not required, and with no such requirement there can be a disincentive to even do a risk assessment, because knowing the risk makes you more liable. The solution seems obvious: require rigorous testing to determine the risks, as is already done in fields where risk can only be determined through testing.

The problem is that rigorous testing requirements can be argued to stifle innovation. AI systems can evolve constantly, even on the fly. Would an approval process need to be redone with each update to a deep-learning system? Does it depend on context? For medical diagnosis? For a lawn mower? For a self-driving car? For a search engine? For a digital assistant? For a weapon? For a stock-market trading system?
 
  • #33
Vanadium 50 said:
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year old."

* Or a dog, or a flatworm
It is interesting. People have this insanely inflated idea of what AI is capable of. We take a teenage human of average intelligence, with average sensory and motor skills, give them on the order of a hundred hours of experience, and they can drive a car reasonably safely. We give the best AI the best sensors and the best actuators, give it on the order of a million hours of experience, and it still struggles with that same task. The human brain is an amazing learning machine that we don't yet understand and cannot yet artificially replicate or exceed.
 
  • #34
Vanadium 50 said:
Exactly. "I let my five-year-old* drive my car" - how does that differ from "I let an AI, who is less smart than a five-year old."

* Or a dog, or a flatworm
What if it is legal for the AI to drive your car, and the AI is a statistically better/safer driver than a human being?
 
  • #35
Dale said:
It becomes simply a manufactured device doing what the manufacturer designed it to do. Thus the manufacturer has liability for any harm caused by its design.

It doesn't really matter whether we classify the decision as an AI decision; what matters is that the event is not explainable. You can't point to any flaw in the design that resulted in the event, and you can't find anybody who made a decision that can be blamed for it. The best you could do is blame the company for not knowing what the risks were and for deploying a product with unknown risks, but are there regulations requiring them to know? Or you can chalk it up to bad luck.
 
