russ_watters said:
I find it very hard to extract a point from your posts. They are very verbose, with a lot of discussion and not much argument. The problem and thesis should each be able to be expressed in single sentences.
But now I'm gathering that my prior attempt at understanding your position was wrong. It now seems like you are saying that above a certain level of complexity (and autonomy/freewill) it becomes impossible to explain or test the "thought process" of AI. And therefore no liability can be traced back to the vendor. I agree with the first part, but think it is irrelevant to the second.
I'm back to agreeing with
@Dale and find it weird to suggest that something that is already happening isn't going to work. And looking forward, even if there is a complaint about the complexity or application (neither of which I'm clear on from what Dale is saying about his company), I don't think it matters. Right up to the point where legal personhood for AI gets put on the table, nothing changes in terms of liability/responsibility. It's either the manufacturer or the user, until its the AI. "Nobody because we don't know why the AI made that decision so we can't say it's the fault of the manufacturer" isn't an option. It always is the responsibility of the manufacturer. It has to be. And "I made an algorithm I can't explain/test" doesn't detach the liability -- It's pretty much an admission of guilt!
There are two regulatory approaches. The first is you require testing to try to prevent harm before it happens. The second is you don't require testing, but once harm is done, the party that is harmed can sue and then hopefully the company at fault and other related companies adjust their safety assurance policy after having new information to inform their cost benefit analysis. The first perhaps includes the second as well.
For a car accident, which can cause significant material damage, injury, or death, it is a pretty clear case. Those are obvious risks we take seriously, and inevitably there will be safety regulations and testing requirements pertaining to the specifics of designs, including the AI operation, just as a there are with crash testing, airbags, etc. Due to the complexity, accidents will still happen, and people can take the company to court if they have a case. The complexity of the interactions between persons and the AI might complicate that, because blame can be shifted away from the AI and onto another driver, bad luck (e.g. deer in the road), maybe on the condition of the vehicle depending on how it was maintained, etc. Anticipating these complications, imposing testing to prevent them, and deciding in advance what the rules are in specific cases of failure makes a lot of sense. This isn't really fundamentally new regulation, it is just additional regulation pertaining specifically to new things. How we regulate different things already depends on what those things are.
In other cases it is new territory. For example, what if your digital assistant recommends you drink a bottle of bleach to prevent Covid-19? What if your digital assistant calls you a racial slur and says you deserve to die? Or what if it is advocating genocide like Bing had been essentially doing for years? It is legal for a person to espouse Neo-Nazi ideology (in some countries) as we have free speech. But how about a digital assistant that millions or billions of people use? Is there a law against that and should there be? Do we wait until people get around to suing the company to prevent it? Do they have a case? Why wasn't Microsoft sued for Bing's actions and why did those problems persist so long? Did Bing help create a new generation of Neo-Nazi's? How can you tell? Does it matter legally?
And if liability were so straightforward with digital products, then why do almost all of us have to agree to lengthy terms and conditions which we don't read. How many of you that have IPads have read the 500+ page terms and conditions you agreed to?
7.2 YOU EXPRESSLY ACKNOWLEDGE AND AGREE THAT, TO THE EXTENT PERMITTED BY
APPLICABLE LAW, USE OF THE APPLE SOFTWARE AND ANY SERVICES PERFORMED BY OR
ACCESSED THROUGH THE APPLE SOFTWARE IS AT YOUR SOLE RISK AND THAT THE ENTIRE RISK
AS TO SATISFACTORY QUALITY, PERFORMANCE, ACCURACY AND EFFORT IS WITH YOU.
7.3 TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE APPLE SOFTWARE AND
SERVICES ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT
WARRANTY OF ANY KIND ...
https://www.apple.com/legal/sla/docs/iOS15_iPadOS15.pdf
The truth is that this is messy, impacts are high and lasting, and timeliness matters. Waiting for multi-billion dollar companies to be defeated in court over serious and anticipated problems we haven't encountered yet in human civilization can be wasteful and damaging. The problems with Bing could have been fixed in 5 minutes if a simple test was done, or if there were simple regulations requiring them to prevent the problem in the first place. But it lasted years.
In some speculatively plausible cases, as time moves forward, risks could even involve human extinction. Good luck suing Elon Musk if he creates a gray goo meant to terraform Mars that malfunctions and consumes our biosphere.
Then you have even fuzzier cases, like social media algorithms causing widespread depression, and causing spikes in suicide rates. Is anyone to blame? If we know the algorithm is causing it, should we require it be fixed, or is there any liability? Suppose the AI has learned that depression correlates with impulsive purchasing and so it seeks to cause depression on purpose?
How about in the even fuzzier case where social media algorithms act as echo chambers, feed you things that press your buttons, and overall bring out the worst in people to drive engagement. These issues will have major impacts on society and our future but what can we do about it? Maybe we could try building some kind of regulatory framework for mitigating psychological and sociological impacts, but at that point it looks like a path from a scifi story that we've been warned of.
Anyways, my concerns with AI and regulation are mostly about the power of information/disinformation and how it is wielded with AI. These are serious and hard problems. There must be some way to prevent some of the huge negative impacts we can already anticipate.