Who is responsible for the software when AI takes over programming?

  • Thread starter: symbolipoint
Summary
The discussion centers on the accountability for software created by AI, questioning who is responsible for issues like bugs and malfunctions. It emphasizes that companies remain liable for products that incorporate AI, similar to traditional software development. Users do not need technical expertise to engage with consumer products, shifting the responsibility to the manufacturers. The conversation also touches on open-source software, where responsibility is decentralized, allowing users to choose versions and improvements. Ultimately, the accountability for AI-generated software still lies with the companies that produce and market it.
  • #31
harborsparrow said:
You can't tell me that human eyes are going to inspect all this AI-generated code
With the so-called 'increased effectiveness' of AI-supported coding, the humans assigned will need to watch over a lot more, and a lot less familiar, code than ever before.
 
  • Agree
Likes harborsparrow and DaveC426913
  • #32
Rive said:
With the so-called 'increased effectiveness' of AI-supported coding, the humans assigned will need to watch over a lot more, and a lot less familiar, code than ever before.
If you've ever seen code from MATLAB's Simulink autocode generation, you know that it is not the kind of code you would want to review. For one thing, all the variable names are automatically generated and obscure.
That being said, with diagram standards that avoid problem areas, massive programs can be generated that are very reliable -- far better than human programmers could do.
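To make the naming point concrete, here is a small illustrative sketch (in Python, purely for readability; Simulink actually emits C, and the `rtb_`/`rtu_` names below are invented here to mimic the style of autocoder output, not taken from any real generated file). Both functions compute the same thing; only the second reads like machine-generated code.

```python
def speed_error_handwritten(setpoint: float, measured: float, gain: float) -> float:
    """Hand-written style: a readable proportional control output."""
    return gain * (setpoint - measured)


def rtb_Sum_f3(rtu_In1: float, rtu_In2: float, rtp_P_Gain: float) -> float:
    # Machine-generated style: the same computation with opaque,
    # auto-derived identifiers. Reviewing thousands of lines of this
    # is a very different task from reviewing the version above.
    rtb_Add_hx = rtu_In1 - rtu_In2
    return rtp_P_Gain * rtb_Add_hx
```

The behavior is identical; the cost of human review is not, which is why reliability has to come from the diagram standards and the generator rather than from line-by-line inspection.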
 
  • Wow
Likes symbolipoint
  • #33
FactChecker said:
MATLAB's Simulink autocode generation
Nice, but most programmers I know (even those blessed with AI-supportive management) deal with far less structured, organized, and proper code ('far less' being a very polite description of the usual overgrown mess of legacy code mixed with haphazard add-ons from last-minute customer requests and 'seemed like a good idea at the time' parts).

Somehow I doubt that MATLAB was involved in that Jeep thing mentioned earlier in this topic, for example...
 
  • Like
Likes harborsparrow
  • #35
This may seem off topic, but it might be related too, since we're talking about the phenomenon of software development managers possibly skimping on best practices to save money.

Remember that Toyota fiasco where cars would accelerate no matter what the driver did? The very description of the phenomenon screamed 'software race condition' to anyone who has ever written real-time software. Our 2015 Honda FIT has a 'feature' that is actually a bug: the turn signal, once activated, blinks at least three times even if you hit it by accident and immediately turn it off. In heavy traffic on a freeway, that is really not funny.
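For readers who haven't written real-time software, here is a minimal sketch of the kind of race condition being alluded to. This does not reproduce Toyota's actual defect (which was never publicly confirmed as a race); it is the generic read-modify-write race, with the interleaving made explicit so the lost update is deterministic rather than timing-dependent.

```python
def run_interleaved(shared: int, schedule: list) -> int:
    """Run two tasks, A and B, that each try to add 1 to `shared`.

    Each task does two steps: (1) read the shared value into a local,
    (2) write back local + 1. `schedule` says which task runs at each
    step, e.g. ["A", "B", "A", "B"].
    """
    local = {"A": 0, "B": 0}
    phase = {"A": 0, "B": 0}
    for task in schedule:
        if phase[task] == 0:       # step 1: read the shared value
            local[task] = shared
            phase[task] = 1
        elif phase[task] == 1:     # step 2: write back read value + 1
            shared = local[task] + 1
            phase[task] = 2
    return shared

# Safe interleaving: A finishes before B starts -> both increments land.
assert run_interleaved(0, ["A", "A", "B", "B"]) == 2
# Racy interleaving: both read 0 before either writes -> one update is lost.
assert run_interleaved(0, ["A", "B", "A", "B"]) == 1
```

The nasty part is that the racy interleaving only happens under particular timing, which is exactly why such bugs survive ordinary testing in real-time systems.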

Anyway, about the Toyota recall, Google now says: "A brake override system was installed in many vehicles. This system ensures that the brake pedal takes precedence over the accelerator pedal...." OF COURSE the brake should override the accelerator. To the public, Toyota continued to claim that it was a sticking physical accelerator pedal, but I think that was a blatant lie to avoid admitting that the problem was software. Because if it was software, the drivers could not be blamed.
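The brake-override policy quoted above fits in a few lines. This is my own sketch of the precedence rule, not Toyota's code; the function name and percentage convention are invented for illustration.

```python
def throttle_command(brake_pressed: bool, accel_pct: float) -> float:
    """Commanded throttle as a percentage; the brake always wins.

    If the brake is pressed, the throttle is forced to zero regardless
    of what the accelerator input says. Otherwise the accelerator input
    is clamped to a sane 0-100 range.
    """
    if brake_pressed:
        return 0.0
    return max(0.0, min(accel_pct, 100.0))
```

The point of the post stands: this precedence rule is so obvious that its absence in early vehicles is hard to explain as anything but a requirements or process failure.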

End of rant.
 
  • Wow
Likes symbolipoint
  • #36
Determining requirements: Any systems engineer looking at how the judicial system collects testimony would have to be very dismayed. Interviewing "stakeholders" to determine requirements takes skill, fluency, and the ability to be very flexible with how language is used - and in systems engineering, that's a situation where both the stakeholder and the engineer are trying to be cooperative. Of course, in both cases, testimony or requirements are written down, restated to and by the author, and artifacts, graphics, etc. are produced to assist in the testimony/communication.

Also, when the system involves software development or integration, the system requirements will include software quality and development requirements.

An engineer can often dictate the requirements to an AI. So in special cases, "determining requirements" can be placed on AI.

System and Testing Design: In software engineering, the requirements are used to generate the system design, the test plan, and the test procedures. Per the development requirements, those documents should normally be peer and/or stakeholder reviewed - so long as the "peer" is not AI.

In any engineering effort, you want to avoid catastrophe. If all you want is a tic-tac-toe game, you may decide to leave the entire job to AI. After all, the consequences are trivial - perhaps you end up winning too often, or the game fails to recognize your wins.

But if the consequences are not trivial, then you need to consider that, almost by definition, AI lets you code without considering exactly how the results will be generated. The engineer needs to consider how to handle that. Generally speaking, this means examining and understanding every line of AI-generated code.
 
  • Like
Likes symbolipoint
  • #37
FactChecker said:
IMHO, peer review is not very effective in discovering bugs in software.
Peer code reviews are highly effective IF there is no negative consequence for anyone when faults are detected. The problem is, the theoretical software engineering "experts" decided to count everything, including faults found, and then managers (egged on by HR) decided to weaponize the "scores" from peer reviews to determine who was better at coding. Once that happened, by tacit agreement, none of us ever again turned in a report showing any faults in anyone's software (because "do unto others as you would have them do unto you").

If there are going to be "reports" of faults found in peer reviews, those reports must be anonymized. They were, at first, and back then we found all kinds of things in any given review. It was painful to go through, but also extremely helpful to the programmers, and it improved the resulting software. IMHO.
 
  • Like
Likes nsaspook and berkeman
  • #38
harborsparrow said:
Peer code reviews are highly effective IF there is no negative consequence for anyone when faults are detected.
I don't think that is a major factor. The problem is that peer review only catches the obvious bugs. That still leaves the need for extensive testing.
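A toy example of this point, invented for illustration (neither function is from the thread): a bug that reads plausibly in review because the code looks symmetric, yet falls to the very first boundary test.

```python
def clamp_buggy(x, lo, hi):
    # Looks "obviously right" at a glance in a review diff, but the
    # roles of min and max are swapped.
    return min(lo, max(x, hi))


def clamp_correct(x, lo, hi):
    # Restrict x to the interval [lo, hi].
    return max(lo, min(x, hi))
```

A reviewer skimming `clamp_buggy` sees a min, a max, and the right variable names; only executing it (e.g. clamping 5 into [0, 10], which wrongly returns 0) exposes the fault, which is why review and testing are complements rather than substitutes.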
 
