Who is responsible for the software when AI takes over programming?

  • Thread starter: symbolipoint

Discussion Overview

The discussion revolves around the implications of artificial intelligence taking over programming tasks, particularly focusing on the question of responsibility for software issues that may arise from AI-generated code. Participants explore the potential future of programming, the accountability of companies, and the nature of software ownership and liability.

Discussion Character

  • Debate/contested
  • Conceptual clarification
  • Exploratory

Main Points Raised

  • Some participants express concern about who will be held accountable for software malfunctions caused by AI, suggesting that a real person must be responsible despite the AI's involvement.
  • Others argue that the same entities responsible for software before AI—such as companies and individuals—will remain accountable, emphasizing that using AI does not absolve them of responsibility.
  • A viewpoint is presented that if AI tools are provided by a company with guarantees, that company may assume legal risks associated with the software's performance.
  • Some participants highlight that open source software complicates responsibility, as anyone can contribute to improvements or fixes, making accountability more diffuse.
  • Concerns are raised about the reliability of AI systems, particularly in critical applications, with participants noting that AI may not yet be capable of fully replacing human programmers.
  • There is a discussion about the rapid improvement of AI capabilities, with some suggesting that while current AI may handle simple tasks, its sophistication could increase significantly over time.
  • Participants reflect on the limitations of AI, citing examples of errors in AI responses and the importance of human oversight in software development.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the issue of responsibility for AI-generated software. While some agree that companies remain accountable, others raise questions about the implications of AI's role in programming and the nature of guarantees provided by AI services.

Contextual Notes

There are unresolved questions regarding the legal implications of AI-generated software, the nature of guarantees provided by AI services, and the evolving capabilities of AI in software development.

  • #31
harborsparrow said:
You can't tell me that human eyes are going to inspect all this AI-generated code
With the so-called 'increased effectiveness' of AI-supported coding, the humans assigned will need to watch over a lot more, and a lot less familiar, code than ever before.
 
Reactions: Agree (harborsparrow and DaveC426913)
  • #32
Rive said:
With the so-called 'increased effectiveness' of AI-supported coding, the humans assigned will need to watch over a lot more, and a lot less familiar, code than ever before.
If you've ever seen code from MATLAB's Simulink autocode generation, you know that it is not the kind of code you would want to review. For one thing, all the variable names are automatically generated and obscure.
That being said, with diagram standards that avoid problem areas, massive programs can be generated that are very reliable -- far more reliable than human programmers could produce by hand.
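For a feel of what that looks like, here is a small invented fragment in the general style of machine-emitted C (the identifiers and structure are made up for illustration; this is not actual Simulink output):

```c
/* Invented fragment in the general style of autogenerated C.
   Every identifier is machine-derived and opaque to a reviewer. */
typedef struct {
    double rtb_Sum3_k;      /* intermediate sum from block "Sum3"     */
    double rtDW_UD_DSTATE;  /* unit-delay state carried between steps */
} rtDW_T;

static void model_step(rtDW_T *rtDW, double rtU_In1, double *rtY_Out1)
{
    /* Gain -> Sum -> UnitDelay, flattened from the block diagram */
    rtDW->rtb_Sum3_k = (0.015625 * rtU_In1) + rtDW->rtDW_UD_DSTATE;
    *rtY_Out1 = rtDW->rtb_Sum3_k;
    rtDW->rtDW_UD_DSTATE = rtDW->rtb_Sum3_k;
}
```

The code is correct, but it reads as the diagram's bookkeeping rather than as anything a reviewer would want to trace by eye.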
 
Reactions: Wow (symbolipoint)
  • #33
FactChecker said:
MATLAB's Simulink autocode generation
Nice, but most programmers I know (even those blessed with AI-supportive management) deal with far less structured/organized/proper code, where 'far less' is a very polite description of the usual overgrown mess: legacy code mixed with haphazard add-ons from last-minute customer requests and 'seemed like a good idea at the time' parts.

Somehow I doubt that MATLAB was involved in that Jeep incident mentioned earlier in this topic, for example...
 
Reactions: Like (harborsparrow)
  • #34
  • #35
This may seem off topic, but it might be related too, since we're talking about the phenomenon of software development managers possibly skimping on best practices to save money.

Remember that Toyota fiasco where cars would accelerate no matter what the driver did? The very description of the phenomenon screamed software race condition to anyone who has ever written real-time software. Our 2015 Honda FIT has a feature that is actually a bug: once activated, the turn signal blinks at least three times, even if you hit it by accident and immediately switch it off. In heavy traffic on a freeway, that is really not funny.

Anyway, about the Toyota recall, Google now says: "A brake override system was installed in many vehicles. This system ensures that the brake pedal takes precedence over the accelerator pedal...." OF COURSE the brake should override the accelerator. To the public, Toyota continued to claim that it was a sticking physical accelerator, but I think that was a blatant lie to avoid having to admit the problem was software. Because if it was software, the drivers could not be blamed.
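As an invented illustration of the kind of race condition being described (the actual Toyota code was never fully public, so this is only a sketch): if an interrupt handler and the control loop share pedal state with no protection, the loop can act on stale or half-updated readings.

```c
#include <stdint.h>

/* Hypothetical sketch of a pedal-state race; not real vehicle code.
   The ISR updates two flags while the control loop reads them, and
   nothing keeps the pair consistent or the read up to date. */
volatile uint8_t brake_pressed;
volatile uint8_t accel_pressed;

void pedal_isr(uint8_t brake, uint8_t accel)  /* runs on pedal events */
{
    brake_pressed = brake;
    accel_pressed = accel;  /* the loop can run between these writes */
}

double control_step(double accel_demand)
{
    /* Brake override: any brake input must win over the accelerator.
       Without atomicity, this can still act on a stale snapshot. */
    if (brake_pressed)
        return 0.0;
    return accel_demand;
}
```

The override rule itself is one line, which is part of why a retrofit was feasible; the hard part is guaranteeing the loop always sees a consistent, current snapshot of the pedals.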

End of rant.
 
Reactions: Wow (symbolipoint)
  • #36
Determining requirements: Any systems engineer looking at how the judicial system collects testimony would have to be very dismayed. Interviewing "stakeholders" for the purpose of determining requirements takes skill, fluency, and the ability to be very flexible with how language is used - and in systems engineering, that is a situation where both the stakeholder and the engineer are trying to be cooperative. Of course, in both cases, testimony or requirements are written down, restated to and by the author, and artifacts, graphics, etc. are produced to assist the testimony/communication.

Also, when the system involves software development or integration, the system requirements will include software quality and development requirements.

An engineer can often dictate the requirements to the AI. So "determining requirements" can, in special cases, be handed to AI.

System and Testing Design: In software engineering, the requirements are used to generate the system design, the test plan, and the test procedures. Per the development requirements, those documents should normally be peer and/or stakeholder reviewed - so long as the "peer" is not AI.

In any engineering effort, you want to avoid catastrophe. If all you want is a tic-tac-toe game, you may decide to leave the entire job to AI. After all, the consequences are trivial - perhaps you end up winning too often, or the game fails to recognize your wins.

But if the consequences are not trivial, then you need to consider that, almost by definition, AI allows you to code without considering exactly how the results will be generated. The engineer needs to consider how to handle that. Generally speaking, this means examining and understanding every line of AI-generated code.
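As a hedged example of why that line-by-line reading matters, here is the sort of plausible-looking code an assistant might emit; it passes casual inspection and the easy tests, and is still wrong at the boundary (the function and values are invented for illustration):

```c
#include <assert.h>
#include <limits.h>

/* Midpoint of two ints with lo <= hi. The obvious (lo + hi) / 2
   overflows for large inputs; the rewrite below does not. */
int midpoint(int lo, int hi)
{
    /* return (lo + hi) / 2;   looks right, overflows near INT_MAX */
    return lo + (hi - lo) / 2;
}

int main(void)
{
    assert(midpoint(0, 10) == 5);                          /* easy case */
    assert(midpoint(INT_MAX - 1, INT_MAX) == INT_MAX - 1); /* boundary  */
    return 0;
}
```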
 
Reactions: Like (symbolipoint)
  • #37
FactChecker said:
IMHO, peer review is not very effective in discovering bugs in software.
Peer code reviews are highly effective IF there is no negative consequence for anyone when faults are detected. The problem is that the theoretical software engineering "experts" decided to count everything, including faults found, and then managers (egged on by HR) decided to weaponize the "scores" from peer reviews to determine who was better at coding. Once that happened, by tacit agreement, none of us ever again turned in a report showing any faults in anyone's software (because "do unto others as you would have them do unto you").

If there are going to be "reports" of faults found in peer reviews, those reports must be anonymized. Ours were, at first, and back then we found all kinds of things in any given review. It was painful to go through, but also extremely helpful to the programmers, and it improved the resulting software. IMHO.
 
Reactions: Like (nsaspook and berkeman)
  • #38
harborsparrow said:
Peer code reviews are highly effective IF there is no negative consequence for anyone when faults are detected.
I don't think that is a major factor. The problem is that peer review only catches the obvious bugs. That still leaves the need for extensive testing.
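An invented example of that distinction: the loop below reads naturally in a review ("copy up to n"), and only a test against the actual specification ("copy exactly n bytes") exposes the off-by-one.

```c
#include <assert.h>
#include <string.h>

/* Should copy exactly n bytes from src to dst. The <= reads
   plausibly in a review, but it copies n + 1 bytes. */
static void copy_n(char *dst, const char *src, int n)
{
    for (int i = 0; i <= n; i++)  /* bug: should be i < n */
        dst[i] = src[i];
}

int main(void)
{
    char src[8] = "abcdefg";
    char dst[8] = {0};
    copy_n(dst, src, 3);
    assert(memcmp(dst, "abc", 3) == 0);  /* passes: first 3 bytes ok   */
    assert(dst[3] == 0);                 /* fails: a 4th byte crept in */
    return 0;
}
```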
 
  • #39
harborsparrow said:
Peer code reviews are highly effective IF there is no negative consequence for anyone when faults are detected. The problem is that the theoretical software engineering "experts" decided to count everything, including faults found, and then managers (egged on by HR) decided to weaponize the "scores" from peer reviews to determine who was better at coding. Once that happened, by tacit agreement, none of us ever again turned in a report showing any faults in anyone's software (because "do unto others as you would have them do unto you").

If there are going to be "reports" of faults found in peer reviews, those reports must be anonymized. Ours were, at first, and back then we found all kinds of things in any given review. It was painful to go through, but also extremely helpful to the programmers, and it improved the resulting software. IMHO.
Your experience with code reviews is bizarrely different from mine.
1) If I thought someone was holding back on finding or reporting errors in my code, I would visit their office and work things out with them.
2) Reviewers should be getting a "review package" that includes what is required of the code changes, the code additions and updates, the test-code additions and updates, and the test reports - including a regression test report (along with things like project charge codes).

The first item reviewed is whether that review package is complete. Reviewers can certainly ask for additional test cases if it looks like something is missing.

Supposedly, the author has already written and tested the changes - so there shouldn't be that many bugs.

And from the author's point of view, addressing review issues is the short home stretch of a much longer process: discovering exactly what the requirements are and putting a solution together.

Most coders prefer coding over reviewing. So the politics in the places I have worked runs more like: if you get to my reviews quickly, I will get to your reviews quickly. Another "politic" I try to add is making the review package as useful and easy to understand as possible - hoping that certain practices (such as automatically capturing the full test environment in summary reports) catch on.

I certainly understand why "experts", especially subject-matter experts, are used in a peer-review process - hopefully well before the review begins. But if the author loses ownership of the project to those experts, then it isn't really a "peer" review.

From time to time, I point out that software engineering is not a good sport for those who hate being wrong. If only 99% of your lines of code are perfect, your program will simply not work. At 99.9%, it will probably show signs of life, but is unlikely to be useful. At 99.99%, it will probably be usable - but those last bugs will definitely be noticed - and you are not done.
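Taking those percentages literally for, say, a hypothetical 10,000-line program:

\[
10^4 \times 10^{-2} = 100, \qquad
10^4 \times 10^{-3} = 10, \qquad
10^4 \times 10^{-4} = 1
\]

faulty lines at 99%, 99.9%, and 99.99% respectively; even the last case still ships a bug.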
 
Reactions: Like (harborsparrow and FactChecker)
