Who is responsible for the software when AI takes over programming?

  • Thread starter: symbolipoint
AI Thread Summary
The discussion centers on the accountability for software created by AI, questioning who is responsible for issues like bugs and malfunctions. It emphasizes that companies remain liable for products that incorporate AI, similar to traditional software development. Users do not need technical expertise to engage with consumer products, shifting the responsibility to the manufacturers. The conversation also touches on open-source software, where responsibility is decentralized, allowing users to choose versions and improvements. Ultimately, the accountability for AI-generated software still lies with the companies that produce and market it.
symbolipoint
Homework Helper
Education Advisor
Gold Member
TL;DR Summary
I found an online article saying that Artificial Intelligence will take over the whole cycle of developing software programs. The idea is troubling and I wonder.
I tried a web search for "the loss of programming" and found an article saying that all aspects of writing, developing, and testing software programs will one day all be handled through artificial intelligence. One must wonder, then, who is responsible. WHO is responsible for any problems, bugs, deficiencies, or other malfunctions that the programs make their users endure? Things may go wrong, however the "wrong" happens. AI needs to fix the problems for the users. Is there any way to enforce corrections and compensation for the users? Some real person must be able to take the blame, but no actual person wrote the in-the-future-or-maybe-even-now software program.

https://www.linkedin.com/pulse/deat...-programming-may-soon-shankar-munuswamy-wsllc
Also, here's another article, which I haven't read all the way through yet:
https://medium.com/@hatim.rih/the-d...n-layers-are-pushing-traditional-076356db0ed9

Looking through that second article, I am possibly misunderstanding some of what I 'think' about losing the need to know how to use or handle the code of a programming language...
 
The same people who were responsible before the AI "took over".
If someone does their homework with AI assistance, it is still their homework.
If a company sells a product that includes AI-written software, that company is still responsible for the marketability of their product.

When you buy a consumer product, you don't need to have an engineering understanding of how it works. For example, if Tesla wants to use AI in its "Autopilot" to recognize road obstructions, and it doesn't work, your beef is with Tesla, not the computer.
 
  • Like
Likes russ_watters, DaveE, DaveC426913 and 2 others
.Scott said:
The same people who were responsible before the AI "took over".
If someone does their homework with AI assistance, it is still their homework.
The only exception I can think of is if AI tools are provided by a company that makes some guarantees of the results. Then they would take on some legal risk.
 
You folks need to see Mr Whipple’s Factory on the original Twilight Zone.

Machines replace the workers and Robby the Robot replaces Mr Whipple.
 
  • Like
  • Informative
Likes DaveC426913 and symbolipoint
jedishrfu said:
You folks need to see Mr Whipple’s Factory on the original Twilight Zone.

Machines replace the workers and Robby the Robot replaces Mr Whipple.
Watched that one recently. Binged the entire series in order.
 
symbolipoint said:
all aspects of writing, developing, and testing software programs will one day all be handled through artificial intelligence.
Yes, but not all aspects of ownership, marketing, sales, distribution, revenue, and legality.

Software is a product of a company; software doesn't just burst forth into the world like an infant born in an empty meadow.
 
  • Like
Likes FactChecker
Until it becomes open source. I know a few companies that spun off software to open source.

Sometimes they become Apache projects, with a whole community of programmers improving and enhancing them.

Small companies do it when they realize that an internal project, while useful, is not sellable and would benefit from being made open source.
 
  • Like
Likes DaveC426913
Right, but in the case of open source software, the question of who takes responsibility answers itself. Anyone is free to make a branch and make improvements. Reviewers can bless or reject the changes; users can refuse to use it, preferring an earlier iteration.
 
Last edited:
  • Like
Likes russ_watters
No need for AI: you only need to read the EULA to know that (in common cases) nobody, ever, was actually responsible.
 
Last edited:
  • Love
  • Wow
  • Like
Likes harborsparrow, symbolipoint, FactChecker and 1 other person
  • #10
IMO, current AI systems are far from being able to take over the whole of software development. One might trust a chatbot to cough up specific working bits of code to plug in, or more likely, use as a template, thus making coding go faster. Code generators are useful only for mundane, formulaic situations, and overall program design, integration, and testing have always been highly creative work that is woefully underestimated because it is unseen and ill understood, even by the managers who hire people to do it. Of course, nothing will prevent some pointy-haired managers from trying to replace human coders altogether, and it will likely take some dramatic train wrecks to prove that unwise.

All the current chatbots that I and friends have tried are both helpful and, often, glaringly and shamelessly incorrect about things. A hilarious recent example of AI "logic" at work:

Person: How fast do otters swim?
AI Chatbot: River otters 8 to 10 mph. Sea otters 5 mph
Person: Do river otters swim faster than sea otters?
AI Chatbot: I don't know.
 
  • Wow
  • Like
Likes symbolipoint and FactChecker
  • #11
As for "many types of AI", I believe just about all of it involves various machine learning algorithms based on Bayes' theorem: probability-based, requiring training, and subject to statistical errors. If the code being written by an AI system can afford to have errors in it, fine. But if it's a surgical robot or a laser weapon or any system affecting human life that needs to be reliable, well, being right MOST of the time is not going to cut it. And frankly, I think people would prefer all their software to be right more than most of the time.

At this point, both the US and Russia have admitted to having mistakenly shot down a passenger airliner. There was software involved in the attempt to recognize what they were shooting at. That is a classic situation in which AI is already being used in warfare.

Think of email spam filters. We can at least intervene and rescue a misclassified email. So that is a perfectly good case where machine learning is a good fit. But in today's world, it is being unleashed unwisely in many areas.
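To make the probabilistic nature concrete, here is a minimal naive Bayes spam-scoring sketch (TypeScript, with a made-up toy training set; nothing like a production filter). The output is a probability derived from word counts, never a certainty, so some misclassification is built into the method itself.

```typescript
// Minimal naive Bayes "spam score" sketch (hypothetical toy data, not a real filter).
// The point: the result is a probability estimated from word counts, so the
// classifier is right most of the time at best, never guaranteed correct.

type Counts = Map<string, number>;

function countWords(messages: string[]): Counts {
  const counts: Counts = new Map();
  for (const msg of messages) {
    for (const word of msg.toLowerCase().split(/\s+/)) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  return counts;
}

function total(counts: Counts): number {
  let sum = 0;
  counts.forEach((n) => (sum += n));
  return sum;
}

// P(word | class) with Laplace smoothing so unseen words don't zero the product out.
function wordProb(word: string, counts: Counts, classTotal: number, vocab: number): number {
  return ((counts.get(word) ?? 0) + 1) / (classTotal + vocab);
}

function spamProbability(message: string, spam: string[], ham: string[]): number {
  const spamCounts = countWords(spam);
  const hamCounts = countWords(ham);
  const spamTotal = total(spamCounts);
  const hamTotal = total(hamCounts);
  const vocabSet = new Set<string>();
  spamCounts.forEach((_, w) => vocabSet.add(w));
  hamCounts.forEach((_, w) => vocabSet.add(w));

  // Work in log space to avoid underflow; priors come from the training counts.
  let logSpam = Math.log(spam.length / (spam.length + ham.length));
  let logHam = Math.log(ham.length / (spam.length + ham.length));
  for (const word of message.toLowerCase().split(/\s+/)) {
    logSpam += Math.log(wordProb(word, spamCounts, spamTotal, vocabSet.size));
    logHam += Math.log(wordProb(word, hamCounts, hamTotal, vocabSet.size));
  }
  // Bayes' theorem, converted back from log space to a probability.
  return 1 / (1 + Math.exp(logHam - logSpam));
}

// Tiny made-up training set; real filters train on millions of messages and still misfile some mail.
const spamExamples = ["win free money now", "free prize claim now"];
const hamExamples = ["meeting notes for monday", "lunch at noon tomorrow"];
console.log(spamProbability("claim your free money", spamExamples, hamExamples).toFixed(3));
```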
 
Last edited:
  • Wow
Likes symbolipoint
  • #12
In response to my: "The same people who were responsible before the AI "took over".
If someone does their homework with AI assistance, it is still their homework.", @FactChecker replied:
FactChecker said:
The only exception I can think of is if AI tools are provided by a company that makes some guarantees of the results. Then they would take on some legal risk.

This brings up a related topic - how you choose who you do business with and how you do that business.
First, if it's a free service, their "guarantee" is no more than puffery.
Second, even if you are paying for the AI service, your course instructor may be more sympathetic to "the dog ate my homework" than to "my AI supplier guaranteed the answers". More generally, any time a supplier provides a "guarantee" that seems unrealistic, you should expect that what you are actually buying is, at best, the recourse provided in the guarantee - and not a functional product.
 
  • #13
harborsparrow said:
IMO, current AI systems are far from being able to take over the whole of software development. One might trust a chatbot to cough up specific working bits of code to plug in, or more likely, use as a template, thus making coding go faster.
The trouble is that, collectively, they can improve in sophistication very quickly. They don't keep secrets or hide their code or any other things that might impede cross-pollination. And they can take in vast troves of data rapidly.

So, while they may be only doing simple tasks for now, I suspect the complexity will improve at a (which one is it? geometric? exponential?) rate.


harborsparrow said:
Person: How fast do otters swim?
AI Chatbot: River otters 8 to 10 mph. Sea otters 5 mph
Person: Do river otters swim faster than sea otters?
AI Chatbot: I don't know.
Ah, but this is a poor example of software dev, isn't it?
 
  • #14
DaveC426913 said:
The trouble is that, collectively, they can improve in sophistication very quickly. They don't keep secrets or hide their code or any other things that might impede cross-pollination. And they can take in vast troves of data rapidly.

So, while they may be only doing simple tasks for now, I suspect the complexity will improve at a (which one is it? geometric? exponential?) rate.



Ah, but this is a poor example of software dev, isn't it?
"I suspect the complexity will improve"

That is the question the money is riding on: will it keep scaling, or is this "as good as it gets"?
Even if this is "as good as it gets", is it already good enough to change how we design and build complex software?
The truth is, nobody knows. A Jevons-paradox future says we will just make more complex software and use just as many people (software developers of all types) to meet the extra demand for software that these systems will create.

Things like the fake animal-escape videos and other brain-dead media creations are the low-hanging fruit of AI slop.
 
  • Like
Likes harborsparrow
  • #15
DaveC426913 said:
The trouble is that, collectively, they can improve in sophistication very quickly. They don't keep secrets or hide their code or any other things that might impede cross-pollination. And they can take in vast troves of data rapidly.
There is money to be made. The best will hide their trade secrets.
Possibly they will hide them just because they are the best, or they will become the best because they make a profit and can hire many of the best workers.
 
  • #16
AI is more than just code, so making it open source may not be the same as for other software. AI also has a knowledge base and must be trained, and if I understand correctly (I am out of my comfort zone here), varying the training data, or even the order in which the algorithm is fed it, may yield different outcomes downstream.

It's eerily like the nature vs nurture issue with, say, identical twins.
 
  • #17
harborsparrow said:
AI is more than just code,
What we're talking about here is using AI to write software. The code is the product.

For example, I had AI write the HTML and JavaScript for my "How old is my kitten" calculator. I could have written it myself but date-time routines in JavaScript irritate me so I just let AI do it.
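The heart of it is just a date difference. Something like this sketch (TypeScript here, and not the actual generated code; just an illustration of the kind of routine I was happy to hand off):

```typescript
// Rough sketch of a kitten-age calculation (illustrative only, not the
// code the AI actually produced for the calculator).

const MS_PER_DAY = 24 * 60 * 60 * 1000;

function kittenAgeInWeeks(birthDate: Date, today: Date = new Date()): number {
  const days = (today.getTime() - birthDate.getTime()) / MS_PER_DAY;
  return Math.floor(days / 7);
}

// Example: a kitten born on 1 June 2025, checked on 1 September 2025 -> 13 weeks.
console.log(kittenAgeInWeeks(new Date("2025-06-01"), new Date("2025-09-01")));
```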
 
  • Like
Likes harborsparrow
  • #18
My comments were in the context of whether AI can be guaranteed to perform well. AI that produces code cannot be made more reliable just by making it (i.e., the code-generating AI agent) open source where a lot of eyes review its source code, because AI is Not Just Code.
 
Last edited:
  • Wow
Likes symbolipoint
  • #19
I still don't follow this logic.
harborsparrow said:
AI that produces code cannot be made more reliable just by making it open source where a lot of eyes review its source code,
Why not?

Note: we're not improving the AI per se; we're improving the product - the code it writes.

The following 'because' doesn't seem to answer that:
harborsparrow said:
because AI is Not Just Code.
What does it matter who/what produced it? In fact, what if you don't even know who produced it? What if someone lied and told you a block of code was written by AI when it was, in fact, written by a human? Or vice versa: what if it was written by AI but we were told it was written by a human?

After all, I am a machine (albeit a squishy one) that writes code too, and I am Not Just Code either.

How does the silicon-and-memory-chip nature of AI - as opposed to the carbon-and-neuron nature of a human - change how the code might get reviewed and improved were it to be released as open source? The physical nature of the writer of the code does not survive the translation into the code in a consequential way. All we see is the code itself.
 
  • Haha
  • Like
Likes harborsparrow and renormalize
  • #20
I agree that AI will always be faster than humans, and managers will use it to generate software if it seems to save money. But because of the probabilistic nature of AI's underlying algorithms, I think AI coders will always misunderstand a few important things, which (if true) would make AI agents dangerous in the end, and untrustworthy for writing mission-critical code unless all of their output is carefully reviewed by humans, in which case they might not save much money after all.

The reason I intuit that AI may not do well at coding in the long run is that even one of the most straightforward machine learning applications we know of, the email spam filters we've used for decades now, still regularly makes mistakes. Of course AI agents use multiple strategies, but their learning algorithms are still, at base, Bayesian: probabilistic, and not guaranteed to guess correctly all the time when "learning".

But I see Dave's point. If code is always carefully reviewed by humans, it doesn't matter whether an AI wrote it. But how many software shops skimp on code reviews, assuming that other precautions have taken care of all the risks? I'm just saying I anticipate some train wrecks.

And then I can imagine potentially chaotic effects if the existing code fed into AI agents to help them become good at coding has undetected bugs, as almost all large masses of code do. It boggles the mind.
 
Last edited:
  • Wow
Likes symbolipoint
  • #21
AI LLMs are just the current fad. There are a lot of other techniques in AI.
IMHO, LLMs will be best where exact, correct answers for complicated questions are not needed. They are best at general thoughts. Unfortunately, that includes a lot of artistic subjects.
AI for computer programming is not like that. "AI" other than LLMs has been assisting computer programmers for a long time and will continue to get better.
If we are talking about using AI-generated code without reviews or modifications by programmers, that may be unwise for a long time.
 
Last edited:
  • Wow
  • Agree
Likes symbolipoint and harborsparrow
  • #22
AI "assisting" programmers is indeed extremely helpful. I use it too. I think the enquiry here was whether AI agents can or will or should be trusted to do the whole development process from a specification or design fed to them. It is impossible to know. Just remember that the U.S. and Russia have both now already shot down a passenger airliner "by mistake", presumably because software wasn't accurate enough in identifying the flying aircraft.

My cautionary statements are against turning AI loose to manage a software development project without close supervision on any mission-critical software whose failure might result in the loss of human life.
 
  • Like
Likes FactChecker and symbolipoint
  • #23
There is such a thing as a QA testing cycle...
Don't think that's going away anytime soon.
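Even for AI-written code, much of that cycle is mundane and automatable. Here is a minimal sketch (hypothetical function and test cases, in TypeScript) of the kind of regression check a QA pass would run:

```typescript
// Minimal sketch of an automated check a QA cycle might run against
// AI-generated code (the function and the cases are hypothetical).

function ageInWeeks(birth: Date, today: Date): number {
  return Math.floor((today.getTime() - birth.getTime()) / (7 * 24 * 60 * 60 * 1000));
}

const cases: Array<[string, string, number]> = [
  ["2025-06-01", "2025-06-08", 1],   // exactly one week
  ["2025-06-01", "2025-06-07", 0],   // six days is still week zero
  ["2025-06-01", "2025-09-01", 13],  // 92 days
];

for (const [birth, today, expected] of cases) {
  const got = ageInWeeks(new Date(birth), new Date(today));
  console.assert(got === expected, `ageInWeeks(${birth}, ${today}) = ${got}, expected ${expected}`);
}
```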
 
  • #24
harborsparrow said:
AI "assisting" programmers is indeed extremely helpful. I use it too. I think the enquiry here was whether AI agents can or will or should be trusted to do the whole development process from a specification or design fed to them. It is impossible to know. Just remember that the U.S. and Russia have both now already shot down a passenger airliner "by mistake", presumably because software wasn't accurate enough in identifying the flying aircraft.
A current issue in air defense is that fast, stealthy airplanes require very fast reactions, faster than humanly possible. A US naval group has shot down its own plane. There are a lot of challenges with sensors and identifying aircraft, not just AI software. In a war scenario, the trade-off of risks sometimes forces us to trust automatic systems and the associated software. That is true whether the software is AI-generated or not.
 
  • Sad
Likes harborsparrow
  • #25
DaveC426913 said:
There is such a thing as a QA testing cycle...
Don't think that's going away anytime soon.
If only it were used.

https://arstechnica.com/cars/2025/10/software-update-bricks-some-jeep-4xe-hybrids-over-the-weekend/
Owners of some Jeep Wrangler 4xe hybrids have been left stranded after installing an over-the-air software update this weekend. The automaker pushed out a telematics update for the Uconnect infotainment system that evidently wasn't ready, resulting in cars losing power while driving and then becoming stranded.

Stranded Jeep owners have been detailing their experiences in forum and Reddit posts, as well as on YouTube. The buggy update doesn't appear to brick the car immediately. Instead, the failure appears to occur while driving—a far more serious problem. For some, this happened close to home and at low speed, but others claim to have experienced a powertrain failure at highway speeds.

Jeep pulled the update after reports of problems, but the software had already downloaded to many owners' cars by then. A member of Stellantis' social engagement team told 4xe owners at a Jeep forum to ignore the update pop-up if they haven't installed it yet.



Infotainment software bricks (well, not really; it seems there is an update that fixes the issue) the engine control software?

https://www.stellantis.com/en/news/...ion-to-accelerate-enterprise-wide-ai-adoption

TURIN – Today at Italian Tech Week, Stellantis and Mistral AI announced a new milestone in their partnership to accelerate the integration of artificial intelligence (AI) across Stellantis’ operations, deepening a collaboration that has already delivered tangible results in automotive innovation.

Over the past 18 months, the two companies have successfully worked together to develop advanced AI solutions and integrate Mistral AI’s models into a range of use cases, from next-generation in-car assistant to AI-powered business and engineering workflows. Starting today, Stellantis and Mistral AI enter a new phase of their collaboration with the ambition to form a strategic alliance that embeds AI at the core of Stellantis’ operations, unlocking efficiency, agility, and customer value at scale.

“Our work with Mistral AI is helping us move faster and smarter. What makes this partnership unique is Mistral AI’s ability to work closely with Stellantis to deliver meaningful results,” said Ned Curic, Stellantis Chief Engineering & Technology Officer. “Together, we are shaping intelligent, adaptable systems that bring real value to our customers and help Stellantis stay ahead.”
 
Last edited:
  • Wow
Likes DaveC426913
  • #26
AI should not, in my view, be used in weapons at all. This will be an unpopular view. I feel certain it has been happening for many years already.

In the 1980's at Bell Labs, I worked in software for long-haul lightwave communications and later for managing networks of these systems. I periodically had to battle co-workers who wanted to cut corners or take risks that could result in an occasional dropped bit or a deadlock or a race condition, "because it will never actually happen in a million years and we don't have the money to do it this more complex way". There was this enormous pressure to shortcut the code review process ("testing is enough"), and then they started holding people's mistakes against them after code reviews, so people didn't report bugs else they would piss off their colleagues. In 1990, dozens of us were called up at 3 a.m. to investigate a failure of long-distance phone calling across the entire US, which led to a congressional hearing in 1991. The number lookup system that translated dialed numbers into routing had just been updated, so we knew where to look, and within 2 hours, someone found the cause--a simple deadlock in a shared resource, and that should have been caught in design and code reviews. But someone had skimped or gamed the review process.

You can't tell me that human eyes are going to inspect all this AI-generated code before it is used in weapons systems. It's not the human way. The first instinct of so many managers, and even some software developers who want to take an easier path with managers, is to cut costs no matter what.
 
  • Like
  • Wow
Likes FactChecker, DaveC426913 and symbolipoint
  • #27
harborsparrow said:
AI should not, in my view, be used in weapons at all. This will be an unpopular view. I feel certain it has been happening for many years already.
"AI" is a broad spectrum of techniques and applications. IMO, there is a gray area regarding what would be called AI. Some single-application neural networks have been used for decades because of its ability to fit data.
harborsparrow said:
In the 1980's at Bell Labs, I worked in software for long-haul lightwave communications and later for managing networks of these systems. I periodically had to battle co-workers who wanted to cut corners or take risks that could result in an occasional dropped bit or a deadlock or a race condition, "because it will never actually happen in a million years and we don't have the money to do it this more complex way". There was this enormous pressure to shortcut the code review process ("testing is enough"), and then they started holding people's mistakes against them after code reviews, so people didn't report bugs else they would piss off their colleagues. In 1990, dozens of us were called up at 3 a.m. to investigate a failure of long-distance phone calling across the entire US, which led to a congressional hearing in 1991. The number lookup system that translated dialed numbers into routing had just been updated, so we knew where to look, and within 2 hours, someone found the cause--a simple deadlock in a shared resource, and that should have been caught in design and code reviews. But someone had skimped or gamed the review process.

You can't tell me that human eyes are going to inspect all this AI-generated code before it is used in weapons systems. It's not the human way. The first instinct of so many managers, and even some software developers who want to take an easier path with managers, is to cut costs no matter what.
IMHO, peer review is not very effective in discovering bugs in software. Much more effective are coding standards to avoid error-prone types of code, thorough test sets (covering all possible paths), and extensive tests in simulators (with error reviews and thorough searches for similar errors).
I don't know the current capability of AI tools to verify code logic, but that is on the horizon.
 
  • #28
Factchecker, we'll have to disagree about the usefulness of code reviews. I recall them catching many issues, but I also recall them becoming far less effective when the software engineering pinheads came up with the idea of basing performance reviews on how few faults were found in one's code during the reviews. So I think their effectiveness is linked to management style and practices.
 
  • #29
harborsparrow said:
Factchecker, we'll have to disagree about the usefulness of code reviews. I recall them catching many issues
Yes, although I question how code reviews of AI-generated code might go. I imagine it will get increasingly inscrutable. I suspect there will be a lot of "why is it doing it like that? That's so backwards. And what's this thing over here? Where's the modularity? I'm tempted to tear it down and rewrite it myself."
 
  • Love
Likes harborsparrow
  • #30
harborsparrow said:
Factchecker, we'll have to disagree about the usefulness of code reviews.
I probably should have said "not nearly enough". They do no harm, and can catch obvious mistakes. That leaves the question of how to catch the remaining 30% of the errors.
 
  • Like
Likes harborsparrow
  • #31
harborsparrow said:
You can't tell me that human eyes are going to inspect all this AI-generated code
With the so-called 'increased effectiveness' of AI-supported coding, the humans assigned will need to watch over a lot more, and a lot less familiar, code than ever before.
 
  • Agree
Likes DaveC426913
  • #32
Rive said:
With the so-called 'increased effectiveness' of AI-supported coding, the humans assigned will need to watch over a lot more, and a lot less familiar, code than ever before.
If you've ever seen code from MATLAB's Simulink autocode generation, you know that it is not the kind of code you would want to review. For one thing, all the variable names are automatically generated and obscure.
That being said, with diagram standards that avoid problem areas, massive programs can be generated that are very reliable -- far better than human programmers could do.
 
  • Wow
Likes symbolipoint
  • #33
FactChecker said:
MATLAB's Simulink autocode generation
Nice, but most programmers I know (who are also blessed with AI-supportive management) deal with far less structured/organized/proper code ("far less" being a very polite description of the usual overgrown mess of legacy code mixed with haphazard add-ons from last-minute customer requests and 'seemed like a good idea' parts).

Somehow I doubt that MATLAB was involved in that Jeep thing mentioned earlier in this topic, for example...
 
  • #34