How can scientists trust closed source programs?

AI Thread Summary
Scientists face challenges in trusting closed-source software due to concerns about undetected bugs and potential malicious code. While extensive testing, benchmarking, and routine quality assurance can help verify software reliability, there is no foolproof method to ensure absolute correctness. In fields like Medical Physics, independent checks and literature reviews are essential for establishing confidence in software used for critical calculations. Open-source software allows for greater scrutiny of code, potentially reducing the risk of hidden issues. Ultimately, both closed and open-source programs require careful evaluation and testing to mitigate risks associated with software reliability.
  • #51
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive testing that figures out whether the program behaves as it should, how can they be sure that the programs aren't going to suffer from bugs and the like (malicious code included) in further releases? Are there any kinds of extensive tests performed on software that is generally used in branches of physics or any other science involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.

My collaborators and I tend to wring out all our analysis software thoroughly, whether it is closed source, open source, or written in-house.

One standard operating procedure is to use the code on a wide array of inputs with known outputs.
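
To make that concrete, here is a rough Python sketch of such a check; the analyze function and the reference values are hypothetical stand-ins for whatever the real code computes:

import math
from statistics import mean

def test_known_outputs(analyze):
    # Hypothetical reference cases: inputs whose correct results are known
    # in advance (hand calculations, published values, or a trusted tool).
    reference_cases = [
        ([1.0, 2.0, 3.0], 2.0),   # e.g. the mean of a short list
        ([5.0, 5.0, 5.0], 5.0),
    ]
    for data, expected in reference_cases:
        result = analyze(data)
        # Compare with a tolerance rather than exact equality, since
        # floating-point results rarely match bit for bit.
        assert math.isclose(result, expected, rel_tol=1e-9), (data, result, expected)

# Exercise a library routine against the known answers.
test_known_outputs(mean)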

Another is to repeat the analysis independently with different codes. For example, one collaborator might use MS Excel for a spreadsheet analysis, while another uses the LibreOffice spreadsheet. Or one may use a commercial stats package, while another uses custom software written in C or R.
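
A toy illustration of that cross-check in Python, with a library routine and a hand-rolled routine standing in for two independent tools:

import numpy as np

def independent_std(data):
    # Hand-rolled sample standard deviation, deliberately sharing no code
    # with the library path used below.
    n = len(data)
    m = sum(data) / n
    return (sum((x - m) ** 2 for x in data) / (n - 1)) ** 0.5

data = [12.1, 11.9, 12.4, 12.0, 12.2]
a = float(np.std(data, ddof=1))    # path 1: library routine
b = independent_std(data)          # path 2: independent implementation
assert abs(a - b) < 1e-9, (a, b)   # the two paths should agree to rounding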

I've always preferred data approaches that store the data in raw form and then proceed with the analysis from that point in a way that keeps several different independent analysis paths possible.

The whole "repeatability" thing in experimental science not only provides an important buffer against errors in the original experiments, it also provides an important buffer against analysis errors.
 
  • #52
Worst-case scenario: radiation overdoses occurred with the Therac-25, which removed hardware-based safety measures and relied on software. Not mentioned in the wiki article is that the initial "fix" was to remove the cursor-up key cap from the VT100 terminal and tell operators not to use the cursor-up key.

http://en.wikipedia.org/wiki/Therac-25
 
  • #53
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive testing that figures out whether the program behaves as it should, how can they be sure that the programs aren't going to suffer from bugs and the like (malicious code included) in further releases? Are there any kinds of extensive tests performed on software that is generally used in branches of physics or any other science involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.

It's easy to do. I've seen this type of qualification done for software used in life-critical systems before. As for new releases, you will have to requalify the software each time.

In a nutshell, you need to design validation tests for the software. You determine a set of test vectors (input values) and their expected outputs. Then you create a test procedure to perform that validation.
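
As a minimal sketch (in Python, with a made-up routine standing in for the software being qualified), that procedure might look like:

# Hypothetical validation procedure: drive the software under test with a
# fixed table of test vectors and compare each result to its expected output.
test_vectors = [
    # (input values, expected output)
    ((0.0, 0.0), 0.0),
    ((3.0, 4.0), 5.0),
    ((-3.0, 4.0), 5.0),
]

def software_under_test(x, y):
    # Stand-in for the real (possibly closed-source) routine being qualified.
    return (x ** 2 + y ** 2) ** 0.5

failures = []
for (x, y), expected in test_vectors:
    actual = software_under_test(x, y)
    if abs(actual - expected) > 1e-9:
        failures.append(((x, y), expected, actual))

# The test report would record every vector, its expected value, and the result.
print("PASS" if not failures else f"FAIL: {failures}")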

While it is essentially easy to do, it may be complicated to execute depending on what the software is supposed to do and the depth of testing required.

Lastly, in some applications, such as aviation software certified to DO-178B Level A, you may need to use an emulator to test decision points and branches within the software. The only way to do that is with the help of the software manufacturer. That level of testing is probably beyond what you would need, but the point is, if a system is critical enough, there are structured mechanisms to validate and certify it based on industry and military standards.
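
As a toy illustration only (real DO-178B work uses instrumented builds and qualified tools, not a few asserts), exercising every decision outcome looks roughly like this:

def interlock(dose, limit, override):
    # Toy decision logic with two conditions; real avionics code would be
    # instrumented and run under an emulator to demonstrate coverage.
    if dose > limit and not override:
        return "BLOCK"
    return "ALLOW"

# Test cases chosen so each condition independently affects the decision
# (the spirit of MC/DC coverage at DO-178B Level A).
assert interlock(dose=10, limit=5, override=False) == "BLOCK"  # both conditions true
assert interlock(dose=10, limit=5, override=True) == "ALLOW"   # override flips the outcome
assert interlock(dose=1, limit=5, override=False) == "ALLOW"   # dose condition flips the outcome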
 
  • #54
rootone said:
Even those get ironed out eventually by 'defensive' programming adjustments which detect and report improper input and so on before the program will proceed.
Hi rootone:
I believe that most software developers would expect a program's user interface to check that input values are in the range acceptable to the program. Also, failure to have such a check would be considered a design bug.
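
For illustration, a minimal range check of that kind might look like this in Python (the parameter name and the 0-25 MeV limits are made up):

def set_beam_energy(mev):
    # Reject out-of-range input at the interface instead of passing it on;
    # the (0, 25] MeV limits here are invented for illustration.
    if not isinstance(mev, (int, float)):
        raise TypeError("beam energy must be a number")
    if not 0.0 < mev <= 25.0:
        raise ValueError(f"beam energy {mev} MeV outside allowed range (0, 25]")
    return mev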

BTW: As I recall, several decades ago there was an x-ray machine with built-in software that did not have such a check for input values, and a user error caused the death of a patient.

ADDED

I now see that post #52 already mentioned this.

Regards,
Buzz
 
  • #55
Software can be extremely deceptive, and extremely wrong, which is exactly why some of us raised big objections back in the 1980s when President Reagan wanted to fund research to shoot lasers in space at enemy targets. Bad idea, never to be trusted. Battle robots are an equally bad idea. Either on purpose or accidentally, almost all software on the planet does unexpected things once in a while.
 
  • #56
harborsparrow said:
almost all software on the planet does unexpected things once in a while.

How is that different from wetware?
 