How can scientists trust closed source programs?

In summary: There are various ways to perform these checks. One is to use a mathematical model of the system to verify that the calculations produce the correct results; this can be done with a variety of methods, including automated theorem provers and mathematical programming languages. In some cases the results of the mathematical model can be compared with experimental data, and if the model and data agree, this provides some confidence that the system is correct. Another approach is to exercise a copy of the system that is not in use in the real world, by simulating it on a computer or testing it in a virtual environment. If the system behaves the same in the test environment as it does in real use, that too builds confidence.
  • #36
FactChecker said:
I don't think this is necessarily true. Some software can have a very informal development history. You would need to be very loose with the terminology to make that statement about all software.
Indeed. My experience has been that the formal process is the exception rather than the rule.
 
  • Like
Likes M Saad
  • #37
stevendaryl said:
I didn't claim that open source would solve everything.
But it seems that open source allows users to sometimes find these errors before running the program, or even to fix them. So it does solve some things, but open-source software can also end up buggier, depending on the support behind it, right?
 
  • #38
RaulTheUCSCSlug said:
open source allows for the users to sometimes find these errors before running the program or may be able to fix it.

Or for malicious users to use it to their advantage. I see no foundation to the presumption that open source volunteers all have good intentions.
 
  • #39
fluidistic said:
I wonder how scientists can trust closed-source programs/software. How can they be sure there aren't bugs that return a wrong output every now and then?
As others said, they can't. Open-source software will have bugs too, and even in-house custom software developed for a large collaboration (like in particle physics) will have bugs. You just have to assume that these bugs are rare, and knowing that the bugs possibly exist, be on the lookout for possible problems.

RaulTheUCSCSlug said:
But seems to be that open source allows for the users to sometimes find these errors before running the program or may be able to fix it.
This might be true in principle, but few users, if any, are going to spend the time doing an extensive code review before using open-source software. If you run into strange behavior by some software, then you might go look into the code to see if there's something wrong. This kind of transparency is one of the main advantages of open-source software.

You may recall the FDIV bug in the Pentium. The problem wasn't so much that the bug existed; any chip that complex is going to have bugs. It was Intel's not being transparent about the bug's existence. Instead, Professor Thomas R. Nicely had to waste a few months tracking down why his software was giving inconsistent results, only to discover that Intel had already known about it.
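
For illustration (not from the original post), the consistency check that exposed the FDIV flaw fits in a few lines of Python; the operands are the widely cited test values:

```python
# On a correct FPU this division round-trips exactly for these operands;
# a flawed Pentium famously returned a residual of 256.
x, y = 4195835.0, 3145727.0
residual = x - (x / y) * y
if residual == 0.0:
    print("division consistency check passed")
else:
    print(f"inconsistent division result: residual = {residual}")
```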
 
  • Like
Likes jasonRF, M Saad and fluidistic
  • #40
I think the real strengths of open source are that you always have the option to make (or pay someone else to make) changes if you need, even if the main devs aren't interested or have gone bust, and that you can never be locked in by proprietary file or communication formats. That means that you can never find yourself with a piece of kit that you can't keep running anymore because of software issues. It might be expensive to take on code maintenance - but at least you have the option and aren't stuck trying to reverse engineer a proprietary black box.
 
  • Like
Likes M Saad
  • #41
FactChecker said:
I don't think this is necessarily true. Some software can have a very informal development history. You would need to be very loose with the terminology to make that statement about all software.
The original meaning of "hacking" was programming for the fun of programming. A Czech coworker of mine called it "happy engineering". And it is certainly possible for someone working out of their garage to create a useful product - as a solo effort.

I should have been clear that I was referring to more serious efforts - such as the question about an election system that I was responding to.
In general, the more complex the system and the more people involved, the more needs to be written down.
 
  • Like
Likes M Saad
  • #42
.Scott said:
The original meaning of "hacking" was programming for the fun of programming. A Czech coworker of mine called it "happy engineering". And it is certainly possible for someone working out of their garage to create a useful product - as a solo effort.

I should have been clear that I was referring to more serious efforts - such as the question about an election system that I was responding to.
In general, the more complex the system and the more people involved, the more needs to be written down.
A lot of code is initially developed informally. As the code evolves, it becomes larger and more useful. Then somebody wants to use it in a serious way and either doesn't know or doesn't care that it hasn't been fully validated. Unless there is enough time and money to refactor the code, it is likely to be used without a formal development process.
 
  • Like
Likes M Saad
  • #43
Even if the software is OK, it may be used incorrectly:
Review of the Use of Statistics in Infection and Immunity
"Typically, at least half of the published scientific articles that use statistical methods contain statistical errors. Common errors include failing to document the statistical methods used or using an improper method to test a statistical hypothesis.
...
The most common analysis errors are failure to adjust or account for multiple comparisons (27 studies), reporting a conclusion based on observation without conducting a statistical test (20 studies), and use of statistical tests that assume a normal distribution on data that follow a skewed distribution (at least 11 studies).
...
When variables are log transformed and analysis is performed on the transformed variables, the antilog of the result is often calculated to obtain the geometric mean. When the geometric mean is reported, it is not appropriate to report the antilog of the standard error of the mean of the logged data as a measure of variability.
...
In summary, while most of the statistics reported in Infection and Immunity are fairly straightforward comparisons of treatment groups, even these simple comparisons are often analyzed or reported incorrectly."

Of course, physicists know better.
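
To make the geometric-mean point concrete, here is a minimal Python sketch (not from the original post, with made-up data): the summary statistic and its uncertainty are both computed on the log scale and only then back-transformed, whereas the antilog of the SEM by itself is a dimensionless ratio, not an error bar.

```python
import math
import statistics

# Made-up skewed measurements (e.g., titers).
data = [12.0, 15.0, 40.0, 95.0, 310.0, 820.0]

logs = [math.log(x) for x in data]
mean_log = statistics.mean(logs)
sem_log = statistics.stdev(logs) / math.sqrt(len(logs))

geo_mean = math.exp(mean_log)  # geometric mean: antilog of the log-scale mean

# Correct: back-transform the *interval* (large-sample 95% CI),
# which comes out asymmetric around the geometric mean.
ci_low = math.exp(mean_log - 1.96 * sem_log)
ci_high = math.exp(mean_log + 1.96 * sem_log)

# Incorrect: exp(sem_log) is a ratio, so "geo_mean +/- exp(sem_log)"
# is not a meaningful error bar.
print(f"geometric mean = {geo_mean:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```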
 
  • Like
Likes M Saad, Dale and mfb
  • #44
FactChecker said:
A lot of code is initially developed informally. As the code evolves, it becomes larger and more useful. Then somebody wants to use it in a serious way and either doesn't know or doesn't care that it hasn't been fully validated. Unless there is enough time and money to refactor the code, it is likely to be used without a formal development process.
That would be an example of code that shouldn't be trusted - meaning, it shouldn't even be installed on a critical computer system. For example, most would consider Adobe Reader non-critical software. What's the worst that could happen - it crashes and you can't read a document? But a few years ago it provided the entry point for a zero-day exploit - a Trojan that was used to install key-loggers and all sorts of other nasty things.
 
  • Like
Likes M Saad
  • #45
.Scott said:
That would be an example of code that shouldn't be trusted - meaning, it shouldn't even be installed on a critical computer system. For example, most would consider Adobe Reader non-critical software.

What about the scientific calculator on the scientist's desk; would you consider that critical? A wrong calculation could mislead the scientist.

Would you extend validation requirements down to the level of devices costing only a few dollars or a few pennies each, or would you trust certain manufacturers based only on their size and reputation?

Or perhaps you mean that trivial devices can't be critical?
 
  • #46
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests to figure out whether the program behaves as it should, how can they be sure that the programs aren't going to suffer from bugs and stuff like that (malicious code included) in further releases?
The tests are repeated on every release. Actually, it tends to be even more frequent than that: every code commit. We use automated software to test the business logic of all of our software, unit tests to check the accuracy of individual methods, and external checkers to verify not only that the output is correct, but also the algorithm used to produce it.

Engineers test each other's code in a process called code review, which includes reviewing the tests, and they're really good at finding all of the weird cases. For example, if I were checking float add(float, float); I'd write tests for all combinations of adding -inf, -2.5, -1, 0, 1, 2.5, inf, and NaN, as well as checking two numbers that I know will overflow the float.
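
For illustration (not from the original post), here is what a few of those edge-case checks might look like in Python; add() is a hypothetical stand-in for the function under review, and the expected values follow IEEE 754 arithmetic:

```python
import math

def add(a: float, b: float) -> float:
    # Hypothetical stand-in for the function under review.
    return a + b

# Spot checks against IEEE 754 expectations.
assert add(1.0, 2.5) == 3.5
assert add(-2.5, 2.5) == 0.0
assert add(-1.0, 0.0) == -1.0
assert math.isinf(add(math.inf, 1.0))        # inf + finite -> inf
assert math.isnan(add(math.inf, -math.inf))  # opposite infinities -> NaN
assert math.isnan(add(math.nan, 0.0))        # NaN propagates
assert math.isinf(add(1e308, 1e308))         # finite inputs that overflow
print("edge-case tests passed")
```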
 
  • #47
anorlunda said:
What about the scientific calculator on the scientist's desk; would you consider that critical? A wrong calculation could mislead the scientist.

Would you extend validation requirements down to the level of devices costing only a few dollars or a few pennies each, or would you trust certain manufacturers based only on their size and reputation?

Or perhaps you mean that trivial devices can't be critical?
First, let's talk about "critical". I mentioned "mission critical" before - and perhaps I abbreviated it as simply "critical". Generally, "mission critical" refers to components that must perform correctly in order to successfully complete a mission. And by mission, we are talking about things like allowing the LHC to work, allowing an aircraft carrier to navigate, allowing a Martian lander to explore (or allowing the Mars Climate Orbiter to orbit). Even if lives are not at stake (which they may be), such missions involve major portions of hundreds of careers - or more.

Software development tools (including calculators) are certainly very important and need to be checked - and in some cases certified.

Physical calculators make for odd examples, because they are very unreliable - not because they have programming defects, but because they rely on humans to key information in and transcribe the result back. For example, I would be astonished if critical LHC design issues were based on the results from desktop calculators.

On the other hand, a common spreadsheet program used in a common way on a trusted system is very reliable. With millions of global users exercising the application week after week, errors tend to be found and corrected quickly. And, of course, the spreadsheet program leaves an auditable artifact behind - the spreadsheet file.

Also, external calculations are not usually an Achilles heel. For example, calculations are often made in the development of test procedures - but a faulty computation would likely cause the test to fail, and subsequent diagnostics would lead back to the fault in the test procedure.

Regarding manufacturers: Of course, it is certainly possible for a software tool manufacturer to be disqualified on the basis of reputation. But the focus is usually on the product - on the methods the manufacturer uses to test and certify the tools, or on the system developer's ability to check the tool before committing to using it. For example, putting a Windows XP operating system in a mission-critical system is pretty sketchy. But using a stripped-down Windows XPe with the right test tools could make it a useful component in a system with other safeguards. Even that wouldn't be good enough for a consumer automobile safety system - there you would need a certified compiler, certified operating system, etc.
 
  • #48
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests to figure out whether the program behaves as it should, how can they be sure that the programs aren't going to suffer from bugs and stuff like that (malicious code included) in further releases? Are there any kinds of extensive tests performed on the software that is generally used in branches of physics or any other science involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.
To be fair, systems like that do exist, and a lot of research is already done on open-source systems on a massive scale - 94 percent of the world's top 500 supercomputers run Linux (http://www.linux.com/news/enterprise/high-performance/147-high-performance/666669-94-percent-of-the-worlds-top-500-supercomputers-run-linux-/). As for trusting proprietary software: the math and physics required to code this research (simulation) software is extremely complex, and those coders are paid a lot to make sure it works right when you plug numbers into it. You still have to know what you are punching in, though. Source: a solar physicist at my university.
 
Last edited by a moderator:
  • #49
.Scott said:
First, let's talk about "critical". I mentioned "mission critical" before - and perhaps I abbreviated it as simply "critical". Generally, "mission critical" refers to components that must perform correctly in order to successfully complete a mission. And by mission, we are talking about things like allowing the LHC to work, allowing an aircraft carrier to navigate, allowing a Martian lander to explore (or allowing the Mars Climate Orbiter to orbit). Even if lives are not at stake (which they may be), such missions involve major portions of hundreds of careers - or more.

A lowly chip in a single security badge could enable a saboteur to bring all that crashing down. The cliché is "The bigger they are, the harder they fall."

I think you're being pretentious in dismissing my point that software is all around us, in the mundane and trivial, as well as in the grand and elaborate. It is extreme and foolish to pretend that the mundane and trivial carry no risk to the mission.

At the other extreme: I'm currently working on an article about power grid cyber security. On that subject, the public and the politicians believe that every mundane digital device owned by a power company could be hacked to bring about the end of civilization.

Neither extreme is valid.

If anyone wants to make sweeping general statements about software, they should encompass the whole universe of software.
 
  • #50
anorlunda said:
A lowly chip in a single security badge could enable a saboteur to bring all that crashing down. The cliché is "The bigger they are, the harder they fall."

I think you're being pretentious in dismissing my point that software is all around us, in the mundane and trivial, as well as in the grand and elaborate. It is extreme and foolish to pretend that the mundane and trivial carry no risk to the mission.
I'm not sure where I pretended that the mundane and trivial carried no risk. Perhaps you didn't catch my allusion to the mundane and trivial failure that cost a very significant Mars mission - or my mention of the Adobe zero-day issue.
anorlunda said:
At the other extreme: I'm currently working on an article about power grid cyber security. On that subject, the public and the politicians believe that every mundane digital device owned by a power company could be hacked to bring about the end of civilization.
Actually, I think most people have no opinion on the matter. On the other hand, I have worked with SCADA systems and was not completely satisfied with how well they were protected. And I think the Iranians were not fully satisfied with how well their uranium enrichment SCADA system was protected.
 
  • #51
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests to figure out whether the program behaves as it should, how can they be sure that the programs aren't going to suffer from bugs and stuff like that (malicious code included) in further releases? Are there any kinds of extensive tests performed on the software that is generally used in branches of physics or any other science involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.

My collaborators and I tend to wring out all our analysis software thoroughly, whether closed source, open source, or written in house.

One standard operating procedure is to use the code on a wide array of inputs with known outputs.

Another is to repeat the analysis independently with different codes. For example, one collaborator might use MS Excel for a spreadsheet analysis, while another uses the LibreOffice spreadsheet. Or one may use a commercial stats package, while another uses custom software written in C or R.

I've always preferred approaches that store the data in a raw form and then proceed with the analysis from that point in a way that keeps several different independent analysis paths possible.

The whole "repeatability" thing in experimental science not only provides an important buffer against errors in the original experiments, it also provides an important buffer against analysis errors.
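
For illustration (not from the original post), here is a toy Python sketch of that cross-checking in miniature, with placeholder data and statistic: the same quantity computed by two independent routes on an input with a known answer.

```python
import statistics

# Input with a known answer: the mean of 1..9 is exactly 5.0.
data = [float(i) for i in range(1, 10)]
expected = 5.0

mean_lib = statistics.mean(data)     # route 1: library implementation
mean_manual = sum(data) / len(data)  # route 2: independent hand-rolled code

assert mean_lib == expected and mean_manual == expected
assert mean_lib == mean_manual
print("independent analyses agree:", mean_lib)
```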
 
Last edited:
  • Like
Likes M Saad, JorisL and fluidistic
  • #52
Worst case scenario - radiation overdoses occurred with the Therac-25, which removed hardware-based safety measures and relied on software. Not mentioned in the wiki article is that the initial "fix" was to remove the cursor-up key cap from the VT100 terminal and tell operators not to use the cursor-up key.

http://en.wikipedia.org/wiki/Therac-25
 
  • Like
Likes Buzz Bloom
  • #53
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests to figure out whether the program behaves as it should, how can they be sure that the programs aren't going to suffer from bugs and stuff like that (malicious code included) in further releases? Are there any kinds of extensive tests performed on the software that is generally used in branches of physics or any other science involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.

It's easy to do. I've seen this type of qualification done for software used in life-critical systems before. As for new releases, you will have to requalify the software each time.

In a nutshell, you need to design validation tests for the software. You determine a set of test vectors (input values) and their expected outputs. Then you create a test procedure to perform that validation.

While it is essentially easy to do, it may be complicated to execute depending on what the software is supposed to do and the depth of testing required.

Lastly, in some applications, such as aviation under DO-178B Level A, you may need to use an emulator to test decision points and branches within the software. The only way to do that is with the help of the software manufacturer. That level of testing is probably beyond what you would need, but the point is, if a system is critical enough, there are structured mechanisms to validate and certify it based on industry and military standards.
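
For illustration (not from the original post), a minimal Python sketch of such a test-vector validation procedure; the function and vectors here are hypothetical:

```python
import math

def system_under_test(x: float) -> float:
    # Hypothetical stand-in for the routine being qualified.
    return math.sqrt(x)

# Test vectors: (input, expected output, absolute tolerance).
vectors = [
    (0.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (2.0, 1.4142135623730951, 1e-12),
    (1e6, 1000.0, 1e-9),
]

failures = 0
for x, expected, tol in vectors:
    got = system_under_test(x)
    if abs(got - expected) > tol:
        failures += 1
        print(f"FAIL: f({x}) = {got}, expected {expected} +/- {tol}")

print("validation", "PASSED" if failures == 0 else f"FAILED on {failures} vector(s)")
```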
 
  • #54
rootone said:
Even those get ironed out eventually by 'defensive' programming adjustments which detect and report improper input and so on before the program will proceed.
Hi rootone:
I believe that most software developers would expect a program's user interface to check that input values are in the range acceptable to the program. Failure to have such a check would be considered a design bug.
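
For illustration (not from the original post), a minimal Python sketch of the kind of range check meant here; the function name and limits are made-up placeholders:

```python
def read_setting(raw: str, low: float = 0.0, high: float = 200.0) -> float:
    """Parse an operator-entered value and reject anything outside the
    acceptable range, rather than passing it downstream unchecked.
    The [low, high] limits here are hypothetical."""
    value = float(raw)  # raises ValueError on non-numeric input
    if not (low <= value <= high):
        raise ValueError(f"{value} is outside the acceptable range [{low}, {high}]")
    return value

# read_setting("50")  -> 50.0
# read_setting("999") -> ValueError
```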

BTW: As I recall, several decades ago there was an x-ray machine with built-in software that did not have such a check on input values, and a user error caused the death of a patient.

ADDED

I now see that post #52 already mentioned this.

Regards,
Buzz
 
  • #55
Software can be extremely deceptive, and extremely wrong, which is exactly why some of us raised big objections back in the 1980s when President Reagan wanted to fund research into shooting lasers in space at enemy targets. Bad idea, never to be trusted. Battle robots are an equally bad idea. Either on purpose or accidentally, almost all software on the planet does unexpected things once in a while.
 
  • #56
harborsparrow said:
almost all software on the planet does unexpected things once in a while.

How is that different from wetware?
 
  • Like
Likes mfb
