How can scientists trust closed source programs?

AI Thread Summary
Scientists face challenges in trusting closed-source software due to concerns about undetected bugs and potential malicious code. While extensive testing, benchmarking, and routine quality assurance can help verify software reliability, there is no foolproof method to ensure absolute correctness. In fields like Medical Physics, independent checks and literature reviews are essential for establishing confidence in software used for critical calculations. Open-source software allows for greater scrutiny of code, potentially reducing the risk of hidden issues. Ultimately, both closed and open-source programs require careful evaluation and testing to mitigate risks associated with software reliability.
fluidistic
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests that figure out whether the program behaves as it should, how can they be sure the programs won't suffer from bugs and the like (malicious code included) in future releases? Are there any kinds of extensive tests performed on software that is generally used in branches of physics or other sciences involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.
 
  • Like
Likes M Saad, nrqed and Greg Bernhardt
You can't, but then open source software is not immune to bugs either.
 
Unfortunately I think there is a lot of blind trust in closed source programs (and in open ones, for that matter).

That said, proper operation of a code is verified through (i) benchmarking, (ii) routine quality assurance testing, and (iii) independent checking. In my field, Medical Physics, for example, we often use commercial software for planning radiation therapy treatments in the clinic. These codes determine where the radiation dose goes in the patient and what parameters to set on the linear accelerator to deliver the intended treatment. It's very important that they get these calculations correct every time.

So before implementing clinically, we first have to run through a set of basic tests to confirm that the code accurately reproduces measurements under given conditions. Of course, even before this, we go through the literature, where these tests have been performed by others. This is how we can establish how reliable the given algorithm is and the conditions under which any assumptions break down. This also lets us know what a reasonable tolerance is - how close to measured values we can expect to get. Then we run through a set of our own tests confirming that our version performs as advertised. Of course, you can't test everything, but you can try to approximate both commonly encountered situations and extreme situations where the code may not perform so well.

Once you've effectively benchmarked your code, it's also important to put it through routine quality assurance testing. So, for example, you may want to repeat a subset of your benchmarking calculations once a month, or after a software version upgrade, or after a patch installation, to assure yourself that your code is still performing as you expect.
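As a rough illustration of what such a routine check can look like (this is only a sketch: the case names, reference values and 2% tolerance are invented, not an actual clinical QA procedure), a small script can recompute a handful of benchmark cases and flag any drift beyond the agreed tolerance:

```python
# Hypothetical routine QA check: recompute a few benchmark cases and
# compare against reference values established during commissioning.
# Case names, reference numbers and the 2% tolerance are invented.

def run_qa(compute_dose, reference, rel_tol=0.02):
    """Return a list of (case, computed, reference, passed) tuples."""
    results = []
    for case, ref in reference.items():
        value = compute_dose(case)
        passed = abs(value - ref) <= rel_tol * abs(ref)
        results.append((case, value, ref, passed))
    return results

if __name__ == "__main__":
    # Stand-in for the planning system's calculation (normally a call
    # into the commercial code or an exported plan).
    def compute_dose(case):
        fake_outputs = {"10x10_field_10cm_depth": 0.667,
                        "small_field_5cm_depth": 0.912}
        return fake_outputs[case]

    reference = {"10x10_field_10cm_depth": 0.664,   # measured at commissioning
                 "small_field_5cm_depth": 0.905}

    for case, value, ref, passed in run_qa(compute_dose, reference):
        print(f"{case}: computed {value:.3f}, reference {ref:.3f}, "
              f"{'PASS' if passed else 'FAIL'}")
```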

Finally, when it comes to something critical like clinical calculations, we confirm the results through redundant, independent checks or measurements. This can be as simple as performing a hand calculation or using a completely different planning system to redo the calculation. When independent systems arrive at the same answer, you have some increased confidence that the answer is correct. It's still possible they can both arrive at the wrong answer - GIGO and all that - but this serves to increase confidence that at least your black box is working as expected.
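To make the independent-check idea concrete, here is a toy analogue of the "hand calculation" case (the inverse-square-plus-attenuation model, the numbers and the 5% acceptance threshold are all invented for illustration, not how any particular planning system is checked):

```python
import math

# Toy independent check: compare a "black box" result with a crude
# hand-style estimate (inverse square law plus exponential attenuation).
# All numbers are invented.

def hand_estimate(dose_ref, d_ref, d, depth_cm, mu_per_cm=0.05):
    """Inverse-square falloff from d_ref to d plus exponential attenuation."""
    return dose_ref * (d_ref / d) ** 2 * math.exp(-mu_per_cm * depth_cm)

black_box_result = 0.49          # pretend output of the planning system
estimate = hand_estimate(dose_ref=1.0, d_ref=100.0, d=110.0, depth_cm=10.0)

difference = abs(black_box_result - estimate) / estimate
print(f"estimate {estimate:.3f}, black box {black_box_result:.3f}, "
      f"difference {difference:.1%}")
assert difference < 0.05, "independent check disagrees by more than 5%"
```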

On a research front, it's important to be doing the same things.
 
  • Like
Likes Jozape, Buzz Bloom, M Saad and 6 others
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests that figure out whether the program behaves as it should, how can they be sure the programs won't suffer from bugs and the like (malicious code included) in future releases? Are there any kinds of extensive tests performed on software that is generally used in branches of physics or other sciences involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.

Why trust anything? How do you know the brakes on your car won't fail after 1000 miles? And, when you fly, do you do a personal check of the aircraft's engine, guidance systems and everything?

Or, if you buy copper sulphate from someone, how do you know it really is copper sulphate? Or, if you buy a box of drugs, how do you know some of them aren't a placebo? Or, a different drug altogether?

In order to fully check the source code for a system, you would need to be relatively expert in the software technologies. Many systems may be an integration of several technologies, so no one person (even in the software development company) would be able to check it: you would need a team of people. Even then, source code may be inscrutable without all the software development facilities that were used to create it. In fact, from a software engineering perspective, starting with the source code would be a very inefficient way to verify a system.
 
  • Like
Likes RJLiberator, Samy_A, Redbelly98 and 2 others
For use in applications where safety is involved, there is usually a certification process that must be completed before the software can be used. @Choppy's radiation therapy example is a good illustration of that. But then the computer operating system and compilers must also be certified. They are not just trusted blindly.

In the case of scientific research that does not have safety consequences, there is no formal certification process. You should not be reckless about the software you pick to use. Don't use experimental versions unless you want to do a lot of testing that has nothing to do with your research. Unless you are doing something really unusual, there are probably well tested versions of software that you can use.
 
  • Like
Likes QuantumQuest and fluidistic
There is no "magic bullet" to guarantee valid code. Making bug-free software is an example of "defense-in-depth". There are software development process that should be followed, unit testing, integrated testing, code standards, code review processes, etc., etc., etc. There is a set of code standards, MISRA-C that tells you what you should do or not do in your code. There are several code analysis tools that examine code for risky practices, test coverage, etc. Even when all the processes and rules are followed, some bugs escape to the released software. Then it is up to the public and the developer to spot and fix the mistakes.
 
PeroK said:
Why trust anything? How do you know the brakes on your car won't fail after 1000 miles? And, when you fly, do you do a personal check of the aircraft's engine, guidance systems and everything?

Or, if you buy copper sulphate from someone, how do you know it really is copper sulphate? Or, if you buy a box of drugs, how do you know some of them aren't a placebo? Or, a different drug altogether?
When you land safely with the plane, you know that if there was a problem it did not matter for you, unlike the case of the output of a closed-source program, where you have no intuition about whether the results are fine or whether they are too low/high by, say, 0.8%. You don't necessarily get the feedback you'd get with a drug, a plane or copper sulphate.

In order to fully check the source code for a system, you would need to be relatively expert in the software technologies. Many systems may be an integration of several technologies, so no one person (even in the software development company) would be able to check it: you would need a team of people. Even then, source code may be inscrutable without all the software development facilities that were used to create it. In fact, from a software engineering perspective, starting with the source code would be a very inefficient way to verify a system.
Of course checking the full source code is most of the time impossible. But one does not generally use all the functionality of a complicated piece of software either. Say I use a program that fits X-ray diffractograms, and the program claims to be using "name_of_algorithm"'s algorithm, and I want to check out how exactly it's implemented. Or say the program claims to use the Scherrer equation to give the crystallite size but does not specify the value it uses for "K", the shape factor. In both cases it gets complicated to figure out what the program is really doing.
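To illustrate how much the undocumented shape factor matters (this is just the textbook Scherrer formula with made-up peak parameters, not what any particular program does): the computed size scales linearly with K, so the common choices K = 0.89 versus K = 0.94 already differ by about 5%.

```python
import math

# Scherrer equation: D = K * lambda / (beta * cos(theta))
# lambda: X-ray wavelength, beta: peak FWHM in radians, theta: Bragg angle.
# Peak parameters below are invented; the point is the K dependence.

wavelength_nm = 0.15406          # Cu K-alpha
beta_rad = math.radians(0.30)    # FWHM of 0.30 degrees
theta_rad = math.radians(20.0)   # Bragg angle

for K in (0.89, 0.94, 1.0):      # common choices of the shape factor
    D = K * wavelength_nm / (beta_rad * math.cos(theta_rad))
    print(f"K = {K:.2f} -> crystallite size D = {D:.1f} nm")
```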

I realize that when publishing a paper a scientist could specify that the values were calculated with "name_of_software_name_of_version", so that if one day someone realizes something was faulty with that software, the published results could either be corrected or discarded.
 
fluidistic said:
When you land safely with the plane, you know that if there was a problem it did not matter for you
And if you crash, you know there was a problem. Which is certainly a much worse outcome than a value in a publication that is off a bit (with a few exceptions, like studies related to the safety of systems like aircraft...).

But unlike the aircraft you use, you can check the software. Run it on test cases where you know the expected outcome. Sure, they don't cover everything, but if the software passes all test cases you can be quite confident that it works with your actual data as well.
This is routinely done for basically all software packages.
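A generic sketch of such a known-answer test (not tied to any specific package; the Gaussian model, noise level and tolerances are placeholders): synthesize data from known parameters, run the fitting routine, and check that the known values are recovered.

```python
import numpy as np
from scipy.optimize import curve_fit

# Known-answer test: synthesize data from known parameters, fit it,
# and check that the fit recovers those parameters.

def gaussian(x, amplitude, center, width):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

rng = np.random.default_rng(0)
true_params = (10.0, 3.0, 0.5)
x = np.linspace(0, 6, 200)
y = gaussian(x, *true_params) + rng.normal(0, 0.1, x.size)

fitted, _ = curve_fit(gaussian, x, y, p0=(5.0, 2.0, 1.0))

for name, true, fit in zip(("amplitude", "center", "width"), true_params, fitted):
    assert abs(fit - true) < 0.2, f"{name} off by more than tolerance"
    print(f"{name}: true {true:.2f}, fitted {fit:.2f}")
```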
 
  • Like
Likes fluidistic
mfb said:
Sure, they don't cover everything, but if the software passes all test cases you can be quite confident that it works with your actual data as well.
This is routinely done for basically all software packages.
That's what I wanted to know and that's reassuring. If information about which tests have been performed for which program and which version were available to the public, that would be great.
 
  • #10
One way to test it is to develop a test suite. However, even then it's possible that a bug would slip through.

If you recall, there was the famous Pentium bug that occurred under specific circumstances:

https://en.wikipedia.org/wiki/Pentium_FDIV_bug

It would appear to be a software bug but was in fact hardware-related.
 
  • #11
fluidistic said:
That's what I wanted to know and that's reassuring. If information about which tests have been performed for which program and which version were available to the public, that would be great.
Publications often have a limited size; you cannot write up every little detail.
 
  • #12
Popular software products often have web sites where bugs are reported and discussed.
 
  • Like
Likes fluidistic
  • #13
jedishrfu said:
If you recall, there was the famous Pentium bug that occurred under specific circumstances

Ah yes. They call it "floating" point for a reason. :)
 
  • Like
Likes Stephanus
  • #14
I don't want to be paranoid, but there is a huge difference between testing for accidental programming errors and testing for intentional malicious code. Intentionally malicious code can be programmed to only show up under certain circumstances that may not ever occur during testing. To me, that's a big difference between open source and closed source. In the case of open source, you can actually study the code to see if there is peculiar logic that would only show up in certain circumstances.
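A deliberately contrived sketch of the problem (purely hypothetical code, not from any real product): the function below passes any plausible test and only misbehaves for one obscure input that the attacker chose. With the source in hand, the odd conditional stands out on inspection; in a closed binary the same logic is invisible to black-box testing.

```python
# Contrived example of logic that only misbehaves under a rare,
# attacker-chosen condition. Any plausible test suite would pass.

def scale_measurement(value, run_id):
    """Pretend analysis step: multiply by a calibration factor."""
    factor = 1.25
    # Hidden trigger: skew results by 1% only for one specific run ID.
    if run_id == 20240229:
        factor *= 1.01
    return value * factor

# A typical test never hits the trigger, so it passes:
assert scale_measurement(4.0, run_id=12345) == 5.0
```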
 
  • Like
Likes Buzz Bloom and FactChecker
  • #15
stevendaryl said:
I don't want to be paranoid, but there is a huge difference between testing for accidental programming errors and testing for intentional malicious code. Intentionally malicious code can be programmed to only show up under certain circumstances that may not ever occur during testing. To me, that's a big difference between open source and closed source. In the case of open source, you can actually study the code to see if there is peculiar logic that would only show up in certain circumstances.

Paranoia may be justified in the case of software where there might be a motivation to sometimes give the wrong answer (for example, the code that counts electronic votes in an election).
 
  • Like
Likes Buzz Bloom
  • #16
In the end software development is just another engineering discipline.
As with any other, the first version of something usually does have unexpected bugs, even though there may have been a lot of time dedicated to testing and QA.
As the product matures, later versions become more reliable until there is no longer a significant number of bug reports, and those that remain are frequently not actually bugs but operator or input-data errors.
Even those get ironed out eventually by 'defensive' programming adjustments, which detect and report improper input and so on before the program proceeds.
 
  • #17
This is a worthy topic. I think perhaps the focus on errors is too narrow. It is even too narrow to focus on closed software, or to focus on software at all. There are many ways for things to go wrong or right. Machines add some new risks and reduce other risks.

May I recommend the book "Computer Related Risks" by Peter G. Neumann. Using numerous case histories, the book illustrates the nature and number of risks involved when humans and machines interact. It was written way back in 1994, but it is not at all dated. Many of the mistakes committed in decades past will be repeated in decades future. If you do read it, I expect that you will see that a much broader view of risks is appropriate.
 
  • #18
The VW experience is another example of software giving maliciously incorrect answers during emissions tests.
 
  • #19
stevendaryl said:
Paranoia may be justified in the case of software where there might be a motivation to sometimes give the wrong answer (for example, the code that counts electronic votes in an election).
How would you know that an election system was actually running the open source code that someone claimed it was?
 
  • Like
Likes harborsparrow
  • #20
PeroK said:
How would you know that an election system was actually running the open source code that someone claimed it was?
An individual voter may not know that, but if it happened it would be fraud and it would have to be perpetrated on a massive scale to be effective.
Such a conspiracy theory falls down at the first hurdle, like the 'moon landing hoax' theory.
The conspiracy would need to involve many people, hundreds at least, keeping silent about something they knew about.
So it's realistically not possible.
 
  • Like
Likes 1oldman2
  • #21
PeroK said:
How would you know that an election system was actually running the open source code that someone claimed it was?

I didn't claim that open source would solve everything.
 
  • #22
stevendaryl said:
I don't want to be paranoid, but there is a huge difference between testing for accidental programming errors and testing for intentional malicious code. Intentionally malicious code can be programmed to only show up under certain circumstances that may not ever occur during testing. To me, that's a big difference between open source and closed source. In the case of open source, you can actually study the code to see if there is peculiar logic that would only show up in certain circumstances.
Good point. Of course, the original code might just contain vulnerabilities to malicious attack. That is yet another problem, since the code that was tested may not yet contain the malicious code. Some software analysis products are available to scan code for vulnerabilities. One is Coverity Static Code Analysis. I don't have much experience with it, so I don't know how well it works. I can say that it can find a lot of vulnerabilities and bad practices in code. But I don't know how much is left that it doesn't find.
 
  • #23
stevendaryl said:
I didn't claim that open source would solve everything.
Yes, I know. I didn't intend it like that. Election software is a good example because it's not clear who needs to trust whom.
 
  • Like
Likes stevendaryl
  • #24
jedishrfu said:
One way to test it is to develop a test suite. However, even then it's possible that a bug would slip through.
Yes, and to complicate matters, bugs can also be found in the test suites themselves.
 
  • #25
rootone said:
An individual voter may not know that, but if it happened it would be fraud and it would have to be perpetrated on a massive scale to be effective.
There was a surprisingly small number of votes involved in Florida 2004.

Is there a lot of closed-source software specifically produced for science? I have worked with both open-source and closed-source software, but the latter only as standard programs. I can't imagine the programmers of e.g. Matlab introducing malicious code to mess around with some specific particle physics publications. How would you do that (without even knowing if and where Matlab would be used), and what would be the point?
Jaeusm said:
Yes, and to complicate matters, bugs can also be found in the test suites themselves.
Unless two bugs cancel each other, you still see that something needs more attention.
 
  • Like
Likes M Saad
  • #26
rootone said:
An individual voter may not know that, but if it happened it would be fraud and it would have to be perpetrated on a massive scale to be effective.
Such a conspiracy theory falls down at the first hurdle, like the 'moon landing hoax' theory.
The conspiracy would need to involve many people, hundreds at least, keeping silent about something they knew about.
So it's realistically not possible.

It wouldn't take a major conspiracy to introduce malicious code at an appropriate point in the release cycle. It's approximately the same as a software vendor selling malicious code in the first place. It just depends on who you trust.

On a less dramatic note, many system support teams, in my experience, insist on getting the source code and recompiling it in every environment, thereby introducing a major uncertainty about whether the live system is the same as the one that was tested.

From my experience, more problems in a live system are caused by environmental and configuration issues than by traditional software bugs.
 
  • #27
stevendaryl said:
I don't want to be paranoid, but there is a huge difference between testing for accidental programming errors and testing for intentional malicious code. Intentionally malicious code can be programmed to only show up under certain circumstances that may not ever occur during testing. To me, that's a big difference between open source and closed source. In the case of open source, you can actually study the code to see if there is peculiar logic that would only show up in certain circumstances.
For mission critical and life safety systems, part of the design includes defending against malicious attacks. Moreover, you need to have traceability from the source code and other source components to the final system images. And you need to have controls in place that guarantee that what is being used in the manufacturing process is exactly what was tested during the release process.

As far as source code is concerned, yes, every line of code is peer reviewed. And, as was mentioned earlier, there are often standards such as MISRA that need to be followed. Those standards do two things: they minimize the influence of a single bug, and they make the code easier to review. Also, there are tools that scan the source code (static software tests) to report violations of MISRA standards.

There are also several layers of testing. First, there is modular black-box testing where individual software modules are tested by software programs that are developed based on the documented design for the target module. Then there is white box testing, where every line of code is checked - and there are "code coverage tools" for measuring how thorough this white box testing is. Any line of code that is not covered by a test needs to be examined so that it is understood why it cannot be directly tested.

Then, there is integration, system testing, and field testing. In each case, the test procedure is developed and checked to determine whether the tests are comprehensive.

Finally, to the extent possible, the system is designed to be fault tolerant. Commonly, there are two separate software systems - each designed by separate software teams - so if one fails, there is a backup. And the system itself is commonly designed so that the mechanics limit the damage that can be done and afford a manual override.
 
  • Like
Likes M Saad, mfb and FactChecker
  • #28
PeroK said:
Yes, I know. I didn't intend it like that. Election software is a good example because it's not clear who needs to trust whom.
With any system there are "stakeholders" who determine the requirements and audit the testing - and in many cases, fund the development process. In the case of an election system, the corporate entity that is funding the project is the first stakeholder. They not only want the system to work, but should want the testing to be auditable. Their customers, mostly states and municipalities, are also stakeholders. For their money, they are going to want some reason to be confident that the systems work and are hardened against fraud.
 
  • #29
fluidistic said:
When you land safely with the plane, you know that if there was a problem it did not matter for you, unlike the case of the output of a closed-source program, where you have no intuition about whether the results are fine or whether they are too low/high by, say, 0.8%.
When you land safely with the plane, you don't know so much. You know the software worked on that flight, but not how it will behave on another flight under different circumstances. For example, Pitot tube freezing (which may or may not happen, depending on a large number of other variables) has sometimes resulted in dangerous software behavior, and at least once it was instrumental (together with human error/disorientation) in a major air disaster... with an aircraft that had otherwise flown over 2,600 times without any relevant trouble. (To make things worse, the problem was known and studied, and there were even procedures in place to handle it, but it hadn't been thoroughly corrected for a variety of reasons that would take too long to explain here.)

Advanced aviation software is as closed and proprietary as it gets; Boeing, Airbus and the like are not going to show you their own or their suppliers' industrial secrets. But I don't think an open-source approach would improve things very much. First, there are not many people able to properly evaluate advanced, model-specific avionics code under realistic conditions... maybe a few major airlines would be able to if they were allowed to and decided to spend their money doing it, but that's all (the real method is notifying the manufacturer about any perceived glitch). I'd say this is applicable to other highly specialized industries like nuclear power plants, refineries, etc. Heavy testing, redundant systems and certification (and re-certification as needed) are the way to go. And well... we don't tend to have many nuclear disasters, burning refineries or even software-caused air disasters. They can happen, yes. But it's highly improbable, even in "analog" real-life situations with huge numbers of not-so-predictable interacting variables.

I'm not sure how this applies to purely scientific fields, but I wouldn't be surprised if we found quite a few analogies.
 
Last edited:
  • Like
Likes PeroK
  • #30
A few satellites and space probes were lost due to software issues. Examples:
Mars Climate Orbiter had a missing conversion between imperial units and SI.
The four "Cluster" spacecraft (designed to measure the magnetosphere of Earth) were lost due to an integer overflow in the rocket.
CryoSat (designed to monitor polar ice) was lost due to an unspecified software bug in the rocket.
Galaxy X (whatever that was supposed to do) was lost due to a software bug in the rocket's control of oscillations.
STEREO-B's problem (sun observation) is still unclear.
Various others failed for unknown reasons, and you cannot just go there and have a look...

Wikipedia has a list
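For the Mars Climate Orbiter case, the root cause was an interface mismatch between pound-force seconds and newton seconds (a factor of about 4.45). A toy sketch of that class of bug (the function names and numbers are invented for illustration):

```python
# Toy illustration of the Mars Climate Orbiter class of bug: one component
# reports impulse in pound-force seconds, the consumer assumes newton seconds.

LBF_TO_NEWTON = 4.4482216153  # conversion factor, 1 lbf in newtons

def thruster_impulse_lbf_s():
    """Pretend vendor routine: returns impulse in lbf*s."""
    return 100.0

def update_trajectory(impulse_newton_s):
    """Pretend consumer: expects impulse in N*s."""
    return impulse_newton_s  # trajectory model would go here

# Buggy call: units silently mismatched, off by a factor of ~4.45
bad = update_trajectory(thruster_impulse_lbf_s())

# Correct call: convert at the interface
good = update_trajectory(thruster_impulse_lbf_s() * LBF_TO_NEWTON)

print(f"without conversion: {bad:.1f}, with conversion: {good:.1f} N*s")
```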
 
Last edited:
  • #33
There was also the famous lottery scam perpetrated by some officials of the lottery, allowing some relatives to "win big" but not too big.

https://en.wikipedia.org/wiki/Hot_Lotto_fraud_scandal

http://www.nydailynews.com/news/national/lottery-fixing-scandal-spreads-nationwide-article-1.2470819

While not explicitly mentioned, there had to have been some sort of malicious software involved:

http://www.engadget.com/2015/12/19/lotto-hack/

http://arstechnica.com/tech-policy/...ed-lottery-computers-to-score-winning-ticket/

Another story of when it pays to be a software "tester":

http://www.wired.com/2013/05/game-king/

Lastly, Numb3rs had a great episode (season 05, episode 15) about how some hacks could influence jury selection in favor of the defendant. While it was only a story, it's a very plausible one, especially as we rely on software for all sorts of tasks; we can never know how it will be hacked until it is.
 
  • Like
Likes mfb
  • #34
What is the point of posting lists of software failures? Obviously, there are also long lists of failures with non-software causes. What does that have to do with the OP?

I picture the case of software involved with delivery of the orders to begin global thermonuclear war. Surely that must have the most severe consequences of any possible failure. I'm sure there are both humans and machines in that loop, but nothing can ever be perfect.

Would you open-source or close-source it?
Should we trust it? If not, then what?
Should we distrust it? If not, then what?
At what point does adding more resources to perfect software (or anything) become counterproductive?
 
  • #35
.Scott said:
With any system there are "stakeholders" who determine the requirements and audit the testing - and in many cases, fund the development process.
I don't think this is necessarily true. Some software can have a very informal development history. You would need to be very loose with the terminology to make that statement about all software.
 
  • Like
Likes RaulTheUCSCSlug
  • #36
FactChecker said:
I don't think this is necessarily true. Some software can have a very informal development history. You would need to be very loose with the terminology to make that statement about all software.
Indeed. My experience has been that the formal process is the exception rather than the rule.
 
  • Like
Likes M Saad
  • #37
stevendaryl said:
I didn't claim that open source would solve everything.
But it seems that open source allows users to sometimes find these errors before running the program, or even to fix them. So it does solve some things, but open source can then be more buggy depending on the support from the company, right?
 
  • #38
RaulTheUCSCSlug said:
open source allows users to sometimes find these errors before running the program, or even to fix them.

Or for malicious users to use it to their advantage. I see no foundation to the presumption that open source volunteers all have good intentions.
 
  • #39
fluidistic said:
I wonder how scientists can trust closed-source programs/software. How can they be sure there aren't bugs that return a wrong output every now and then?
As others said, they can't. Open-source software will have bugs too, and even in-house custom software developed for a large collaboration (like in particle physics) will have bugs. You just have to assume that these bugs are rare, and knowing that the bugs possibly exist, be on the lookout for possible problems.

RaulTheUCSCSlug said:
But it seems that open source allows users to sometimes find these errors before running the program, or even to fix them.
This might be true in principle, but few users, if any, are going to spend the time doing an extensive code review before using open-source software. If you run into strange behavior by some software, then you might go look into the code to see if there's something wrong. This kind of transparency is one of the main advantages of open-source software.

You may recall the bug in the Pentium. The problem wasn't so much that the bug existed. Any chip that complex is going to have bugs. It was Intel not being transparent about the existence of the bug. Instead, Professor Thomas R. Nicely had to waste a few months tracking down why his software was giving inconsistent results, only to discover Intel had already known about it.
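For reference, one widely circulated reproduction of the FDIV bug was the ratio 4195835/3145727: a correct FPU gives about 1.333820449, while the flawed Pentiums returned about 1.333739069, so the residual x - (x/y)*y came out near 256 instead of near 0. A quick sanity check of the same numbers on a modern machine (nothing special, just ordinary division):

```python
# Widely circulated test case for the Pentium FDIV bug.
# A correct FPU gives ~1.333820449; the flawed chips returned ~1.333739069.

numerator, denominator = 4195835.0, 3145727.0
ratio = numerator / denominator
print(f"{numerator:.0f} / {denominator:.0f} = {ratio:.9f}")

# Equivalent identity that also exposed the bug: x - (x/y)*y should be ~0.
residual = numerator - (numerator / denominator) * denominator
print(f"residual: {residual}")  # ~0 on a correct FPU, ~256 on a flawed Pentium
```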
 
  • Like
Likes jasonRF, M Saad and fluidistic
  • #40
I think the real strengths of open source are that you always have the option to make (or pay someone else to make) changes if you need, even if the main devs aren't interested or have gone bust, and that you can never be locked in by proprietary file or communication formats. That means that you can never find yourself with a piece of kit that you can't keep running anymore because of software issues. It might be expensive to take on code maintenance - but at least you have the option and aren't stuck trying to reverse engineer a proprietary black box.
 
  • Like
Likes M Saad
  • #41
FactChecker said:
I don't think this is necessarily true. Some software can have a very informal development history. You would need to be very loose with the terminology to make that statement about all software.
The original meaning of "hacking" was programming for the fun of programming. A Chech coworker of mine called it "happy engineering". And it is certainly possible for someone working out of their garage to create a useful product - as a solo effort.

I should have been clear that I was referring to more serious efforts - such as the question about an election system that I was responding to.
In general, the more complex the system and the more people involved, the more needs to be written down.
 
  • Like
Likes M Saad
  • #42
.Scott said:
The original meaning of "hacking" was programming for the fun of programming. A Chech coworker of mine called it "happy engineering". And it is certainly possible for someone working out of their garage to create a useful product - as a solo effort.

I should have been clear that I was referring to more serious efforts - such as the question about an election system that I was responding to.
In general, the more complex the system and the more people involved, the more needs to be written down.
A lot of code is initially developed informally. As the code evolves, it becomes larger and more useful. Then somebody wants to use it in a serious way and either doesn't know or doesn't care that it hasn't been fully validated. Unless there is enough time and money to refactor the code, it is likely to be used without a formal development process.
 
  • Like
Likes M Saad
  • #43
Even if the software is OK, it may be used incorrectly:
Review of the Use of Statistics in Infection and Immunity
"Typically, at least half of the published scientific articles that use statistical methods contain statistical errors. Common errors include failing to document the statistical methods used or using an improper method to test a statistical hypothesis.
...
The most common analysis errors are failure to adjust or account for multiple comparisons (27 studies), reporting a conclusion based on observation without conducting a statistical test (20 studies), and use of statistical tests that assume a normal distribution on data that follow a skewed distribution (at least 11 studies).
...
When variables are log transformed and analysis is performed on the transformed variables, the antilog of the result is often calculated to obtain the geometric mean. When the geometric mean is reported, it is not appropriate to report the antilog of the standard error of the mean of the logged data as a measure of variability.:wideeyed:
...
In summary, while most of the statistics reported in Infection and Immunity are fairly straightforward comparisons of treatment groups, even these simple comparisons are often analyzed or reported incorrectly. "​

Of course, physicists know better.​
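To make the log-transform point concrete (a generic sketch with invented data, not taken from the cited review): the geometric mean is the antilog of the mean of the logs, while the antilog of the SEM of the logs is only a dimensionless multiplicative factor; a more defensible report is the back-transformed interval exp(mean ± SEM).

```python
import math
import statistics

# Skewed, positive-valued data (invented for illustration).
data = [1.2, 2.5, 3.1, 4.8, 7.9, 15.3, 40.2]

logs = [math.log(x) for x in data]
mean_log = statistics.mean(logs)
sem_log = statistics.stdev(logs) / math.sqrt(len(logs))

geometric_mean = math.exp(mean_log)
# Back-transforming the interval gives an asymmetric range around the
# geometric mean; reporting exp(sem_log) alone as "the variability" is
# the mistake the review describes.
lower, upper = math.exp(mean_log - sem_log), math.exp(mean_log + sem_log)

print(f"geometric mean: {geometric_mean:.2f}")
print(f"back-transformed mean +/- SEM interval: [{lower:.2f}, {upper:.2f}]")
print(f"antilog of SEM alone (a unitless factor, not a spread): "
      f"{math.exp(sem_log):.2f}")
```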
 
  • Like
Likes M Saad, Dale and mfb
  • #44
FactChecker said:
A lot of code is initially developed informally. As the code evolves, it becomes larger and more useful. Then somebody wants to use it in a serious way and either doesn't know or doesn't care that it hasn't been fully validated. Unless there is enough time and money to refactor the code, it is likely to be used without a formal development process.
That would be an example of code that shouldn't be trusted - meaning, it shouldn't even be installed on a critical computer system. For example, most would consider Adobe Reader as non-critical software. What's the worst that could happen - it crashes and you can't read a document? But a few years ago it provided the entry point for a zero-day computer virus - a Trojan that was used to install key-loggers and all sorts of other nasty things.
 
  • Like
Likes M Saad
  • #45
.Scott said:
That would be an example of code that shouldn't be trusted - meaning, it shouldn't even be installed on a critical computer system. For example, most would consider Adobe Reader as non-critical software.

What about the scientific calculator on the scientist's desk; would you consider that critical? A wrong calculation could mislead the scientist.

Would you extend validation requirements down to the level of devices costing only a few dollars or a few pennies each, or would you trust certain manufacturers based only on their size and reputation?

Or perhaps you mean that trivial devices can't be critical?
 
  • #46
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests that figure out whether the program behaves as it should, how can they be sure the programs won't suffer from bugs and the like (malicious code included) in future releases?
The tests are repeated on every release. Actually, it tends to be even more frequent than that: every code commit. We use automated software to test the business logic of all of our software, unit tests to check the accuracy of individual methods, and external checkers to verify not only that the output is correct, but also the algorithm used to produce it.

Engineers test each other's code in a process called code review, which includes the tests, and they're really good at finding all of the weird cases. For example, if I were checking float add(float, float), I'd write tests for all combinations of adding -inf, -2.5, -1, 0, 1, 2.5, inf and NaN, as well as checking two numbers that I know will overflow the float.
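A minimal sketch of that kind of edge-case test, written here in Python as an analogue of the C float add(float, float) example (the add function is just a stand-in for the code under test):

```python
import math
import itertools
import unittest

def add(a, b):
    """Stand-in for the function under test."""
    return a + b

class TestAdd(unittest.TestCase):
    SPECIAL_VALUES = [-math.inf, -2.5, -1.0, 0.0, 1.0, 2.5, math.inf, math.nan]

    def test_special_value_combinations(self):
        for a, b in itertools.product(self.SPECIAL_VALUES, repeat=2):
            result = add(a, b)
            if math.isnan(a) or math.isnan(b):
                self.assertTrue(math.isnan(result))
            elif (a == math.inf and b == -math.inf) or (a == -math.inf and b == math.inf):
                self.assertTrue(math.isnan(result))  # inf + (-inf) is NaN
            elif math.isinf(a) or math.isinf(b):
                self.assertTrue(math.isinf(result))
            else:
                self.assertAlmostEqual(result, a + b)

    def test_overflow_to_infinity(self):
        # Two finite doubles whose sum overflows the representable range.
        self.assertEqual(add(1.7e308, 1.7e308), math.inf)

if __name__ == "__main__":
    unittest.main()
```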
 
  • #47
anorlunda said:
What about the scientific calculator on the scientist's desk; would you consider that critical? A wrong calculation could mislead the scientist.

Would you extend validation requirements down to the level of devices costing only a few dollars or a few pennies each, or would you trust certain manufacturers based only on their size and reputation?

Or perhaps you mean that trivial devices can't be critical?
First, let's talk about "critical". I mentioned "mission critical" before - and perhaps I abbreviated it as simply "critical". Generally, "mission critical" refers to components that must perform correctly in order to successfully complete a mission. And by mission, we are talking about thinks like allowing the LHC to work, allowing an Aircraft Carrier to navigate, allowing a Martian lander to explore (or allowing the Mars Climate Orbiter to orbit). Even if lives are not at stake (which they may), they involve major portions of hundreds of careers - or more.

Software development tools (including calculators) are certainly very important and need to be checked - and in some cases certified.

Physical calculators make for odd examples, because they are very unreliable. Not because they have programming defects - but because they rely on humans to key information in and transcribe the result back. For example, I would be astonished if critical LHC design issues were based on the results from desktop calculators.

On the other hand, a common spreadsheet program used in a common way on a trusted system is very reliable. With millions of global users exercising the application week after week, errors tend to be found and corrected quickly. And, of course, the spreadsheet program leaves an auditable artifact behind - the spreadsheet file.

Also, external calculations are not usually an Achilles heel. For example, calculations are often made in the development of test procedures - but a faulty computation would likely cause the program to fail and subsequent diagnostics would lead to the fault in the test.

Regarding manufacturers: Of course, it is certainly possible for a software tool manufacturer to be disqualified on the basis of reputation. But the focus is usually on the product - and the methods that the manufacturer uses to test and certify the tools - or the system developer's ability to check the tool before committing to using it. For example, putting a Windows XP operating system in a mission critical system is pretty sketchy. But using a stripped-down Windows XPe with the right test tools could make it a useful component in a system with other safeguards. But that wouldn't be good enough for a consumer automobile safety system - then you would need a certified compiler, certified operating system, etc.
 
  • #48
fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Assuming they use some kind of extensive tests that figure out whether the program behaves as it should, how can they be sure the programs won't suffer from bugs and the like (malicious code included) in future releases? Are there any kinds of extensive tests performed on software that is generally used in branches of physics or other sciences involving data analysis? Blindly trusting a closed-source program seems to go against the scientific mindset to me.
To be fair, systems like that do exist, and to answer your question, a lot of research is done on open-source systems on a massive scale (see http://www.linux.com/news/enterprise/high-performance/147-high-performance/666669-94-percent-of-the-worlds-top-500-supercomputers-run-linux-/). However, to answer your question about trusting proprietary software, the math and physics required to code this research (simulation) software is extremely complex, and those coders are paid a ton to make sure it works right when you plug numbers into it. You still have to know what you are punching in, though. Source: a solar physicist at my university.
 
Last edited by a moderator:
  • #49
.Scott said:
First, let's talk about "critical". I mentioned "mission critical" before - and perhaps I abbreviated it as simply "critical". Generally, "mission critical" refers to components that must perform correctly in order to successfully complete a mission. And by mission, we are talking about thinks like allowing the LHC to work, allowing an Aircraft Carrier to navigate, allowing a Martian lander to explore (or allowing the Mars Climate Orbiter to orbit). Even if lives are not at stake (which they may), they involve major portions of hundreds of careers - or more.

A lowly chip in a single security badge could enable a saboteur to bring all that crashing down. The cliché is "The bigger they are, the harder they fall."

I think you're being pretentious in dismissing my point that software is all around us, in the mundane and trivial, as well as in the grand and elaborate. It is extreme and foolish to pretend that the mundane and trivial carry no risk to the mission.

At the other extreme, I'm currently working on an article about power grid cyber security. On that subject, the public and the politicians believe that every mundane digital device owned by a power company could be hacked to bring the end of civilization.

Neither extreme is valid.

If anyone wants to make sweeping general statements about software, they should encompass the whole universe of software.
 
  • #50
anorlunda said:
A lowly chip in a single security badge could enable a saboteur to bring all that crashing down. The cliché is "The bigger they are, the harder they fall."

I think you're being pretentious in dismissing my point that software is all around us, in the mundane and trivial, as well as in the grand and elaborate. It is extreme and foolish to pretend that the mundane and trivial carry no risk to the mission.
I'm not sure where I pretended that the mundane and trivial carried no risk. Perhaps you didn't catch my allusion to the mundane and trivial that cost a very significant Mars mission - or my mention of the Adobe zero-day issue.
anorlunda said:
At the other extreme. I'm currently working on an article about power grid cyber security. On that subject, the public and the politicians believe that every mundane digital device owned by a power company could be hacked to bring the end of civilization.
Actually, I think most people have no opinion on the matter. On the other hand, I have worked with SCADA systems and was not completely satisfied with how well they were protected. And I think the Iranians were not fully satisfied with how well the SCADA system at their uranium enrichment facility was protected.
 