fluidistic said:
I wonder how scientists can trust closed-source programs/software.
How can they be sure there aren't bugs that return a wrong output every now and then? Even assuming they use some kind of extensive test suite to verify that the program behaves as it should, how can they be sure the program won't suffer from bugs (malicious code included) in future releases? Are extensive tests like this performed on software commonly used in physics or any other science involving data analysis? Blindly trusting a closed-source program seems contrary to the scientific mindset to me.
My collaborators and I tend to wring out all our analysis software thoroughly, whether it is closed source, open source, or written in house.
One standard operating procedure is to use the code on a wide array of inputs with known outputs.
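A minimal sketch of that known-output procedure, with made-up test cases (the routine and values here are illustrative, not from any particular package): run the analysis code on inputs where the answer is known exactly, and refuse to trust it on real data until every check passes.

```python
import math

def fit_slope(xs, ys):
    """Ordinary least-squares slope; stands in for the routine under test."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Inputs with analytically known outputs.
known_cases = [
    ([0, 1, 2, 3], [0, 2, 4, 6],  2.0),   # y = 2x      -> slope  2
    ([0, 1, 2, 3], [5, 5, 5, 5],  0.0),   # constant    -> slope  0
    ([0, 1, 2],    [1, 0, -1],   -1.0),   # y = 1 - x   -> slope -1
]

for xs, ys, expected in known_cases:
    got = fit_slope(xs, ys)
    assert math.isclose(got, expected, abs_tol=1e-12), (xs, ys, got)
print("all known-output checks passed")
```

The point is not that these toy cases are hard, but that a routine failing even one exactly solvable case is disqualified before it ever touches real data.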
Another is to repeat the analysis independently with different codes. For example, one collaborator might use MS Excel for a spreadsheet analysis, while another uses the LibreOffice spreadsheet. Or one may use a commercial stats package, while another uses custom software written in C or R.
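The independent-codes idea can be sketched in a few lines (the data values are made up for illustration): compute the same statistic through two code paths that share nothing but the mathematical definition, and flag any disagreement.

```python
import statistics

# Example data; stands in for a real measurement series.
data = [4.1, 3.9, 4.0, 4.2, 3.8, 4.05]

# Path 1: a library implementation (Python's statistics module,
# sample standard deviation with n-1 in the denominator).
sd_library = statistics.stdev(data)

# Path 2: an independent hand-written implementation of the same formula.
n = len(data)
mean = sum(data) / n
sd_manual = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5

# Agreement to floating-point precision raises confidence;
# disagreement flags a bug in one path or the other.
assert abs(sd_library - sd_manual) < 1e-9
print(f"library: {sd_library:.6f}  manual: {sd_manual:.6f}")
```

The same pattern scales up: two collaborators, two tools, one number that must match at the end.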
I've always preferred approaches that store the data in raw form and then proceed with the analysis from that point in such a way that several different independent analysis paths remain possible.
The whole "repeatability" thing in experimental science not only provides an important buffer against errors in the original experiments, it also provides an important buffer against analysis errors.