Nick Levinson
I'm an outsider (and I don't know how to make this much more concise with the same content), but physics is the only field I know of in which the same very thoughtful and knowledgeable professor simultaneously holds mutually exclusive positions without drawing even a fine distinction between them, and multiple peers do the same. I've known one or more to say it's because the mathematics requires this result. We will inevitably have to work out the contradictions eventually, and I wonder whether the math as executed is one weak link, especially since the math is now so advanced and so computer-intensive that it is probably beyond being checked on paper. That capability, whether exercised or not, is needed to get around the problem of computer program source code being closed. I understand mathematical specialization has reached the point that when someone proposed a solution to an unsolved problem, it took four years for anyone else to evaluate it and agree. I doubt that anyone today could do all of the raw math completely on paper in a way that any other mathematician of only moderate skill could read and follow, i.e., writing it out less elegantly. They would know how, but there would be too much math to devote the time to writing it out, or to typing it into a set of plain text files. Any scientist who could do that probably has other research to do, for which we are waiting. So, basically, there's no one.
The pure math is not my concern, but computer-ready math is often different because of computer limitations. For example, a formula in a computer inevitably has a length limit, while a formula outside a computer need not; so if the limit constrains a particular formula, it must be split into multiple formulae, and those formulae must then be recombined without introducing a mathematical error. Another example comes from the interaction of a math program with the rest of its computing environment: errors might be introduced by the environment and have to be discovered, a fresh risk whenever any hardware or software gets a new version, and there are usually many associated software and hardware components with separate versions and probably separate authors. Even if a high-end math program keeps all of its mathematical processing internal without handing any off to Microsoft Windows, thereby eliminating one set of inspection problems, other interactions remain. Under the IEEE 754 standard, double precision carries only about 15-17 significant decimal digits, plenty for most money calculations but likely not for black hole studies.
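To make the precision point concrete, here is a small Python sketch (not taken from any particular physics package) showing the fixed double-precision limit and how an arbitrary-precision library sidesteps it; the choice of `decimal` here is just one illustration:

```python
from decimal import Decimal, getcontext

# Binary floating point cannot represent 0.1 exactly, so tiny errors
# appear immediately:
assert 0.1 + 0.2 != 0.3          # the float sum is 0.30000000000000004

# Doubles carry roughly 15-17 significant decimal digits; near 1e17 the
# gap between adjacent representable doubles is 16, so adding 1 is lost:
assert 1e17 + 1.0 == 1e17

# An arbitrary-precision library removes the fixed limit (at a speed cost):
getcontext().prec = 50           # 50 significant decimal digits
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```

Whether a given math program uses plain doubles, extended precision, or exact arithmetic internally is exactly the kind of detail a closed black box hides.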
I gather physicists and other scientists depend almost entirely on computers for their major calculations and, critically, that the main programs for the purpose are proprietary, with closed source code, and thus not completely transparent. The programs operate as a collection of black boxes: you can see your input and get the output, but exactly how input is transformed into output is hidden. You can check that individual functions produce correct outputs on specific example inputs, but I'm not sure you can test all of the functions by the methods required for proofs, i.e., methods in which examples are not probative enough and abstraction is required, or that you can test holistically rather than just reductively. That matters if an error still hidden despite the examples tested gets compounded with another error as multiple black boxes are applied to one problem.
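The gap between example-based testing and proof can be sketched in a few lines. The function below, `fermat_prime`, is a hypothetical stand-in for any hidden routine: it tests primality via Fermat's little theorem with base 2, which happens to give the right answer on every spot check shown, yet is wrong in general:

```python
def fermat_prime(n: int) -> bool:
    """Claims n is prime if 2^(n-1) is congruent to 1 (mod n)."""
    return n > 1 and pow(2, n - 1, n) == 1

# Example-based testing: every case here agrees with true primality...
for n, expected in [(3, True), (4, False), (5, True), (97, True), (100, False)]:
    assert fermat_prime(n) == expected

# ...yet the function is wrong: 341 = 11 * 31 is composite but passes,
# because 341 is a Fermat pseudoprime to base 2.
assert fermat_prime(341)   # a false positive no small test suite caught
```

A finite set of examples can only ever show consistency with those examples; ruling out a hidden case like 341 requires reasoning about the function in the abstract, which is exactly what a closed black box prevents outsiders from doing.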
Doubtless the top computer-program firms have highly qualified mathematicians test and correct their work, but doubtless that is also limited by trade secrecy and budget. That model falls far short of the peer review used for publishing original research in refereed journals, and of the effect of publication after peer review, when anyone can read the journals and report a problem they find, even if the reporter lacks qualifications and is unpaid. With proprietary closed-source software, and especially firmware, even a customer who paid for it usually cannot examine it: they usually don't know how to parse the code (especially code wired into a hardware chip) and may (as with Windows) be legally barred from reverse-engineering, decompiling, or disassembling it. Some software licenses even prohibit benchmarking, although I don't know whether that applies to software in this context.
With open-source software (such as Linux or FreeBSD), the source code is available to anyone and can be compiled or interpreted with your own compiler or interpreter on your own computer into the object code forming the executable program, so you know that the source code is the intended source for a given program. Even the recent public debate over privacy, prompted by revelations about the work of the National Security Agency (NSA), did not lead to much discussion that I could find on the security of SELinux, an NSA security-enhancement package offered for Linux to anyone who wants to turn it on. Because SELinux is offered within the open-source paradigm, confidence is apparently maintained, even though SELinux alone reportedly runs to over 100,000 lines of code. Writing good open-source software for this kind of math is a huge project, and if I were allocating resources I would skimp on other features, for instance by writing it for only one common desktop platform and leaving most user-interface design to add-ons by other people.
The key question: could there be one or more black boxes in computer math that have not been fully proven, in public, across all relevant versions in context?
(I already read the threads at https://www.physicsforums.com/threads/mathematic-vs-maple.181000/ and https://www.physicsforums.com/threads/the-best-software-for-math-and-physics.685877/ and one of them acknowledges periodic bugs. My question is more abstract.)