Math possibly in need of all-open-source software for testin

  • Thread starter Nick Levinson
  • Tags: Software
In summary, the individual is an outsider with limited knowledge about physics, but has observed that in this field, it is common for knowledgeable professors and peers to hold contradictory positions without any clear distinction. They question whether this may be due to flaws in the advanced and computer-intensive mathematics used in the field, which may also be hindered by closed-source code and limited transparency. The individual suggests that open-source software may offer more confidence and potential for identifying and addressing any potential errors. Ultimately, they raise the question of whether there are any unproven black boxes in computer math that could impact the accuracy of calculations in physics.
  • #1
Nick Levinson
I'm an outsider (and I don't know how to make this much more concise with the same content), but physics is the only field I know of in which the same very thoughtful and knowledgeable professor takes mutually exclusive positions simultaneously, without allowing even a fine distinction; and multiple peers do this. I've known one or more to say it's because the mathematics requires this result. We will inevitably have to work out the contradictions, and I wonder if the math as executed is one weak link, especially since the math is now so advanced and so computer-intensive as to be probably beyond checking on paper. That capability, whether exercised or not, is needed to get around the problem of computer program source code being closed. I understand math specialization has reached the point where, when someone proposed a solution to an unsolved problem, it took four years for anyone else to evaluate and agree on it. I doubt that anyone today can do all of the raw math completely on paper in a way that any other mathematician of only moderate skill can read and follow, i.e., writing it less elegantly. They'd know how, but there'd be too much math to devote the time to writing it out, or to typing it into a set of plain text files. The scientist who could do that probably has other research to do, for which we are waiting. So, basically, there's no one.

The pure math is not my concern, but computer-ready math is often different because of computer limitations. For example, a formula inevitably has a length limit inside a computer that it need not have outside one; if a particular formula hits that limit, it must be split into multiple formulae, and those must then be recombined without introducing a mathematical error. Another example comes from the interaction of a math program with the rest of its computing environment: errors might be introduced by the environment and have to be discovered, a fresh risk whenever hardware or software gets a new version, and there are usually many associated software and hardware components with separate versions and probably separate authors. Even if a high-end math program contains all of its mathematical processing without handing any off to Microsoft Windows, thereby eliminating one set of inspection problems, other interactions remain. And under the IEEE 754 standard, double precision carries only about 15-17 significant decimal digits, plenty for most money cases but likely not for black-hole studies.
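
The precision point can be checked directly in any environment that uses IEEE 754 double precision. The following minimal Python sketch (my own illustration, not tied to any particular math package) shows both the digit limit and how rounding error accumulates over many chained operations:

```python
import sys

# IEEE 754 double precision has a 53-bit significand, which works out
# to roughly 15-17 significant decimal digits.
print(sys.float_info.dig)  # digits guaranteed to survive a round-trip: 15

# Rounding error appears immediately beyond that precision:
a = 0.1 + 0.2
print(a == 0.3)       # False: 0.1 and 0.2 are not exactly representable
print(f"{a:.20f}")    # shows the residue past the 16th digit

# And error accumulates when many operations are chained,
# as in a long numerical computation:
total = sum(0.1 for _ in range(10**6))
print(total)          # close to, but not exactly, 100000.0
```

This is why serious numerical codes track error bounds rather than assuming exact arithmetic; whether 15-17 digits suffices depends on how much cancellation and accumulation a given computation involves.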

I gather physicists and other scientists depend almost entirely on computers for major calculations and, critically, that the main programs for these purposes are proprietary, with closed source code, and thus not completely transparent. The programs operate as a collection of black boxes: you can see your input and get the output, but exactly how input is transformed into output is hidden. You can check that individual functions produce correct outputs for specific example inputs, but I'm not sure you can test all of the functions with the methods required for proofs, i.e., methods in which examples are not probative enough and abstraction is required, or that you can test holistically rather than just reductively. That matters if an error that stays hidden despite the examples tested gets compounded with another error as multiple black boxes are applied to one problem.
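
The worry about example-based checks can be made concrete with a toy black box (entirely hypothetical, invented here for illustration): a routine that passes every spot check it is given yet is wrong on inputs nobody happened to sample.

```python
# A toy "black box" that claims to compute x squared but carries a
# hidden, hypothetical bug affecting only large inputs.
def black_box_square(x: int) -> int:
    if x > 1000:
        return x * x + 1  # the hidden error
    return x * x

# Example-based spot checks all pass:
for x in (0, 1, 2, 10, 31, 500):
    assert black_box_square(x) == x * x

# But a broader sweep over the input space exposes the bug:
failures = [x for x in range(2000) if black_box_square(x) != x * x]
print(len(failures), failures[:3])  # 999 failing inputs, starting at 1001
```

With closed source, a user can only ever run the spot checks; with open source, the `if x > 1000` branch is visible for inspection, and exhaustive or property-based testing can be aimed at it directly.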

Doubtless the top computer-program firms have highly qualified mathematicians test and correct their work, but doubtless also that's limited by trade-secrecy and budget, a model that falls far short of the peer review models used for publication of original research in refereed journals and by the effect of publication after peer review, when anyone can read the journals and report a problem they find, even if the reporter lacks qualifications and is unpaid. With proprietary closed-source software and especially firmware, even a customer who paid for it is usually unable to examine it, because they usually don't know how to parse the code (especially code wired into a hardware chip) and perhaps (like with Windows) are legally barred from reverse-engineering, decompiling, or disassembling. Some software licenses even prohibit benchmarking, although I don't know if that applies to software in this context.

With open source software (such as Linux or FreeBSD), the source code is available to anyone and can be compiled or interpreted with your own compiler or interpreter on your own computer into the object code forming an executable program, so you know that the source code is the intended source code for a given program. Even the recent public debate over privacy due to revelations about the work of the National Security Agency (NSA) did not lead to much discussion that I could find on the security of SELinux, an NSA security enhancement package offered for Linux for anyone who wants to turn it on. Because SELinux is offered within the open-source paradigm, confidence is apparently maintained, even though SELinux alone reportedly fills over 100,000 lines of code. Writing good open-source software for this kind of math is a huge project, and were I allocating resources I would skimp on other features, such as by writing it for only one common desktop platform and leaving most user-interface design to add-ons by other people.

The key question: Could there be one or more black boxes in computer math that have not been fully proven with all relevant versions in context in public?

(I already read the threads at https://www.physicsforums.com/threads/mathematic-vs-maple.181000/ and https://www.physicsforums.com/threads/the-best-software-for-math-and-physics.685877/ and one of them acknowledges periodic bugs. My question is more abstract.)
 
  • #2
Nick Levinson said:
physics is the only field I know of in which the same very thoughtful and knowledgeable professor takes mutually exclusive positions simultaneously and without allowing even a fine distinction; and multiple peers do this.
This is provocative and false, and additionally it is irrelevant to your main point. Why would you say it?
 
  • #3
If I'm wrong, I'm happy to be corrected. But being provocative is not itself wrong, and the relevance is that (unless I'm wrong that contradictions exist) internal contradictions need to be ironed out, and I thought of two areas that could give rise to them, this thread being about one of them. Thus, the thread is not about something that might be merely interesting but unnecessary or unimportant.

I'm a layperson, but over the years I've read various books by academic physicists (I prefer their authority over that of, say, popular science writers) and listened to them in radio interviews, although books are more reliable because I can pace my reading, unlike radio, where missing a nuance is easy. Example of a problem: an electron is said to occupy two, or even infinitely many, places at once (along the lines of the cat that is dead and alive); not just that it can go to either of two or infinite places, but that it simultaneously does occupy both or all. That would make the law of thermodynamics specifying that the total of mass and energy is constant superfluous, because the total would be infinite and the law would not serve a useful purpose warranting being taught and repeated in nonobscure sources. (It would be enough to say that the total is infinite and convertibility is available; we don't say of the numbers, which are infinite, that their total quantity is constant, that too being superfluous to their being infinite.)

My issue is not with one group of professors saying one thing and another saying another. That happens in, probably, most major fields. Disagreements can be over space (multiple groups of speakers) and/or over time (as minds change) and can also occur because speakers are from different fields (especially fields that are only somewhat different) with different underlying givens or different methodologies. I don't even have a problem with the same person taking contradictory views because I assume there is effort toward reconciliation where opportunity is found; it may be that at the moment that opportunity is not apparent. An analogy might be to the opening of a legal appeal at which the two sides' attorneys, both highly qualified and carefully prepared, state thoughtful but contradictory theories of the case at bar; the judge or judges will try to sort it out.

I've had to suspend disbelief a few too many times when authors say these multiple statements are true and it appears they can't all be. Because the authors are usually very intelligent specialists in the field, I assume they thought about more complications than they wrote about. But that still leaves something to be sorted out, and what would be the next book doesn't seem to help. So I'm inquiring about something I think might contribute to the problem. I drew on computer science because I know something of it and there are various instances of someone in one specialty pointing out a flaw in another specialty and gaining acceptance for the finding, although it doesn't happen often and perhaps I'm altogether wrong. So I asked.
 
  • #4
Nick Levinson said:
(unless I'm wrong that contradictions exist)
Yes, you are wrong. That is why I said provocative and false.

Your specific examples are wrong, but delving into the physics seems off topic for your main point. Your main point about open source code is reasonable, but you are surrounding it with so much provocative and false nonsense that it really destroys your persuasiveness and credibility.
 
Last edited:
  • Like
Likes QuantumQuest
  • #5
Take all the popular science writing with a grain of salt. Even experts like Stephen Hawking are not very precise in their popular science writing. You really shouldn't judge your idea of science by those standards. If you want to know science, then you'll need to study it rigorously. If you don't want to do that for whatever reason (and trust me, I find that very understandable!), then you shouldn't claim to know anything about it, let alone claim to have found contradictions.
 
  • Like
Likes CalcNerd, QuantumQuest and jim mcnamara
  • #6
@micromass:

Understandable, but that's an extreme position: be a full-time lifelong expert or a total moron. I do avoid popular science writers, as noted above, even some in Scientific American, but do read authors whose credentials are stated and academic, generally professor in the field, including two editions by Stephen Hawking. While a book by anyone may well have errors and I've seen some, multiple books by multiple authors should not have the same errors contradicting the subjects the authors know best. I have sometimes found errors in books and sometimes authors have agreed; and many times nothing was said. I may be wrong, but pointing to specific errors is much more informative than calling for knowing nothing, which is contrary to what many, and probably most, scientists want in the public.

Thank you for the offer. No, although it's also an interesting field. The math-relevant question in my opening post was not about a pure-math problem but about computer-contextual math.

@Dale:

Provocativeness is not the problem. That's legitimate. That is why scientists often provoke each other. They do it to get at content and I wasn't even trying to provoke, but to make a case for raising the issue that otherwise would be relatively trivial (there's rarely much point to asking specialists if they're careful about their math or their IT if there's no special reason behind asking).

Being wrong is definitely a problem, but so far you've given me only the conclusory statement (four times) that I'm wrong. If I am, then it appears that various books by academic physicists written for lay audiences are wrong, since that's where I've been getting most of my information on point. (I wish I could cite them now, but I usually didn't keep bibliographic information and page numbers.) Conclusory statements are widespread in life but, precisely because their bases are omitted and, when revealed, sometimes wrong and sometimes nonexistent, they tend to lack credibility. On the other hand, I'm happy to have any of my points refuted, even sketchily. That would be legitimate and far more useful than simply repeating the bare assertion that I'm wrong without saying how or why.

It's also possible that many of the authors knew how to keep everything internally consistent but just focused their books on the surprises, and it's fairly plausible that one or another scientist would do that, but that's less likely with multiple authors of multiple sources from multiple publishers. So it appears that the contradictions are real.

The issue is on point. There's no reason to look at the computer systems doing the math if there's no math error or if the body of knowledge of physics is internally consistent so there's no reason to question inconsistencies and therefore the math. In that case, the main reasons for going to open source software are that it's free to acquire and freely modifiable by anyone; presumably, the reliability of math would not be better. Those other benefits are helpful for many kinds of usage but not why I got interested in the present physics question. The apparent contradictions are exactly what got me interested.

Indeed, using software math routines that a researcher modifies could make refereed publication harder, because an author would not only have to list software versions but also detail subsequent modifications and how they were vetted mathematically. That could add many pages to a research paper, pages only marginally topical for the journal, which would decrease the chance of publication and, because of limited topical relevance, might be beyond the review abilities of a given journal's peers. That would argue against open source except for its low acquisition cost, something usually better left to budget managers than to outsiders.

You say my "examples" are wrong. I gave only one for physics, because it's the one that recently came to mind. Did you mean by the plural to say I'm wrong about how computers do math? Which computer method or methods might you be saying I'm wrong about? Or is one the infinite numbers case or the legal analogy? Analogies are often problematic, but I tried to pick one that isn't. Let me know which particular examples you find are wrong and let's figure things out from there.

Even if I'm totally wrong about everything but you see a good point about software, feel free to address the software issue. Don't worry about my personality. Let's concentrate on substance.

---

By the way, the topic title was auto-truncated, but not very critically. And I'm online now and then, discontinuously. Thanks for raising the questions.
 
  • #7
Nick Levinson said:
@micromass:

Understandable, but that's an extreme position: be a full-time lifelong expert or a total moron. I do avoid popular science writers, as noted above, even some in Scientific American, but do read authors whose credentials are stated and academic, generally professor in the field, including two editions by Stephen Hawking. While a book by anyone may well have errors and I've seen some, multiple books by multiple authors should not have the same errors contradicting the subjects the authors know best. I have sometimes found errors in books and sometimes authors have agreed; and many times nothing was said. I may be wrong, but pointing to specific errors is much more informative than calling for knowing nothing, which is contrary to what many, and probably most, scientists want in the public.
It is not an extreme position; it is simply how things are. You can't get knowledge from popsci books, you just can't. At best you can get a bit of a feeling for what it is about, and you can enjoy the read. Anything technical or scientific you get from popsci books is likely to be nonsense, since they don't tell you everything. The smartest thing to do (if you don't want to study it formally) is to acknowledge you're a total moron when it comes to science. That's called knowing your limits.

Nick Levinson said:
If I am, then it appears that various books by academic physicists written for lay audiences are wrong, since that's where I've been getting most of my information on point.

Yes, they are wrong. They are imprecise and wrong. It's fun to read such books, but you need to realize that the information in there is most of the time incorrect.
 
  • #8
Nick Levinson said:
The key question: Could there be one or more black boxes in computer math that have not been fully proven with all relevant versions in context in public?

For the real science part, I have nothing to add as Dale and micromass have covered everything I would say.

I don't want to be offensive in any way, but it seems you've adopted the view that some people suddenly jumped off science in order to run a kind of conspiracy, i.e., scientists not describing the truth to the public in a simple, layman's way although they always could. I'm obviously talking about popular science. But it can't be like this, and there are many ways to justify that. First and foremost, the role of popsci is not at all to make someone a scientist. It is intended to give a qualitative outline of something, and in many cases it ends up being merely descriptive. From there, there are just two ways: go the hard way of studying formal texts, doing the math, and/or taking credible formal online classes, with all the material and effort that entails, or go the "easy" way of getting some impression of something and then filling in the gaps the way you think best. It is more than obvious that in the second way the gaps are filled, by and large, with other qualitative arguments, and essentially this is no different from trying to build a good numerical approximation while not caring how many digits you are off at each stage. How good would that approximation be? Even reading popsci books, you follow this same path, as long as you don't want to spend your time on a formal quantitative analysis. Second, if there were such a simple popular way to describe concepts that are very involved, that would mean millions of scientists over so many years were just wasting their time, which can't even statistically hold and is fully at odds with reality in general.

What popsci can serve well is to make some things more known to the general public and whet someone's appetite for learning a subject further. Beyond this, your learning is directly dependent on the effort you put in.

For software, as I am in this field, I can tell you that "black boxes" are an inevitable thing in proprietary software, but one must see the other side in a fair and unbiased manner too. A company that makes a proprietary piece of software is not making it out of thin air. It invests a lot of money in many things, not least paying top-notch programmers and developers of all kinds. So what would the revenue be if the black boxes were not black? Everyone with some knowledge in the related fields would copy and modify the whole thing and sell it as their own product. Is that reasonable, even in an elementary way? On the other hand, most of these companies give support to their customers and users regarding the usability of their product, and as far as I can tell, in most cases they do a good job. On the opposite side, open-source code and software have gained a lot of acceptance, adoption, fans, and users worldwide. From my 12 years of experience as an active member of open-source projects, I can tell you that there is no silver bullet in software, as in any field or endeavor of human life. The code is free (regarding cost) to use and free to modify, but there are many risks involved that may not all be apparent to someone outside this field. Does this mean that open-source software is more bad than good? No way. But the average user needs some knowledge and caution in what exactly he or she does.

I talked about software in general, but this covers math software as well. Which software a scientist chooses depends on many factors. There is very well written software on both sides.
 
  • Like
Likes Dale
  • #9
Popsci is like the CSI series. They're both fun. They both make you think you're seeing the real thing. They're both light-years away from the real thing.
 
  • Like
Likes QuantumQuest
  • #10
@micromass:

If you think that Stephen Hawking, Brian Greene, and Richard Feynman (I read at least one book by each, plus some others) are usually wrong even in their books written for the general public, I think you'll find you're in a small minority among people who know physics well, and you might want to start a campaign to get them to write far more accurately, comparable to the recent campaign by a Harvard professor to reveal the bad judgment of predatory publishers (he does things like submit nonsensical studies that they then publish, such as one purporting to find that eating chocolate causes weight loss, in which the survey data, although real, showed no such thing). If you're not conflating too many subgenres meant for the general public (I think you probably are, but you're possibly implying you're not), you should publish a study of lay books written by leading physicists or mathematicians. Sometimes I read Scientific American, although mainly articles by authors who are specialists in the fields they're writing about, not the popular writers in those pages. If SciAm is usually wrong in most of its content, you should publish a paper randomly selecting and largely debunking most articles from recent years. Have such studies been done and replicated? The key is not whether errors exist but whether they cover most of the content and overlap across sources. If you're right, a professor of physics could almost never have students who met prerequisites, because high school textbooks are generally worse (Feynman wrote about reviewing texts for Los Angeles schools, although I don't remember what year level) and students who learned from most high school texts would have to be refused admission.

My standards when selecting books are higher, and I don't watch TV or go to the movies, so I don't see CSI and I stay away from Hollywood. As noted above, I'm not talking about books written mainly by science writers (is that what you mean by popsci?) but books whose primary authors are scientists in the same field as the subject. Doubtless every one of those books and articles has an error here or there, but the errors are not likely to be so numerous as to repeat from book to book. Nonetheless, a few high-end books have many errors. I read one by Alfred North Whitehead, and I later read that he didn't proofread what the publisher sent him (I don't know which titles) and that posthumously many corrections were made by his colleagues; but the evidence is that most highly qualified authors do proofread or have their own people, perhaps their students, proofread for them.

If you personally limit your reading to refereed journals, conference papers, and the like, that's great, but most of us, like most physics students, need books that cover well the consensus on science on which the journals build, and society needs most of us to have a working knowledge in many fields, including physics and math. If I were to believe that I know nothing about a subject unless I know the content of almost all of the peer-reviewed literature, I wouldn't be able to take a subway or buy groceries without dying in a month or so. Unless you're singling me out and thus limiting the damage of your approach to knowledge, that approach is not workable and is dangerously counterproductive. As to knowing my limits, this thread alone acknowledged that several times and I do so many times, but I've gotten people in several fields annoyed when they discovered I was right all along. I wouldn't have been useful if I feigned being a moron and, generally, I don't think anyone else should do that. Sometimes I don't apply for a job because I don't have the requisite knowledge base, but to take a nearly solipsist view of knowledge is going too far.

No one has identified a specific statement or "example" of mine that was wrong. One might be. But claiming that all are wrong would be an extraordinary claim needing extraordinary proof.

I'm still curious about whether any black boxes (supra) are inadequately validated in public.

@QuantumQuest:

Seeing a contradiction and saying it looks like an error is not finding a conspiracy to hide a truth. When Einstein made his "biggest blunder", I didn't take that (retroactively) as a deliberate concealment by him. That's a straw argument; it's not from me in this thread and I don't subscribe to such a thesis.

That numbers are often left out of books for the general public is true (Hawking said as much), but usually when you ask several people to explain basic concepts of a subject they don't all omit the same ones.

Yes, proprietary software has a point to it, which you touch on well. But usability is not the issue; math accuracy is. What you say about open source overlaps what I already said in my last post before yours. If people outside the computer firms can't examine the black boxes and if scientists make errors in their calculations, that is an argument for the open-source model. When Microsoft Excel produced results that were wrong several decimal places in, it may not have mattered for the real-world tasks given to Excel; but when someone is doing research on the universe, for which cross-checks are probably limited to multiple computer systems and some common sense, where little is empirical and white boxes are unavailable, a question is legitimate. That white boxes are bad for business is true, but that's not an adequate answer to the problem I presented. It has not been refuted.
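
The Excel point generalizes: in floating point, two algebraically identical formulas can disagree many digits in. A classic numerical-analysis illustration (a generic example, not drawn from any particular commercial product) is catastrophic cancellation in the quadratic formula:

```python
import math

# Solve x**2 + b*x + c = 0 for b = -1e8, c = 1.
# The small root is approximately 1e-8.
b, c = -1e8, 1.0

# Textbook formula: subtracting two nearly equal numbers
# (here -b and sqrt(b*b - 4*c)) destroys significant digits.
naive = (-b - math.sqrt(b * b - 4 * c)) / 2

# Algebraically identical rearrangement that avoids the cancellation:
stable = (2 * c) / (-b + math.sqrt(b * b - 4 * c))

print(naive)   # visibly wrong several digits in
print(stable)  # accurate to full double precision
```

Both lines implement the same algebra; only their floating-point behavior differs. This is exactly the kind of discrepancy that stays invisible when the solver is a black box.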

Note that I asked which "examples" of mine were wrong and that has not been answered. Broad-brush critiques that overreach have a credibility problem that undermines the whole critique unless someone makes a distinction, and that has not been forthcoming yet. Hopefully, that will change.
 
  • #11
Nick Levinson said:
Broad-brush critiques that overreach have a credibility problem that undermines the whole critique unless someone makes a distinction, and that has not been forthcoming yet. Hopefully, that will change.

Do you see such a critique in my post? On the other hand, I would say that you don't talk specifically either, and from what you write I conclude that you already have a specific view or opinion that you're not willing to change even with the best arguments.
 
  • #12
You agreed with the others on the "real science", so you incorporated a broad-brush critique by reference. The main dispute is probably about what I said were two contradictory views held by physicists, and the critique did not say that the first (as I stated and sequenced it) was wrong or that the second was wrong or that both were wrong, but said only that the mutual exclusivity was "false", that the existence of contradictions was "false", that "your [my] specific examples are wrong", and that I'm supplying "false nonsense". Then comes a claim, with which you agree ("Dale and micromass have covered everything I would say"), that, e.g., I should not know that electromagnetism and gravity exist and I should be asking people whether apples fall and whether time is a dimension and then I should ask them again (i.e., that being morons is what almost all of us need to be). Those are broad-brush attacks. On the other hand, I did "talk specifically" in my first reply post (post #3, supra), in the second paragraph. The replies to me could hardly have been the "best arguments", even granted this is a forum requiring brevity. I'm certainly willing to change my mind but simply getting "false" and "false" and so on is not refutation. (Italicizations omitted.)
 
  • #13
Nick Levinson said:
Provocativeness is not the problem.
It is a problem if you intend to have a productive conversation or if you intend to persuade others.

For example, I am a heavy user of both open source scientific software like R and also commercial scientific software like Mathematica, and I have used both extensively in peer-reviewed publications in the medical imaging field. I probably would have something relevant to say, but instead I am focused on the extraneous, provocative, and wrong statements you have made.

Nick Levinson said:
If I am, then it appears that various books by academic physicists written for lay audiences are wrong, since that's where I've been getting most of my information on point.
Yes, they are wrong. We spend an inordinate amount of time on PF correcting misconceptions that arise from their books. Brian Greene is particularly notorious for this. His peer-reviewed work is fine, but his pop-science work tends to teach people incorrect concepts very frequently. That is why the standard on this forum is the professional scientific literature itself, and not just the pop-science literature written by professional scientists.

Nick Levinson said:
The issue is on point.
By off-topic I mean that it doesn't fit on this sub-forum. You should ask your questions about quantum mechanics in the QM Forum and your questions about thermodynamics in the Classical Physics Forum. If you are reading pop-sci books then you probably also have misunderstandings of relativity which you should ask in the Relativity Forum. This forum is for discussing Math Software, so the QM and Relativity experts will not see such questions here. I am not saying that you shouldn't ask those questions, just that you shouldn't do it here because you won't get the right people to see and answer. This section is dedicated to a different purpose.

Nick Levinson said:
You say my "examples" are wrong. I gave only one for physics, because it's the one that recently came to mind. Did you mean by the plural to say I'm wrong about how computers do math?
By the plural I was referring to your comment about an electron's position which was not a contradiction and also your comment relating that to thermodynamics which was simply wrong.

Please decide if you want to discuss the software topic, in which case I think we need a new more focused thread, or if you want to discuss the physics topics, in which case you need to start threads in the appropriate physics section. This thread is not productive and is closed.
 

1. What is "all-open-source software" and why is it important for testing in math?

"All-open-source software" refers to software that is freely available and can be modified and redistributed by anyone. In the field of math, this type of software is crucial for testing because it allows for transparency and reproducibility of results. It also promotes collaboration and innovation among researchers.

2. What are the benefits of using all-open-source software for testing in math?

Some benefits of using all-open-source software for testing in math include cost-effectiveness, access to a wide range of tools and resources, and the ability to customize and improve the software based on individual needs and preferences. It also promotes open science and encourages the sharing of knowledge and findings.

3. Are there any potential drawbacks to using all-open-source software for testing in math?

One potential drawback is the learning curve associated with using new software. Some researchers may be more comfortable with traditional, proprietary software and may need to invest time in learning how to use all-open-source software effectively. Additionally, there may be a lack of technical support for these types of software compared to commercial options.

4. How can all-open-source software be integrated into the testing process in math?

All-open-source software can be integrated into the testing process in math by using it for data analysis, simulation, and visualization. It can also be used for programming and coding, as well as for creating and sharing mathematical models and algorithms. Many open-source packages also have features specifically designed for testing and analysis in math.
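As a concrete sketch of that integration: a common testing pattern is to check a hand-derived formula against an independent numerical method, so that an error in either one shows up as disagreement. A minimal Python example using only the standard library (the function, step size, and tolerance are illustrative choices, not from any particular package):

```python
import math

def f(x):
    return math.sin(x) * math.exp(-x)

def f_prime_analytic(x):
    # Hand-derived: d/dx [sin(x) e^-x] = (cos(x) - sin(x)) e^-x
    return (math.cos(x) - math.sin(x)) * math.exp(-x)

def f_prime_numeric(x, h=1e-6):
    # Central difference: an independent route to the same quantity
    return (f(x + h) - f(x - h)) / (2 * h)

# Two independent derivations of the same quantity should agree closely.
for x in [0.0, 0.5, 1.0, 2.0]:
    assert abs(f_prime_analytic(x) - f_prime_numeric(x)) < 1e-8
print("analytic and numeric derivatives agree")
```

Because both routes are fully inspectable, a disagreement can be traced to its cause rather than attributed to an opaque library.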

5. Are there any notable examples of all-open-source software being used for testing in math?

Yes, there are several notable examples, including R, Python, and Octave for data analysis, SageMath for mathematical modeling, and LaTeX for typesetting mathematical equations. These tools are widely used in the scientific community and have large, active user bases, making them reliable options for testing in math.
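As a small, self-contained illustration of the kind of cross-check these tools enable: deriving π from Machin's formula with Python's standard-library `decimal` module and comparing it against the library constant `math.pi`, so that each value independently validates the other. The digit counts and tolerance here are arbitrary choices for the sketch:

```python
from decimal import Decimal, getcontext
import math

def pi_machin(digits=30):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10  # extra guard digits

    def arctan_inv(n):
        # arctan(1/n) = sum over k of (-1)^k / ((2k+1) * n^(2k+1))
        total = Decimal(0)
        power = Decimal(n)  # holds n^(2k+1)
        n2 = n * n
        k = 0
        while True:
            term = Decimal(1) / (power * (2 * k + 1))
            if term < Decimal(10) ** -(digits + 5):
                break
            total += term if k % 2 == 0 else -term
            power *= n2
            k += 1
        return total

    return 16 * arctan_inv(5) - 4 * arctan_inv(239)

# Cross-check a library constant against an independent open derivation.
print(abs(float(pi_machin()) - math.pi) < 1e-15)  # True
```

The same check could be repeated in Octave or SageMath; agreement across independent open implementations is exactly the kind of white-box validation discussed in this thread.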
