Do theoretical physicists use significant figures?

AI Thread Summary
The discussion centers on the use of significant figures and uncertainty representation in theoretical physics and mathematical physics. It confirms that physicists do consider significant figures, but in practice, they often rely on more sophisticated methods of uncertainty analysis, such as error analysis and statistical techniques, rather than strict adherence to sig fig rules. The conversation highlights that successful physicists value learning concepts even without immediate application, emphasizing the importance of understanding uncertainty in calculations. Participants note that uncertainty is typically expressed as a percentage or absolute value, rather than through significant figures, and that uncertainty analysis can be both an art and a science, influenced by various factors and assumptions. Overall, the thread underscores the complexity of representing uncertainty in scientific work and the evolving nature of these practices in the field.
Mikaelochi
I know the phrase ''theoretical physicist'' may be a general term, do they or ''mathematical physicists'' use significant figures? I know it's kind of a silly question, but it stems from what I learned in chemistry class. Our chemistry teacher talked about the importance of such things and I understood pretty well. But I plan to work in high energy theory in the future. So would I be using this concept frequently in my calculations or is it a useful thing to know but won't really use it? Thanks.
 
First, yes they do. They would look like idiots if they said "I predict the range of this parameter is between 1000001 and 4999999."

Second, the attitude of not wanting to learn something unless there is an immediate and obvious use is not one that successful physicists share. "When are we going to use this?" is very middle school.
 
Vanadium 50 said:
First, yes they do. They would look like idiots if they said "I predict the range of this parameter is between 1000001 and 4999999."

Second, the attitude of not wanting to learn something unless there is an immediate and obvious use is not one that successful physicists share. "When are we going to use this?" is very middle school.
No, I just wanted to know when it is used. For example, Riemannian geometry has its usefulness in general relativity, which seems quite interesting, may I add. But I have never heard of significant figures being at the forefront. Perhaps it's just the notation. Would they typically use the plus-or-minus symbol to represent uncertainty, or use the sig fig rules? Basically, what is the conventional way that theoretical physicists represent uncertainty? That's what I'm trying to ask, if I didn't make it clear.
 
Instead of using these sig fig rules, is there a simpler way to represent the uncertainty? Maybe I could say this calculation has a 0.1% uncertainty or something like that.
 
By the way, it's not that I don't want to learn it; it's just that if I'm going to give an answer, the point is that it's as accurate as possible. I would prefer to have good justification for rounding rather than base it on some assumption.
 
Mikaelochi said:
Instead of using these sig figs rules, is there a simpler way to represent the uncertainty? Maybe I could say this calculation has a 0.1% uncertainty or something like that.
Indeed, in real scientific work, the "sig fig" rules are not used. The lowest level of uncertainty analysis ("error analysis") that is commonly used is described here:

https://en.wikipedia.org/wiki/Propagation_of_uncertainty

You can decide for yourself whether this is "simpler" than the "sig fig" rules. :-p

Even this approach is often not sufficient, because it makes assumptions about the probability distributions involved which are not always met in practice, especially with small data samples.
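To make the linked idea concrete, here is a minimal sketch of first-order uncertainty propagation for a product z = x·y (an editorial illustration with made-up numbers, not code from the thread):

```python
import math

# First-order ("Gaussian") propagation of uncertainty for z = x * y,
# assuming x and y are independent measurements with Gaussian errors.
x, sigma_x = 4.52, 0.02   # measured value and absolute uncertainty (made up)
y, sigma_y = 2.0, 0.2

z = x * y
# For a product of independent quantities, relative uncertainties add in quadrature.
rel_z = math.sqrt((sigma_x / x) ** 2 + (sigma_y / y) ** 2)
sigma_z = abs(z) * rel_z

print(f"z = {z:.2f} +/- {sigma_z:.2f}  ({100 * rel_z:.1f}% relative)")
```

This is also how statements like "this calculation has a 0.1% uncertainty" arise: the relative uncertainty is carried through the formula rather than truncated with sig fig rules.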
 
As someone in theoretical physics (not high energy, but we use some of the same techniques), I can tell you that in my recent projects everything was done analytically. So it depends on the area. Some people do simulations or compare their work to experiments, so it also depends on what you are trying to do.
 
Often when you see a range for an estimated value, it is because some statistical analysis produced the answer. The results often come from a tremendous number of random events that are used to narrow down the estimate as more data come in. So the range of values comes from probability and statistics. Speaking of it in terms of "significant figures" is somewhat misleading. You should look at statistics and confidence intervals if you are not already familiar with them.
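As a rough illustration of how a confidence interval narrows as more random events are accumulated (a sketch assuming Gaussian data, not code from the poster):

```python
import random
import statistics

random.seed(0)

def mean_ci95(samples):
    """Sample mean and half-width of a normal-approximation 95% confidence interval."""
    m = statistics.mean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5  # standard error of the mean
    return m, 1.96 * sem

for n in (100, 10_000, 1_000_000):
    data = [random.gauss(5.0, 2.0) for _ in range(n)]
    m, half_width = mean_ci95(data)
    print(f"n = {n:>9}: mean = {m:.4f} +/- {half_width:.4f}")
```

The interval shrinks roughly like 1/sqrt(n), which is why more data narrows the estimate.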
 
jtbell said:
Indeed, in real scientific work, the "sig fig" rules are not used. The lowest level of uncertainty analysis ("error analysis") that is commonly used is described here:

https://en.wikipedia.org/wiki/Propagation_of_uncertainty

You can decide for yourself whether this is "simpler" than the "sig fig" rules. :-p

Even this approach is often not sufficient, because it makes assumptions about the probability distributions involved which are not always met in practice, especially with small data samples.

I was never a fan of significant figures, preferring to include an estimate of the uncertainty either as a percentage (relative) or as an absolute value, along with the assumptions or methods used to generate that estimate.

Colleagues and I often weigh a number of factors when choosing which kind of uncertainty estimate to include in published papers. We prefer a method that is widely known, for ease of communication, yet one that is close to and representative of the likely possibilities, including lesser-known but often more appropriate techniques, as well as our consensus "gut feeling" about what the uncertainties really are.

In most fields, uncertainty analysis is as much an art as a science, since there are so many ways to estimate uncertainties based on different views of the data and assumptions. I am of a mind to believe that if one computes an uncertainty five different ways, the true uncertainty is likely bounded by the different approaches, but I am never confident ascribing a confidence level to that. In spite of the popularity of p-values, I think it is silly to take them too seriously; they are just one possible metric among many.
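As a toy version of the "compute it several ways" habit described above (an editorial sketch with invented numbers, not the poster's method), one can compare first-order propagation with a Monte Carlo estimate of the same quantity and see whether they roughly agree:

```python
import math
import random

random.seed(1)

# Quantity of interest: f(x, y) = x / y, with assumed measurements.
x, sigma_x = 10.0, 0.3
y, sigma_y = 2.5, 0.1

# Method 1: first-order (linearized) propagation.
f = x / y
rel = math.sqrt((sigma_x / x) ** 2 + (sigma_y / y) ** 2)
print(f"propagation: {f:.3f} +/- {f * rel:.3f}")

# Method 2: Monte Carlo, assuming independent Gaussian errors on x and y.
draws = [random.gauss(x, sigma_x) / random.gauss(y, sigma_y) for _ in range(100_000)]
mu = sum(draws) / len(draws)
sd = math.sqrt(sum((d - mu) ** 2 for d in draws) / (len(draws) - 1))
print(f"Monte Carlo: {mu:.3f} +/- {sd:.3f}")
```

When the two disagree badly, that is usually a sign that the linearization or the distributional assumptions are breaking down.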
 
Thank you very much for the responses. Very helpful!
 
