What are the typical uncertainties of data in astrophysics?


Discussion Overview

The discussion revolves around the uncertainties associated with data in astrophysics, particularly in the context of exoplanet detection using the radial velocity method. Participants explore the acceptable levels of uncertainty in astronomical data and seek resources to support their investigations.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant notes encountering mass data with errors up to 300% and questions the typical uncertainties permitted in astronomical research.
  • Another participant suggests that statistical significance often requires a 95% confidence level, but acknowledges uncertainty about the specifics.
  • Some participants argue that acceptable uncertainty can vary widely, with some measurements being accurate to a few percent while others may only be reliable within an order of magnitude.
  • Concerns are raised about the reliability of data from catalogs, with one participant mentioning a rejected article due to perceived biases in the data.
  • There is discussion about the criteria for acceptable uncertainty, with one participant proposing a threshold of 20% but receiving pushback on the arbitrary nature of that choice.
  • Another participant emphasizes the importance of understanding the distinction between systematic errors and statistical errors, particularly in the context of data bias.
  • It is noted that measurement accuracy is influenced by the method used to acquire data, and that expertise is crucial for managing uncertainties.

Areas of Agreement / Disagreement

Participants express differing views on what constitutes acceptable uncertainty in astronomical data, with no consensus reached on specific thresholds or criteria. The discussion remains unresolved regarding the implications of data uncertainty on research validity.

Contextual Notes

Participants highlight the challenges of measuring uncertainties in astronomy due to the nature of observational methods, which differ from controlled experimental physics. The discussion reflects a range of perspectives on how to approach data reliability and the implications of uncertainty in research.

Fabioonier
Hello, everybody.
I'm conducting an investigation in planetary sciences, specifically in exoplanet detection by the radial velocity method, and I'm stuck because I need to know how to defend the data that I'm using.
I'm using the data from http://exoplanet.eu/catalog/ and I have found some data with errors in the mass of about 300%!
In astronomical research, what are the typical uncertainties (errors) permitted? And what articles or books can I read in order to get arguments for defending my investigation with those data?
Thanks for your help.
 
Don't you need 95% (within 5%) to have statistical significance? I've been out of school for a while, but I seem to remember a 5% chi-squared threshold being the usual goal.
EDIT: I was probably thinking of the two-tailed test. Sorry for the wiki link rather than an actual reference: https://en.m.wikipedia.org/wiki/Statistical_significance
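For reference, here is a minimal Python sketch of that two-tailed idea, using made-up radial-velocity residuals rather than real data; the 5% level here is just the conventional significance threshold, not a statement about data uncertainties.

```python
import numpy as np
from scipy import stats

# Hypothetical example: test whether a set of radial-velocity residuals
# is consistent with zero mean, using a two-tailed t-test at the 5% level.
rng = np.random.default_rng(42)
residuals = rng.normal(loc=2.0, scale=5.0, size=30)  # made-up residuals, m/s

res = stats.ttest_1samp(residuals, popmean=0.0)  # two-tailed by default

alpha = 0.05  # the conventional 5% significance level
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
print("significant at 5%" if res.pvalue < alpha else "not significant at 5%")
```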
 
Fabioonier said:
In astronomical research, what are the typical uncertainties (errors) permitted?

It depends. Some quantities are determined to within a few percent or less. Other times you are happy to get within an order of magnitude (meaning it might be 10 times smaller than what you measured or it might be 10 times larger). You need to study the data set you are using to understand what the uncertainties are. In measuring exoplanet masses by the radial velocity method, you are looking at very tiny changes in the motion of the parent star caused by the planet, which is much less massive. So it doesn't surprise me that the errors are large. You might say, "What good is a measurement with a 300% error?", but it gives you an idea whether the planet is like the Earth or like Jupiter.
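To make that last point concrete, here is a minimal Python sketch with a hypothetical catalogue value (the numbers are made up, not taken from exoplanet.eu): even a 300% relative error on a small mass can exclude a Jupiter-class planet.

```python
# Hypothetical illustration: a planet with a catalogue mass of 0.02 M_Jup
# quoted with a 300% relative uncertainty.
M_JUP_IN_M_EARTH = 317.8  # Jupiter's mass in Earth masses (approximate)

m_jup = 0.02     # hypothetical catalogue value, in Jupiter masses
rel_err = 3.0    # 300% relative uncertainty

lo = max(m_jup * (1.0 - rel_err), 0.0)  # lower bound, clipped at zero
hi = m_jup * (1.0 + rel_err)

print(f"mass range: {lo:.3f} - {hi:.3f} M_Jup "
      f"({lo * M_JUP_IN_M_EARTH:.1f} - {hi * M_JUP_IN_M_EARTH:.1f} M_Earth)")
# Even with this huge error bar, the upper bound (~0.08 M_Jup, ~25 M_Earth)
# rules out a Jupiter-mass planet, which is often the question being asked.
```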
 
p1l0t said:
Don't you need 95% (within 5%) to have statistical significance?
Hi p1l0t. I guess what you are talking about is something that you define in the goodness-of-fit test, but it is not related to the uncertainties in the data. What I need to know is what error percentage is acceptable in data for astronomy investigations.
Thanks for your help.
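A minimal Python sketch of that distinction, with made-up radial-velocity numbers: the per-point measurement uncertainties enter the chi-squared statistic directly, while the 95% confidence level is a separate convention applied when interpreting it.

```python
import numpy as np
from scipy import stats

# Made-up radial-velocity data: observations, model predictions, and the
# per-point measurement uncertainties.
observed = np.array([10.2, -3.1, 7.8, -9.5, 4.4])    # m/s
model    = np.array([9.8, -2.5, 8.1, -10.0, 5.0])    # m/s
sigma    = np.array([1.0, 1.2, 0.9, 1.1, 1.0])       # m/s, data uncertainties

# The data uncertainties enter the fit statistic directly...
chi2 = np.sum(((observed - model) / sigma) ** 2)
dof = len(observed) - 1  # illustrative; depends on the number of fitted parameters

# ...whereas the 95% confidence level is a separate convention used when
# converting that statistic into a probability.
p_value = stats.chi2.sf(chi2, dof)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```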
 
phyzguy said:
It depends. Some quantities are determined to within a few percent or less. Other times you are happy to get within an order of magnitude.
Hi, phyzguy.
I understand your point. I wrote an article and it was rejected because the referee said the data in the catalogue may be very biased. So, what uncertainty percentage could I accept in the data that I use? I'm thinking of using only data with uncertainties of up to 20%. Is that a good criterion, or in astronomy is the error percentage in the data not so important because of the observation methods and instruments? What I mean is: in astronomy, can we use all of the available data (no matter the uncertainty) because they are all we can get?
This is an important question because in experimental physics I can control the experiments and repeat measurements in order to reduce the uncertainties in the data. In astronomy you cannot do that.
Thanks for your help.
 
Fabioonier said:
I'm thinking of using only data with uncertainties of up to 20%. Is that a good criterion?

There is no hard and fast number. Why choose 20% and not 1% or 100%? In some cases 20% is excellent, in other cases it is way too loose. It all depends on what you are trying to do.
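One practical way to act on this, sketched below in Python with a made-up table (the column names and values are hypothetical, not the actual exoplanet.eu fields), is to repeat the analysis at several uncertainty cuts and report how sensitive the results are to the choice, rather than defending a single number like 20%.

```python
import numpy as np

# Hypothetical catalogue columns: planet masses and their quoted uncertainties
# (values and names are made up, not the real exoplanet.eu fields).
mass     = np.array([0.02, 1.1, 0.5, 3.2, 0.08, 2.4])   # M_Jup
mass_err = np.array([0.06, 0.1, 0.4, 0.3, 0.02, 2.0])   # M_Jup

frac_err = mass_err / mass

# Rather than committing to a single cut, show how the sample changes with it.
for cut in (0.1, 0.2, 0.5, 1.0):
    keep = frac_err <= cut
    print(f"relative error <= {cut:.0%}: {keep.sum()} of {len(mass)} planets retained")
```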
 
It depends on the method used to acquire the data. Measurement accuracy is a crucial variable in astrometric data, and the knowledge and discipline necessary to achieve high precision may only exist among very select individuals. People without special expertise in a particular method may not even be aware of factors that introduce uncertainty into the data. The best researchers take great pains to quantify and explain the uncertainty inherent in each measurement and what they did to manage it. Only then can the data be considered reliable. While this is incredibly boring for the average reader [and usually looks like gibberish], it is of vital interest to any credible referee during peer review. Non-experts are typically interested in conclusions, whereas the facts leading up to them are the real story.
 
Fabioonier said:
I wrote an article and it was rejected because the referee said the data in the catalogue may be very biased.

One other point. When the referee says the data may be biased, he/she is probably not referring to the statistical uncertainty in the measurements. You need to understand the difference between systematic errors and statistical errors. The word "bias" usually refers to systematic errors, meaning that the statistical error in the measurement may be small, but the measured value is systematically shifted in one direction, ending up too small or too large.
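A minimal simulated example of that distinction (Python, made-up numbers): the statistical scatter averages down as more measurements are taken, while a shared systematic offset does not.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 10.0   # the quantity being measured (arbitrary units)
stat_sigma = 2.0    # statistical (random) error on each measurement
sys_offset = 1.5    # systematic bias shared by every measurement

for n in (10, 100, 10_000):
    data = true_value + sys_offset + rng.normal(0.0, stat_sigma, size=n)
    stat_err_of_mean = stat_sigma / np.sqrt(n)  # shrinks as n grows
    print(f"n = {n:6d}: mean = {data.mean():6.3f} +/- {stat_err_of_mean:.3f} "
          f"(true value = {true_value})")
# The quoted statistical error bar becomes tiny, yet every estimate stays
# about 1.5 units too high -- that is what "biased" data means here.
```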
 
