What are the typical uncertainties of data in astrophysics?

AI Thread Summary
In astrophysics, the acceptable level of uncertainty in data varies significantly depending on the measurement method and the specific context of the research. For exoplanet detection via the radial velocity method, large errors, such as 300%, can occur due to the tiny changes in stellar motion caused by planets. While some researchers suggest using data with uncertainties up to 20%, this threshold is subjective and should be based on the goals of the study. Understanding the distinction between statistical and systematic errors is crucial, as bias can affect the reliability of measurements even if statistical uncertainty appears low. Ultimately, researchers must carefully assess the data quality and the inherent uncertainties to defend their findings effectively.
Fabioonier
Hello, everybody.
I'm conducting an investigation in planetary sciences, specifically in exoplanet detection by the radial velocity method, and I'm stuck because I need to know how to defend the data that I'm using.
I'm using the data from http://exoplanet.eu/catalog/ and I have found some entries with errors in the mass of about 300%!
In astronomical research, what are the typical uncertainties (errors) permitted? And what articles or books can I read to find arguments for defending my investigation with those data?
Thanks for your help.
 
Fabioonier said:
In astronomical research, what are the typical uncertainties (errors) permitted? [...]

Don't you need something like 95% confidence (within 5%) to have statistical significance? I've been out of school for a while, but I seem to remember a 5% threshold in a chi-squared test being the usual goal.

EDIT: I was probably thinking of the two-tailed test. Sorry for the wiki link and not an actual reference: https://en.m.wikipedia.org/wiki/Statistical_significance
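For what it's worth, here is a minimal sketch of the two-tailed, 5%-level convention being recalled here. The sample values are invented, and only NumPy and SciPy are assumed:

```python
# Minimal sketch of a two-tailed significance test at the conventional 5% level.
# The sample values are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.3, scale=1.0, size=50)  # fake measurements

# Two-tailed one-sample t-test against a null hypothesis of zero mean.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

alpha = 0.05  # the conventional 5% threshold
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("significant at 5%" if p_value < alpha else "not significant at 5%")
```

As the thread goes on to note, though, this concerns the significance of a fit or detection, not the size of the error bars on individual catalogue entries.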
 
Fabioonier said:
I'm using the data from http://exoplanet.eu/catalog/ and I have found some entries with errors in the mass of about 300%! In astronomical research, what are the typical uncertainties (errors) permitted? [...]

It depends. Some quantities are determined to within a few percent or less. Other times you are happy to get within an order of magnitude (meaning it might be 10 times smaller than what you measured or it might be 10 times larger). You need to study the data set you are using to understand what the uncertainties are. In measuring exoplanet masses by the radial velocity method, you are looking at very tiny changes in the motion of the parent star caused by the planet, which is much less massive. So it doesn't surprise me that the errors are large. You might say, "What good is a measurement with a 300% error?", but it gives you an idea whether the planet is like the Earth or like Jupiter.
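To make phyzguy's point concrete, here is a minimal sketch of how a fractional error on the measured radial-velocity semi-amplitude K carries straight through to the planet's minimum mass m sin i. It uses the standard RV mass relation in the limit m_p << M_*; the numerical values are invented (loosely 51 Peg b-like) for illustration:

```python
# Minimal sketch: minimum mass m*sin(i) from the RV semi-amplitude K,
# in the limit m_p << M_*. All numbers are invented for illustration.
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
M_jup = 1.898e27   # kg

def min_mass(K, P, M_star, e=0.0):
    """m_p * sin(i) in kg, for K in m/s, period P in s, stellar mass in kg."""
    return K * np.sqrt(1.0 - e**2) * (P / (2.0 * np.pi * G))**(1.0/3.0) * M_star**(2.0/3.0)

K, dK = 55.0, 15.0        # semi-amplitude and its 1-sigma error, m/s
P = 4.23 * 86400.0        # 4.23-day orbital period, in seconds
m = min_mass(K, P, M_sun)

# The mass is linear in K, so the fractional errors match to first order:
print(f"m sin i = {m / M_jup:.2f} +/- {dK / K * m / M_jup:.2f} M_jup")
print(f"fractional error: {dK / K:.0%}")
```

A 300% mass error in a catalogue entry simply means K was barely distinguishable from zero, yet even that can still separate an Earth from a Jupiter, as phyzguy says.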
 
p1l0t said:
Don't you need something like 95% confidence (within 5%) to have statistical significance? [...] EDIT: I was probably thinking of the two-tailed test.
Hi, p1l0t. I think what you are talking about is a threshold you define in a goodness-of-fit test, but that is not related to the uncertainties in the data themselves. What I need to know is what error percentage is acceptable in data used for astronomical investigations.
Thanks for your help.
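The two ideas do meet in one place, though: per-point measurement uncertainties are exactly what weight a chi-squared goodness-of-fit statistic. A minimal sketch with invented data, assuming only NumPy:

```python
# Minimal sketch: measurement uncertainties weight the chi-squared statistic.
# The data and model below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 20)         # observation times (days)
sigma = np.full_like(t, 3.0)           # per-point 1-sigma errors (m/s)
v_model = 50.0 * np.sin(2.0 * np.pi * t / 4.23)  # model radial velocity
v_obs = v_model + rng.normal(0.0, sigma)         # noisy "observations"

chi2 = np.sum(((v_obs - v_model) / sigma) ** 2)
dof = len(t) - 2                       # e.g. amplitude and period were fitted
print(f"reduced chi-squared = {chi2 / dof:.2f}")  # ~1 if errors are realistic
```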
 
phyzguy said:
It depends. Some quantities are determined to within a few percent or less. Other times you are happy to get within an order of magnitude (meaning it might be 10 times smaller than what you measured or it might be 10 times larger). You need to study the data set you are using to understand what the uncertainties are. In measuring exoplanet masses by the radial velocity method, you are looking at very tiny changes in the motion of the parent star caused by the planet, which is much less massive. So it doesn't surprise me that the errors are large. You might say, "What good is a measurement with a 300% error?", but it gives you an idea whether the planet is like the Earth or like Jupiter.
Hi, phyzguy.
I understand your point. I wrote an article and it was rejected because the referee said the data in the catalogue may be very biased. So, what uncertainty percentage could I accept in the data that I use? I'm thinking of using only data with uncertainties up to 20%. Is that a good criterion, or is the error percentage in the data less important in astronomy because of the observation methods and instruments? What I mean is: in astronomy, can we use all of the available data, no matter the uncertainty, because they are all we can get?
This is an important question, because in experimental physics I can control the experiments and repeat measurements in order to reduce the uncertainties in the data. In astronomy you cannot do that.
Thanks for your help.
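If a cut like that is adopted, the mechanics are straightforward; the harder part is justifying the threshold to a referee. Here is a minimal sketch of filtering a catalogue export by relative mass uncertainty. The file name and the column names (mass, mass_error_max) are hypothetical; check the actual headers of the exoplanet.eu CSV export:

```python
# Minimal sketch: keep only entries whose relative mass uncertainty is <= 20%.
# File and column names are hypothetical; adapt to the real catalogue export.
import pandas as pd

df = pd.read_csv("exoplanet_catalog.csv")

rel_err = df["mass_error_max"] / df["mass"]
cut = 0.20                               # the 20% threshold under discussion
clean = df[rel_err <= cut].copy()

# Reporting the cut and what it removes lets a referee judge whether the
# selection itself biases the sample.
print(f"kept {len(clean)} of {len(df)} planets at a {cut:.0%} cut")
```

Whatever the threshold, stating it explicitly, and showing how the results change as it is varied, is usually a stronger defence than the number itself.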
 
Fabioonier said:
So, what uncertainty percentage could I accept in the data that I use? I'm thinking of using only data with uncertainties up to 20%. [...]

There is no hard and fast number. Why choose 20% and not 1% or 100%? In some cases 20% is excellent, in other cases it is way too loose. It all depends on what you are trying to do.
 
Depends on the method used to acquire the data. Measurement accuracy is a crucial variable in astrometric data, and the knowledge and discipline necessary to achieve high precision may exist only among a select few individuals. People without special expertise in a particular method may not even be aware of factors that introduce uncertainty into the data. The best researchers take great pains to quantify and explain the uncertainty inherent in each measurement and what they did to manage it. Only then can the data be considered reliable. While this is incredibly boring for the average reader [and usually looks like gibberish], it is of vital interest to any credible referee during peer review. Non-experts are typically interested in conclusions, whereas the facts leading up to them are the real story.
 
Fabioonier said:
Hi, phyzguy.
I understand your point. I wrote an article and it was rejected because the referee said the data in the catalogue may be very biased.

One other point. When the referee says the data may be biased, he/she is probably not referring to the statistical uncertainty in the measurements. You need to understand the difference between systematic errors and statistical errors. The word "bias" usually refers to systematic errors: the statistical error on a measurement may be small, yet the measurement itself is systematically shifted to be too small or too large.
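A quick numerical illustration of that distinction, with invented numbers and assuming only NumPy: many repeated measurements can shrink the statistical error bar to almost nothing while a systematic offset remains untouched.

```python
# Minimal sketch: a small statistical error does not protect against bias.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
true_value = 1.00

# 1000 measurements with small random scatter but a +0.30 systematic offset.
measurements = rng.normal(loc=true_value + 0.30, scale=0.02, size=1000)

mean = measurements.mean()
stat_err = measurements.std(ddof=1) / np.sqrt(len(measurements))

print(f"estimate   = {mean:.3f} +/- {stat_err:.4f} (statistical)")
print(f"true value = {true_value:.3f}")
# The statistical error is ~0.0006, yet the estimate is off by ~0.30:
# averaging cannot remove a bias, which is what "systematic" means.
```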
 