# What are the typical uncertainties of data in Astrophysics?

1. May 10, 2018

### Fabioonier

Hello, everybody.
I'm conducting an investigation in planetary sciences, specifically in exoplanet detection by the radial velocity method, and I'm stuck because I need to know how to defend the data that I'm using.
I'm using the data from http://exoplanet.eu/catalog/ and I have found some data with errors in the mass of about 300%!
In astronomical research, what are the typical uncertainties (errors) permitted? And what articles or books can I read in order to find arguments for defending my investigation with these data?

2. May 10, 2018

### p1l0t

Don't you need something like 95% (within 5%) to have statistical significance? I've been out of school for a while, but I seem to remember a 5% threshold on the chi-square p-value usually being the goal.

EDIT: I was probably thinking of the two-tailed test. Sorry for the wiki link and not an actual reference https://en.m.wikipedia.org/wiki/Statistical_significance

3. May 10, 2018

### phyzguy

It depends. Some quantities are determined to within a few percent or less. Other times you are happy to get within an order of magnitude (meaning it might be 10 times smaller than what you measured or it might be 10 times larger). You need to study the data set you are using to understand what the uncertainties are. In measuring exoplanet masses by the radial velocity method, you are looking at very tiny changes in the motion of the parent star caused by the planet, which is much less massive. So it doesn't surprise me that the errors are large. You might say, "What good is a measurement with a 300% error?", but it gives you an idea whether the planet is like the Earth or like Jupiter.
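A minimal sketch of that last point, with made-up numbers: Earth and Jupiter differ in mass by a factor of roughly 300, so even a 300% uncertainty on a Jupiter-like radial-velocity mass estimate cannot make the planet look Earth-like.

```python
# Illustrative arithmetic only; the measured mass and error are hypothetical.
M_EARTH_IN_MJUP = 1.0 / 317.8   # Earth's mass is about 1/318 of Jupiter's

measured = 0.9                  # hypothetical RV mass estimate [Jupiter masses]
relative_error = 3.0            # the "300%" uncertainty from the catalog

# Even taking the pessimistic lower bound (true mass ~4x smaller than measured),
# the planet is still tens of Earth masses -- clearly a giant, not terrestrial.
lower_bound = measured / (1.0 + relative_error)
print(f"lower bound: {lower_bound / M_EARTH_IN_MJUP:.0f} Earth masses")
```

So a measurement with a 300% error still carries real information: it places the planet in the right mass class, even though it cannot pin down the mass precisely.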

4. May 11, 2018

### Fabioonier

Hi p1l0t. I guess what you are talking about is something you define in a goodness-of-fit test, but that's not related to the uncertainties in the data. What I need to know is what error percentage is acceptable in data for astronomy investigations.

5. May 11, 2018

### Fabioonier

Hi, phyzguy.
I understand your point. I wrote an article and it was rejected because the referee said that the data in the catalogue may be very biased. So, what uncertainty percentage could I accept in the data I use? I'm thinking of using only data with uncertainties up to 20%. Is that a good criterion, or in astronomy is the error percentage in the data not so important because of the observation methods and instruments? What I mean is: in astronomy, can we use all of the available data, no matter the uncertainty, because they are all we can get?
This is an important question because in experimental physics I can control the experiments and repeat measurements in order to reduce the uncertainties in the data. In astronomy you cannot do that.
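One way to make a 20% cut like the one described above concrete, as a sketch (the rows and the tuple layout are hypothetical, not the actual exoplanet.eu CSV schema):

```python
# Hypothetical catalog rows: (planet name, mass [M_Jup], 1-sigma mass error [M_Jup]).
catalog = [
    ("planet-a", 1.20, 0.10),   # ~8% relative error
    ("planet-b", 0.05, 0.15),   # 300% relative error
    ("planet-c", 3.40, 0.60),   # ~18% relative error
]

MAX_RELATIVE_ERROR = 0.20       # the proposed 20% criterion

def passes_cut(mass, mass_err, max_rel=MAX_RELATIVE_ERROR):
    """Keep a row only if its relative mass uncertainty is at or below the cut."""
    return mass > 0 and mass_err / mass <= max_rel

selected = [name for name, m, dm in catalog if passes_cut(m, dm)]
print(selected)  # ['planet-a', 'planet-c']
```

Whatever threshold you end up choosing, it helps to state it explicitly in the paper and show that your conclusions are stable when the cut is varied.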

6. May 11, 2018

### phyzguy

There is no hard and fast number. Why choose 20% and not 1% or 100%? In some cases 20% is excellent, in other cases it is way too loose. It all depends on what you are trying to do.

7. May 11, 2018

### Chronos

Depends on the method used to acquire the data. Measurement accuracy is a crucial variable in astrometric data, and the knowledge and discipline necessary to achieve high precision may only exist among very select individuals. People without special expertise in a particular method may not even be aware of factors that introduce uncertainty into the data. The best researchers take great pains to quantify and explain the uncertainty inherent to each measurement and what they did to manage it. Only then can the data be considered reliable. While this is incredibly boring for the average reader [and usually looks like gibberish], it is of vital interest to any credible referee during peer review. Non-experts are typically interested in conclusions, whereas the facts leading up to them are the real story.

8. May 11, 2018

### phyzguy

One other point. When the referee says the data may be biased, he/she is probably not referring to the statistical uncertainty in the measurements. You need to understand the difference between systematic errors and statistical errors. The word "bias" usually refers to systematic errors, meaning that the statistical error in the measurement may be small, but the measurement is systematically shifted so that it comes out too small or too large.
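A toy illustration of that distinction, with assumed numbers: measurements can have tiny statistical scatter and still be systematically off, and averaging more points never removes the bias.

```python
import random

random.seed(0)
true_value = 10.0
bias = 2.0     # hypothetical systematic offset (e.g., an uncorrected calibration error)
sigma = 0.1    # small statistical (random) error per measurement

# Simulate many measurements: the random scatter averages away, the bias does not.
samples = [true_value + bias + random.gauss(0.0, sigma) for _ in range(10_000)]
mean = sum(samples) / len(samples)

print(f"mean = {mean:.3f}")  # near 12.0, not 10.0: precise but inaccurate
```

This is why a referee worried about bias is not reassured by small quoted error bars; the concern is about effects the error bars don't capture.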