What are the typical uncertainties of data in Astrophysics?

In summary: There is no single acceptable error percentage in astronomical data. Some quantities are known to within a few percent, while for others an order-of-magnitude estimate is all that can be achieved, so the appropriate threshold depends entirely on the scientific question being asked. What matters is understanding the uncertainties of the data set being used, and in particular distinguishing statistical errors from systematic errors (bias). This is an important difference from experimental physics, where experiments can be controlled and repeated to reduce uncertainties; in observational astronomy that is usually not possible.
  • #1
Fabioonier
Hello, everybody.
I'm conducting an investigation in planetary sciences, specifically in exoplanet detection by the radial velocity method, and I'm stuck because I need to know how to defend the data that I'm using.
I'm using the data from http://exoplanet.eu/catalog/ and I have found some data with errors in the mass of about 300%!
In astronomical research, what are the typical uncertainties (errors) permitted? And what articles or books can I read to find arguments for defending my investigation with these data?
Thanks for your help.
 
  • #2
Don't you need something like 95% confidence (within 5%) to have statistical significance? I've been out of school for a while, but I seem to remember 5% being the usual goal for a chi-squared test.
EDIT: I was probably thinking of the two-tailed test. Sorry for the wiki link and not an actual reference https://en.m.wikipedia.org/wiki/Statistical_significance
 
  • #3

It depends. Some quantities are determined to within a few percent or less. Other times you are happy to get within an order of magnitude (meaning it might be 10 times smaller than what you measured or it might be 10 times larger). You need to study the data set you are using to understand what the uncertainties are. In measuring exoplanet masses by the radial velocity method, you are looking at very tiny changes in the motion of the parent star caused by the planet, which is much less massive. So it doesn't surprise me that the errors are large. You might say, "What good is a measurement with a 300% error?", but it gives you an idea whether the planet is like the Earth or like Jupiter.
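To make that concrete, here is a minimal sketch, with made-up but representative numbers, of how the minimum planet mass and its relative error follow from the measured radial-velocity semi-amplitude, period, and stellar mass (assuming m_p << M_* and a circular orbit):

```python
# Minimal sketch of how m sin i and its uncertainty follow from RV quantities.
# Assumes m_p << M_* and e = 0; the input values below are illustrative only.
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
M_jup = 1.898e27     # Jupiter mass [kg]

# Hypothetical measurements with 1-sigma uncertainties
K,  sig_K  = 55.0, 5.0                  # RV semi-amplitude [m/s]
P,  sig_P  = 4.2 * 86400, 60.0          # orbital period [s]
Ms, sig_Ms = 1.0 * M_sun, 0.1 * M_sun   # stellar mass [kg]

# m_p sin i = K * (P / (2*pi*G))**(1/3) * Ms**(2/3)   (m_p << M_*, e = 0)
m_sini = K * (P / (2 * np.pi * G))**(1/3) * Ms**(2/3)

# First-order propagation of the relative uncertainties
rel = np.sqrt((sig_K / K)**2 + (sig_P / P / 3)**2 + (2 * sig_Ms / Ms / 3)**2)
print(f"m sin i = {m_sini / M_jup:.2f} +/- {m_sini * rel / M_jup:.2f} M_Jup "
      f"({100 * rel:.0f}% relative error)")
```

Note that even with a clean 10% RV amplitude measurement, the stellar mass uncertainty already dominates the planet-mass error, which is why poorly characterized host stars lead to very large quoted mass errors.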
 
  • #4
Hi p1l0t. I guess what you are talking about is something that you define in the goodness-of-fit test, but it is not related to the uncertainties in the data. What I need to know is what error percentage is acceptable in data for astronomy investigations.
Thanks for your help.
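For concreteness, here is a small sketch of that distinction, with invented numbers: the per-point uncertainties (error bars) belong to the data themselves, while the 5% level is only the threshold chosen for a goodness-of-fit test run afterwards.

```python
# Toy example: the data errors sigma_i enter the chi-squared statistic,
# while the 5% level is the significance threshold chosen for the test.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])     # measurements
sigma = np.array([0.3, 0.3, 0.4, 0.3, 0.5])  # per-point uncertainties (the data errors)

model = 2.0 * x                               # some model to test, y = 2x
chi2 = np.sum(((y - model) / sigma) ** 2)     # chi-squared uses the data errors
dof = len(y)                                  # no fitted parameters in this toy model
p_value = stats.chi2.sf(chi2, dof)            # compare p_value with the chosen 5% level

print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```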
 
  • #5
Hi, phyzguy.
I understand your point. I wrote an article and it was rejected because the referee said the data in the catalogue may be very biased. So, what percentage of uncertainty could I accept in the data I use? I'm thinking of using only data with uncertainties of up to 20%. Is that a good criterion, or is the error percentage in astronomy data less important because of the observation methods and instruments? In other words, can we use all of the available data in astronomy (no matter the uncertainty) because it is all we can get?
This is an important question because in experimental physics I can control the experiments and repeat measurements in order to reduce the uncertainties in the data. In astronomy you cannot do that.
Thanks for your help.
 
  • #6

There is no hard and fast number. Why choose 20% and not 1% or 100%? In some cases 20% is excellent, in other cases it is way too loose. It all depends on what you are trying to do.
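For what it's worth, if you do settle on a cut like the 20% mentioned above, it is easy to apply and to report explicitly. A minimal sketch, assuming the catalogue has been exported to CSV (the column names below follow the exoplanet.eu download format, but check them against your own copy of the file):

```python
# Sketch of a relative-uncertainty cut on planet masses from a catalogue export.
import pandas as pd

df = pd.read_csv("exoplanet_catalog.csv")  # hypothetical local export of exoplanet.eu

# Take the larger of the two quoted errors as a conservative estimate
err = df[["mass_error_min", "mass_error_max"]].abs().max(axis=1)
rel_err = err / df["mass"]

threshold = 0.20  # the 20% cut proposed above; adjust to your science case
clean = df[rel_err <= threshold]
print(f"{len(clean)} of {len(df)} planets pass the {threshold:.0%} mass-uncertainty cut")
```

Whatever threshold you choose, stating it explicitly and showing how the results change when it is varied is usually more convincing to a referee than the number itself.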
 
  • #7
It depends on the method used to acquire the data. Measurement accuracy is a crucial variable in astrometric data, and the knowledge and discipline needed to achieve high precision may exist only among a select few. People without special expertise in a particular method may not even be aware of the factors that introduce uncertainty into the data. The best researchers take great pains to quantify and explain the uncertainty inherent in each measurement and what they did to manage it. Only then can the data be considered reliable. While this is incredibly boring for the average reader [and usually looks like gibberish], it is of vital interest to any credible referee during peer review. Non-experts are typically interested in conclusions, whereas the facts leading up to them are the real story.
 
  • #8
Fabioonier said:
Hi, phyzguy.
I understand your point. I wrote an article and it was rejected because the referee said the data in the catalogue may be very biased.

One other point. When the referee says the data may be biased, he/she is probably not referring to the statistical uncertainty in the measurements. You need to understand the difference between systematic errors and statistical errors. The word "bias" usually refers to systematic errors, meaning that the statistical error in a measurement may be small while the measurement itself is systematically shifted to be too small or too large.
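A toy illustration of that distinction, with invented numbers: both data sets below have the same small statistical scatter, but one carries a 30% systematic bias, and averaging more points never removes the offset.

```python
# Statistical vs. systematic error: averaging reduces the former, not the latter.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

unbiased = true_value + rng.normal(0.0, 0.5, size=1000)         # statistical error only
biased   = true_value * 1.3 + rng.normal(0.0, 0.5, size=1000)   # +30% systematic bias

for name, data in [("unbiased", unbiased), ("biased", biased)]:
    mean = data.mean()
    stat_err = data.std(ddof=1) / np.sqrt(len(data))  # standard error of the mean
    print(f"{name:9s}: {mean:6.2f} +/- {stat_err:.2f} (true value {true_value})")
```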
 

1. What is the source of uncertainty in astrophysics data?

The main sources of uncertainty in astrophysics data are the limitations of the instruments used to collect the data. These instruments have finite resolution and sensitivity, which can introduce errors into the measurements.

2. How do scientists account for uncertainties in astrophysics data?

Scientists use statistical methods to estimate and account for uncertainties in astrophysics data. This involves analyzing multiple data points and determining the confidence interval or error bars for each measurement.
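As a simple illustration (with made-up numbers), repeated measurements with different error bars can be combined with an inverse-variance weighted mean, whose uncertainty is smaller than that of any single measurement:

```python
# Inverse-variance weighted mean of repeated measurements with different error bars.
import numpy as np

values = np.array([4.8, 5.3, 5.0, 5.6])   # independent measurements of the same quantity
sigmas = np.array([0.2, 0.4, 0.1, 0.5])   # their 1-sigma uncertainties

weights = 1.0 / sigmas**2
mean = np.sum(weights * values) / np.sum(weights)
err = 1.0 / np.sqrt(np.sum(weights))       # uncertainty of the weighted mean

print(f"combined value: {mean:.2f} +/- {err:.2f}")
```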

3. Can uncertainties in astrophysics data affect the accuracy of scientific conclusions?

Yes, uncertainties in astrophysics data can significantly affect the accuracy of scientific conclusions. The larger the uncertainty, the less certain we can be about our conclusions and the more cautious we must be in our interpretations.

4. Are there different types of uncertainties in astrophysics data?

Yes, there are two main types of uncertainties in astrophysics data: systematic and random. Systematic uncertainties are due to the limitations of the instruments or methods used, while random uncertainties arise from statistical fluctuations in the data.

5. How do scientists minimize uncertainties in astrophysics data?

Scientists use various techniques to minimize uncertainties in astrophysics data, such as improving the sensitivity and resolution of instruments, collecting more data points, and using advanced statistical methods. Collaborative studies and independent verification of results can also help reduce uncertainties.
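For example, for independent measurements with purely random errors, the standard error of the mean shrinks as 1/sqrt(N) as more data points are collected, as the short sketch below illustrates with simulated numbers:

```python
# Standard error of the mean scales as 1/sqrt(N) for independent random errors.
import numpy as np

rng = np.random.default_rng(1)
sigma_single = 1.0  # scatter of a single measurement (arbitrary units)

for n in (10, 100, 1000, 10000):
    sample = rng.normal(0.0, sigma_single, size=n)
    print(f"N = {n:5d}: standard error of the mean = {sample.std(ddof=1) / np.sqrt(n):.4f}")
```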
