Does the use of error bars pertain to this experiment?

AI Thread Summary
The discussion revolves around the appropriateness of using error bars in an experiment that compares a real test setup with two computer models, yielding three single results: 2.70, 2.71, and 2.80. The main concern is whether the 2.80 result from the second model is "acceptable" relative to the other two. With only one result from each test, traditional statistical interpretations, including error bars, may not apply. The conversation suggests examining the underlying assumptions, since the two similar results may not represent random samples. Ultimately, error bars in this context may not provide a mathematically sound basis for judging whether the 2.80 result is acceptable.
hydronicengr
I'm currently performing a real bench experiment that would essentially validate the results generated by two different computer models.

In other words, there is one (real) test setup, one computer model, and a second, different computer model.

I have ONE result from each of the tests, so in total I have three numbers. The real test setup and one of the computer models are similar to each other at 2.70 and 2.71, respectively. The other computer model is 2.80.

I want to see if the last computer model result of 2.8 is "acceptable" or falls "within range" of the other results.

I was told to use error bars, but I'm not sure if this would be the best approach.
 
hydronicengr said:
I have ONE result from each of the tests, so in total I have three numbers.
I want to see if the last computer model result of 2.8 is "acceptable" or falls "within range" of the other results.

I was told to use error bars, but I'm not sure if this would be the best approach.

If you have only one result from each test, and that result is not itself the average of many replications of the test, then I can't think of any reasonable interpretation of your question that has a convincing mathematical answer.

Perhaps you have a sociological problem, not a mathematical problem. Are you preparing a report, briefing, or article that will be evaluated by other people? If so, you should look at other reports or briefings that they liked and copy whatever methods were used.

Your request to determine whether 2.8 is "acceptable" doesn't define a specific mathematical problem. The best I can do to read your mind is to say that you want to assume the numbers 2.70 and 2.71 are drawn independently from the same Normal distribution, and you want to compute an estimate of the mean and standard deviation of that distribution. Then you want to draw a bar whose center is the estimated mean and whose width is a certain number of the estimated standard deviations. You can certainly do that. It isn't a mathematical proof of anything.
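To make that concrete, here is a minimal sketch in Python of the computation described above. It is purely illustrative: treating two deterministic outputs as independent draws from a Normal distribution is a strong assumption, the half-width of the bar (k below) is an arbitrary choice, and with only two points the estimate proves nothing.

```python
import statistics

# Illustrative only: pretend the two close results are independent draws
# from the same Normal distribution (a strong assumption with n = 2).
results = [2.70, 2.71]   # real bench test and the first computer model
candidate = 2.80         # the second computer model

mean = statistics.mean(results)    # 2.705
sd = statistics.stdev(results)     # sample standard deviation, about 0.0071

k = 2                              # half-width of the error bar, in standard deviations
lower, upper = mean - k * sd, mean + k * sd
print(f"error bar: {mean:.3f} +/- {k} sd  ->  [{lower:.4f}, {upper:.4f}]")
print(f"the candidate lies {(candidate - mean) / sd:.1f} sd from the estimated mean")
```

Whether a result that sits many estimated standard deviations away is "acceptable" is, as noted above, not something this calculation can decide.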

If two of the numbers come from deterministic models, it would require some verbal contortions to explain how they come to be viewed as "random" samples. I suppose you could claim that the decisions made in creating the simulations involve some random choices.
 