Is it acceptable to publish experimental results that agree with a prediction only at the one-standard-deviation level, as confirmation of that prediction?
Why? Have you been rejected for publication with a note about your error analysis?
What is acceptable for publication is whatever the publisher says is acceptable.
If your predicted result is within 1 sd of the mean of your experimental results, then those results would support the model the prediction came from - or any other model that gave similar numbers.
So that would normally be quite acceptable.
It is not normally acceptable to reject the model based on a prediction >1sd from the experimental mean.
Science focuses on rejecting models, not accepting them.
I'm not entirely sure how the calculation is done, except that the standard deviation describes the spread of the data around the sample mean. But if you doubled the standard deviation, wouldn't you risk including results not actually obtained in the experiment?
The standard deviation is the square root of the variance - it is what's usually called the "statistical error".
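A minimal sketch of those definitions, using made-up numbers standing in for repeated measurements (nothing here comes from a real experiment):

```python
import statistics

# Hypothetical repeated measurements of the same quantity (invented numbers).
measurements = [9.8, 10.1, 10.3, 9.9, 10.2, 10.0, 9.7, 10.4]

mean = statistics.mean(measurements)          # sample mean
variance = statistics.variance(measurements)  # sample variance (n - 1 in the denominator)
std_dev = statistics.stdev(measurements)      # standard deviation = sqrt(variance)
std_err = std_dev / len(measurements) ** 0.5  # statistical error on the mean itself

print(f"mean     = {mean:.3f}")
print(f"variance = {variance:.3f}")
print(f"std dev  = {std_dev:.3f}  (spread of individual measurements)")
print(f"std err  = {std_err:.3f}  (how well the mean is known)")
```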
In Poisson statistics, for large numbers of counts, the error on the number of counts in a time interval is the square-root of the number you got. For small counts, you have to go to the equations.
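As a quick sanity check of the sqrt(N) rule, one can simulate a counting experiment (illustrative only; the rate below is made up):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate many counting intervals with a made-up true rate of 400 counts per interval.
true_rate = 400
counts = rng.poisson(lam=true_rate, size=100_000)

print(f"mean of counts         = {counts.mean():.1f}")
print(f"spread (std) of counts = {counts.std():.1f}")
print(f"sqrt(mean)             = {np.sqrt(counts.mean()):.1f}  # ~ the sqrt(N) error for large counts")
```

For small counts the Poisson distribution is noticeably asymmetric, which is why you have to go back to the full formulae rather than quoting sqrt(N).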
As for the rest: google for "hypothesis testing".
The hypothesis being tested - the null hypothesis - is, loosely, that the physical model which gave rise to the prediction "works", i.e. that any difference between the prediction and the measurement is just a statistical fluctuation.
Rejecting that hypothesis is the same as rejecting the model.
If the model predicts a result that turns out to be >2 sd from the experimental mean, then you can reject the model with roughly 95% confidence; if >3 sd, with about 99.7% confidence. The test says nothing about how confident you can be in the model if you don't reject it, since you don't know how many other models these results could support and, like you said, you risk erroneously accepting unrelated results as supporting the model.
Thus a single experiment cannot confirm a theory - only reject it.
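To make the 2 sd / 3 sd numbers concrete, here is a small sketch that converts a deviation measured in standard deviations into a two-sided confidence level, assuming Gaussian errors, using only the standard library:

```python
import math

def two_sided_confidence(n_sigma: float) -> float:
    """Probability that a Gaussian fluctuation stays within +/- n_sigma of the mean."""
    return math.erf(n_sigma / math.sqrt(2))

for n in (1, 2, 3):
    conf = two_sided_confidence(n)
    print(f"{n} sd: confidence ~ {100 * conf:.2f}%  (p-value ~ {1 - conf:.3g})")
```

This reproduces the familiar 68% / 95% / 99.7% figures for 1, 2 and 3 standard deviations.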
Would doing that help in calculating the systematic uncertainty of the final results?
Doing what? A control run?
Control runs are essential to working out the systematic uncertainty in final results. Always do a control run. Your measurements are invalid without one.
If you don't compare your yardstick with a known reliable yard, why should anyone believe any measurements you make with it? Especially considering the stakes if your measurements come out different from what the established models predict.
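To illustrate how a control run feeds into the systematic uncertainty (a sketch only; the "known reference" value and readings below are invented): measure something whose value you already trust, take the offset you find as an estimate of the systematic bias, and let the scatter of the control readings set how well you know that bias.

```python
import statistics

# Hypothetical control run: repeated measurements of a reference whose value is trusted.
reference_value = 100.0  # the "known reliable yard" (invented number)
control_readings = [100.4, 100.6, 100.3, 100.5, 100.7, 100.4]

offset = statistics.mean(control_readings) - reference_value  # estimated systematic bias
offset_uncertainty = statistics.stdev(control_readings) / len(control_readings) ** 0.5

print(f"systematic offset     = {offset:+.2f}")
print(f"uncertainty on offset = {offset_uncertainty:.2f}")
# The offset is subtracted from the real measurements, and its uncertainty is
# carried through as (part of) the systematic uncertainty on the final result.
```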