How do I state uncertainty when calibrating a flowmeter?

SUMMARY

The discussion focuses on calibrating turbine flowmeters for fuel rigs, specifically on establishing a reliable process that reports uncertainty at a 95% confidence level. The current method takes 20 data points and fits them by least squares, but its outlier handling is ad hoc. Participants suggest adopting Grubbs' outlier test and defining outliers in terms of the sample standard deviation (SD), advocating a statistically sound approach that remains simple enough for shop floor personnel. LabVIEW is used for data capture, plotting, and pass/fail reporting.

PREREQUISITES
  • Understanding of turbine flowmeter calibration processes
  • Familiarity with least squares regression analysis
  • Knowledge of statistical concepts such as standard deviation and outlier detection
  • Experience with LabVIEW for data acquisition and analysis
NEXT STEPS
  • Research Grubbs' outlier detection method for calibration processes
  • Learn how to calculate standard deviation and its application in flowmeter calibration
  • Explore mean squared error as a metric for evaluating calibration accuracy
  • Investigate advanced features of LabVIEW for enhanced data visualization and analysis
USEFUL FOR

This discussion is beneficial for calibration engineers, quality assurance personnel, and anyone involved in the calibration of flowmeters, particularly in the fuel industry, who seeks to improve measurement accuracy and reliability.

robsmith82
Hi,

I'm having a lot of trouble trying to set up a process for calibrating turbine flowmeters for use in fuel rigs.

Basically, I have a calibrated "master" flowmeter in series with the uncalibrated turbine flowmeter. I need a process that will give me the line of best fit across the flowmeter's range, and also state its uncertainty at a given confidence level (probably 95%).

At the moment, our process isn't very good. We take 10 points going up the range and 10 points coming down, then fit a line of best fit using the least squares method, which gives you your gain and offset for calibration. Then we say that if any point is more than 0.5% of full scale from the line, the calibration has failed. We allow for outliers by permitting 2 points between 0.5% and 1%, but not at the top and bottom limits and not consecutive. I think we should be using Grubbs' outlier test instead.
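For reference, the fit-and-tolerance procedure described above can be sketched in a few lines. This is an illustrative Python sketch, not the poster's LabVIEW implementation; the function name, synthetic data, and the 0.5%-of-full-scale tolerance default are assumptions based on the post:

```python
import numpy as np

def calibrate(master, dut, full_scale, tol=0.005):
    """Least-squares fit of master readings against the device under
    test (DUT), flagging points further than tol * full_scale from
    the fitted line."""
    # Fit: master ~ gain * dut + offset
    gain, offset = np.polyfit(dut, master, 1)
    residuals = master - (gain * dut + offset)
    outliers = np.abs(residuals) > tol * full_scale
    return gain, offset, residuals, outliers

# Synthetic example: a meter with true gain 1.02 and offset 0.3
rng = np.random.default_rng(0)
dut = np.linspace(10, 100, 20)
master = 1.02 * dut + 0.3 + rng.normal(0, 0.05, dut.size)
gain, offset, res, out = calibrate(master, dut, full_scale=100.0)
print(f"gain={gain:.4f}, offset={offset:.4f}, outliers={out.sum()}")
```

The pass/fail decision is then just a check on how many points are flagged and where they fall in the range.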

What I think I have to do is find the mean and SD at each point up the range, then use the worst-case SD to calculate the uncertainty. This needs to be carried out by shop floor personnel, so it can't be too complicated. We are using LabVIEW to capture the data, plot the line, and show "pass" or "fail".
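If repeated readings are taken at each setpoint, the 95% uncertainty of the mean at a setpoint can be computed with a Student's t coverage factor (appropriate for small samples). This is a hedged sketch of the standard Type A evaluation, assuming SciPy is available; the function name and example readings are invented for illustration:

```python
import numpy as np
from scipy import stats

def expanded_uncertainty(readings, confidence=0.95):
    """Mean and Type A expanded uncertainty of the mean at the given
    confidence level, using a Student's t coverage factor."""
    x = np.asarray(readings, dtype=float)
    n = x.size
    s = x.std(ddof=1)                  # sample standard deviation
    u = s / np.sqrt(n)                 # standard uncertainty of the mean
    k = stats.t.ppf(0.5 + confidence / 2, df=n - 1)  # coverage factor
    return x.mean(), k * u

mean, U = expanded_uncertainty([10.01, 9.98, 10.02, 10.00, 9.99])
print(f"{mean:.3f} ± {U:.3f} (95 %)")
```

Taking the worst-case SD across all setpoints, as suggested in the post, gives a single conservative uncertainty figure for the whole range.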

Help me out guys!
 
Instead of working with absolute magnitudes (0.5% of full scale from the fitted line), you can find the SD of the sample and then work with multiples of the SD; e.g. define an outlier as any point more than 2*SD from the line. Or you can work with the mean squared error, which accounts for both the bias and the SD.
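Grubbs' test, mentioned earlier in the thread, formalizes this SD-based idea: it compares the most extreme residual (in SD units) against a critical value derived from the t-distribution. A minimal one-pass sketch, assuming SciPy is available; the function name and sample data are invented for illustration:

```python
import numpy as np
from scipy import stats

def grubbs_outlier(residuals, alpha=0.05):
    """One-pass Grubbs' test: returns the index of the most extreme
    residual if it is a significant outlier at level alpha, else None."""
    x = np.asarray(residuals, dtype=float)
    n = x.size
    dev = np.abs(x - x.mean())
    G = dev.max() / x.std(ddof=1)      # test statistic in SD units
    t = stats.t.ppf(1 - alpha / (2 * n), df=n - 2)
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return int(dev.argmax()) if G > G_crit else None

res = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, 5.0, -0.05, 0.1, -0.15]
print(grubbs_outlier(res))  # flags index 6 (the 5.0 reading)
```

The classic Grubbs' test assumes roughly normal residuals and detects one outlier per pass; for multiple suspects it is repeated after removing each flagged point.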
 
