What is the zero error of a measuring instrument?

Zero error in a measuring instrument refers to the systematic error caused by misalignment or incorrect graduation, resulting in measurements that deviate from the actual value. It occurs when the scales of a measuring instrument are not properly leveled at the zero point, leading to discrepancies in readings. Examples include galvanometers and micrometers that require adjustments to ensure accurate zero readings. While zero errors can often be corrected by subtracting the error from measurements, non-uniform scales present a more complex challenge. Understanding and calibrating instruments to eliminate zero error is essential for accurate measurements.
SSG-E
Homework Statement
I cannot come up with a correct definition and explanation for zero error.
Relevant Equations
In the case of vernier callipers:
actual reading = main scale reading + vernier scale reading − (zero error).
Is this correct? (A small numerical sketch of this correction follows my attempted definitions below.)
"The systematic error in a measuring instrument due to non-uniform or wrongly marked graduation due to which a measurement may be less or greater than actual measurement is called zero error of the measuring instrument".

Another one:

Measuring instruments are a combination of two or more scales, and the accuracy of the instrument depends on how these scales sit relative to each other. If the scales are not aligned at the zero point with respect to each other, a zero error occurs. So we can say that:
"The error which occurs in a measuring instrument because its scales are not aligned at the zero point is called zero error."
 
I have no idea of the definition of zero error, but I have a couple of examples of zero discrepancies on my instruments.
I have several moving-coil galvanometers, where the magnetic torque is balanced by the torque from a spring or taut wire. All have an adjustment to the tension in the spring or wire, so that you can get a reading of zero when no current flows. If you haven't checked this for a while, you'd have a zero error.
I have a micrometer that does not read zero when the jaws are closed, so I subtract the zero reading from any reading I take.
Thinking about it now, there is also a barometer and a dial thermometer with such adjustment.

Rulers may have a zero error resulting from the way they are used. You won't do it, but at school one had to remind people not to measure from the end of the ruler, but from the start of the scale.
Rulers with no guard could get damaged and give a zero error.
My steel tape measure has an end piece riveted through a slot, so that the end can move by the thickness of the metal. For an external measurement you pull the end out and measure from its inside face; for an internal measurement you push the end in and measure from its outside face. Without this there could be a small zero error of up to the thickness of the end piece.

Looking at your definitions, the first one sounds ok, except that I would not call non-uniform graduation a zero error. A zero error shifts the whole scale up or down and can be corrected, as you indicate, by subtracting the error. That would not work for a non-uniform scale. For galvanometers we had calibration charts to correct for non-uniformity, but the zero error needed to be adjusted out.
The second one simply doesn't remind me of instruments I know. If such an instrument did rely on the alignment of two scales, then the discrepancy between the zero points would qualify and could be corrected for.
I suppose you could say that of my micrometer, but I don't regard the linear and rotary parts as two scales. They make one scale, with the crude linear part simply keeping track of the number of turns of the rotary part.

As for an explanation: if an instrument doesn't read zero when measuring zero, then that same error should be present in all other readings. If the scale is linear, the error can be corrected by simply subtracting the zero error from each reading. For a non-linear scale, it can be more difficult.
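A small sketch of the two cases described above, with a made-up calibration chart for the non-uniform case (the arrays, function names and numbers are purely illustrative, not from any real instrument):

```python
import numpy as np

def correct_linear(reading, zero_error):
    """Linear scale: the zero error shifts the whole scale, so subtract it."""
    return reading - zero_error

# Hypothetical calibration chart for a non-uniform scale:
# pairs of (indicated reading, true value) obtained against a standard.
indicated = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
true_value = np.array([0.10, 1.05, 2.20, 3.15, 4.30, 5.25])

def correct_nonuniform(reading):
    """Non-uniform scale: look the reading up in the calibration chart."""
    return np.interp(reading, indicated, true_value)

print(correct_linear(2.0, 0.10))   # 1.9 -- one constant fixes every reading
print(correct_nonuniform(2.0))     # 2.2 -- the correction varies along the scale
```

The point of the second function is just that no single subtracted constant can fix a scale whose error changes along its length, which is why a calibration chart is needed there.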
 
