icekin
I've got a power cord whose Earth-wire resistance I'm testing. I've used two different test machines, a MEM ELEC-3M and a Bio-Tek 601 Pro. The 601 can use two different test currents, 10 A and 1 A; the 3M uses a single test current of slightly under 1 A.
The pass criterion is a resistance of less than 0.2 ohm between the two ends of the Earth wire; the wire is 3 metres long. I normally use the 3M for testing. Occasionally a wire measures above 0.2 ohm, and I declare it a fail. But recently, a colleague showed me that by sending 10 A through a failed wire on the 601, I could somehow "fix" it. He explained that a large current would somehow free the electrons and make the wire more conductive, reducing the resistance. I was sceptical, but I've tested his theory and it's true: when I re-check the failed wire using the 10 A current, the recorded resistance is lower, and when I then bring the wire back to the 3M and test at 1 A, the resistance is now well under 0.2 ohm.
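For reference, here is a rough sanity check of what the copper alone should measure, written as a short Python sketch. The 0.75 mm² cross-section is my assumption, since I don't know the exact gauge of this cord:

```python
# Rough sanity check of what the bare conductor should measure,
# assuming (my assumption, not from any spec) a 0.75 mm^2 copper core.
RHO_CU = 1.68e-8   # ohm*m, resistivity of copper at ~20 C
LENGTH = 3.0       # m, length of the cord from the post
AREA = 0.75e-6     # m^2, assumed cross-sectional area

r_expected = RHO_CU * LENGTH / AREA
print(f"expected resistance: {r_expected:.3f} ohm")  # ~0.067 ohm
```

If that estimate is anywhere near right, a wire that reads above 0.2 ohm is measuring considerably more than the copper alone.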
In addition, if I use the 1 A current on the 601, I get a lower resistance reading than with the 10 A current. I believe these machines, which are basically digital multimeters, simply divide the voltage across the ends of the wire by the current through it to get the measured resistance. If that's so, why the difference in measurement on the same test machine? I'd also appreciate it if someone could tell me whether my colleague's theory that more current fixes the wire is correct.
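To make my understanding of the machines concrete, here is a minimal Python sketch of the division I believe they perform; the voltage readings are hypothetical, only to illustrate how the same wire can report different resistances at different test currents:

```python
def measured_resistance(voltage_v: float, current_a: float) -> float:
    """Ohm's law: the tester divides the sensed voltage by the test current."""
    return voltage_v / current_a

# Hypothetical sense voltages on the same cord (illustrative numbers only):
print(measured_resistance(0.21, 1.0))   # 1 A test  -> 0.21 ohm
print(measured_resistance(2.30, 10.0))  # 10 A test -> 0.23 ohm
```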