Elementary Charge: Explaining the Integer Multiple of e

AI Thread Summary
The discussion centers on the concept of elementary charge, denoted as e, which is approximately 1.602 x 10^(-19) coulombs. It addresses the apparent contradiction of macroscopic charges, such as 1 coulomb, being integer multiples of e, given that 1 is not divisible by e without a remainder. Participants explain that while the elementary charge is fundamental and indivisible, the coulomb is a practical unit defined by historical measurements, making it an approximation rather than an exact integer multiple. The conversation highlights the arbitrary nature of SI units and the challenges in measuring and expressing charge at such small scales. Ultimately, the relationship between macroscopic charge and elementary charge reflects both theoretical and practical considerations in physics.
matangi7
My textbook states:
"The magnitude of charge of the electron or proton is a natural unit of charge."
and then has an explanation that follows. It states, "...The charge on any macroscopic body is always either zero or an integer multiple (negative or positive) of the electron charge."
Here is what I understand:
This "electron charge" that is being referred to is the elementary charge e.
e is equivalent to 1.602 x 10^(-19) coulombs.
My question: If the charge on a body has to be an integer multiple of the elementary charge e, then why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?

Please let me know if something I have understood is wrong. Thanks!
 
I think that if we were to look at everything literally in terms of perfect multiples of such a small charge, it would be quite a pain. Keeping track of exact charge values when comparing 1 coulomb with ##1.602\times10^{-19}## coulombs becomes too difficult to maintain.

While the actual charge of an object might be a multiple of ##1.602\times10^{-19}## coulombs, we can't keep writing all the charges in the universe as multiples of the elementary charge, since that is simply impractical.
 
matangi7 said:
1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?
The coulomb has an arbitrary value, based on the flow of 1 ampere for 1 second. The ampere is, in turn, based on the force on a wire under certain conditions. Definitions on top of definitions: a very practically based system of units. This is the reality of measuring things.
But no one ever hoped that the SI unit of charge would correspond to an integer number of electronic charges. For a start, how would you ever hope to measure the 'remainder' that you propose?
Edit: If the electronic charge happened to be a bit bigger (in a totally different world, of course), then I guess scientists would have modified the definition of the coulomb to be a measurable, countable integer number of electronic charges. All the SI units are arbitrary; ideally they would be based on convenient methods of independently reproducing a standard from scratch. But the kg is still based on a standard platinum-iridium cylinder stored near Paris. Not very satisfactory, perhaps.
 
First, we don't know the electron charge well enough to say that 1 coulomb cannot be an integer number of charges. 1 C could be anywhere between roughly 6241509087000000000 and 6241509160000000000 elementary charges.

Second, so what if exactly 1 C to twenty decimal places is not realizable in nature? It's not realizable now for technical reasons, and that doesn't mean it's not useful.
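The range quoted above can be checked with a short Python sketch. It assumes the 2014 CODATA value ##e = 1.6021766208(98)\times10^{-19}## C (the figure given later in this thread); pure integer arithmetic keeps all 19 digits of the counts exact:

```python
# How many elementary charges might make up 1 C, given the experimental
# uncertainty in e?  Assumes the 2014 CODATA value e = 1.6021766208(98)e-19 C.
# Integer arithmetic (units of 1e-29 C) avoids floating-point rounding.

E_NUM = 16021766208        # central value of e, in units of 1e-29 C
E_UNC = 98                 # one-sigma uncertainty, in the same units

# The largest plausible e gives the fewest charges in 1 C, and vice versa.
n_low = 10**29 // (E_NUM + E_UNC)
n_high = 10**29 // (E_NUM - E_UNC)

print(f"1 C is between {n_low} and {n_high} elementary charges")
```

Both bounds are 19-digit integers, and the one-sigma window alone spans tens of billions of elementary charges.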
 
You have to look at the problem in historical perspective. What do you think was known first: the force between charged bodies, or the electron and its properties? While studying charged bodies and the forces between them, we arbitrarily decided that 1 coulomb should be the unit of charge. Even that was an afterthought! Earlier, scientists had defined the esu as the unit of charge: the charge that repels an identical charge with a force of one dyne when placed one centimeter away. The ratio of these two historical units need not be a rational number.

After the discovery of the electron, and once the properties of materials were understood in terms of the electron and its charge, we learned that all charges are integer multiples of this fundamental charge. The system of units we use is arbitrary, and we use different units in different fields; what we need is just a conversion formula accurate to a reasonable degree. In some fields of research the electronic charge itself is used as the unit of charge, and there is a natural system of units in which ##\hbar## and ##c## are taken as 1 along with the electronic charge as the unit of charge.

Now, your query is very genuine: 1 coulomb must equal some whole number of electronic charges, with no decimal point. But that number has 19 digits, and how many of them are significant depends on the experimental determination of the fundamental electronic charge. So, practically speaking, it will be a natural number whose last digits we can only write as zeroes.

A parallel situation is the Avogadro number. Its very definition tells us that it should be a natural number. But if you search the literature for the experimental methods that determine the Avogadro number, you will find that its determination depends on the uncertainty in measuring the masses of atoms and molecules. As far as I know it is about ##6.022\times10^{23}##. So what do you think: is it a natural number or not?
 
Let'sthink said:
A parallel situation is the Avogadro number. Its very definition tells us that it should be a natural number. [...] So what do you think: is it a natural number or not?
Historically, yes, but in the most recent revision of the SI, the Avogadro number is defined to be an exact value, as are h, e, and kB.
http://iopscience.iop.org/article/10.1088/1681-7575/aa950a/pdf
https://www.bipm.org/utils/en/pdf/CIPM/CIPM2017-EN.pdf?page=23
 
matangi7 said:
My question: If the charge on a body has to be an integer multiple of the elementary charge e, then why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?

1 Coulomb is a rounded-off number. It doesn't imply "1.00000000000000000000000000000000000000000000" Coulombs with that kind of accuracy.

This is another reason why we try to impress upon students to understand the significance of significant figures in General Physics classes.

Zz.
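To put a number on that rounding slack (a sketch in Python; the value of e is the 2014 CODATA figure quoted elsewhere in the thread, and the three-significant-figure window is an assumption chosen for illustration):

```python
# "1 C" quoted to three significant figures means anything in [0.9995, 1.0005) C.
# Even that small rounding slack spans a vast number of elementary charges.
e = 1.6021766208e-19       # elementary charge in coulombs (2014 CODATA value)

n_min = 0.9995 / e         # electron count at the bottom of the rounding window
n_max = 1.0005 / e         # electron count at the top
print(f"rounding slack: about {n_max - n_min:.2e} elementary charges")
```

The slack is on the order of ##10^{15}## elementary charges, so no stated macroscopic value ever pins down an exact electron count.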
 
matangi7 said:
This "electron charge" that is being referred to is the elementary charge e.
e is equivalent to 1.602 x 10^(-19) coulombs.
The current best-known value is actually (1.6021766208 ± 0.0000000098) x 10^(-19) C.
matangi7 said:
why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?
Assuming for the sake of argument that the value I gave above is really exact (± 0), then it's indeed not possible to have a value of exactly 1 C. The closest values we can get are 6241509125883257926e = 0.99999999999999999991734964608 C and 6241509125883257927e = 1.00000000000000000007756730816 C. However, with current measurement technology, there is no way we can distinguish experimentally between these values and exactly 1 C. If we could, then we would know the value of e to several more decimal places than we do now.
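The two bracketing values above can be reproduced with exact rational arithmetic, for instance with Python's `Fraction` (treating the quoted value of e as exact, as in the argument above):

```python
from fractions import Fraction

# Treat e = 1.6021766208 x 10^-19 C as if it were exact, and bracket 1 C
# between consecutive integer multiples of e with no floating-point error.
e = Fraction(16021766208, 10**29)    # exact rational form of 1.6021766208e-19

n = int(Fraction(1, 1) / e)          # floor of (1 C) / e
below = n * e                        # largest multiple of e that is <= 1 C
above = (n + 1) * e                  # smallest multiple of e that is > 1 C

print(n)                             # → 6241509125883257926
print(float(1 - below), float(above - 1))   # gaps of order 1e-19 C
```

Both neighbours differ from exactly 1 C by less than one part in ##10^{18}##, far below anything measurable.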
 
matangi7 said:
My question: If the charge on a body has to be an integer multiple of the elementary charge e, then why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?

Let's try an analogy: say for the sake of argument that a tennis ball has a mass of 0.05 kg. Now that could really be 0.04996 kg, or maybe 0.0533 kg, but it doesn't really matter. You can only have an integer number of tennis balls in your possession. Unless, of course, you slice one open, and then you could have some fractional part... but that's because tennis balls are divisible, i.e. in a physics context they are not elementary. If we keep it simple and forgo any slicing, it wouldn't matter how much mass is in your tennis ball collection, because that's not your concern. You just want to know how many tennis balls you have.

An electron, however, is not divisible -- it's elementary and fundamental -- it can't be sliced up. You can certainly work with the (very small) charge of an electron in coulombs if you wish, and conceptually slice things up that way, but for the "total charge of a macroscopic body" this is again not necessarily your concern. You just want to know how much charge you have.

Now protons are a different story, because they are not fundamental particles. As it turns out, protons are made up of three quarks, with charges of +2e/3 (the two up quarks) and -e/3 (the down quark), which sum to +e. But the charge on a macroscopic body is much more likely to consist of electrons, since these are much easier to remove from their atoms than a proton would be. Much, much easier, in fact...
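For the record, the standard quark charges are +2e/3 for the up quark and -e/3 for the down quark; a quick sketch with exact fractions shows how they sum for the proton (uud) and the neutron (udd):

```python
from fractions import Fraction

# Quark charges in units of e: up = +2/3, down = -1/3 (standard values).
up, down = Fraction(2, 3), Fraction(-1, 3)

proton = up + up + down      # uud: 2/3 + 2/3 - 1/3 = +1 (one elementary charge)
neutron = up + down + down   # udd: 2/3 - 1/3 - 1/3 = 0 (neutral)
print(proton, neutron)       # → 1 0
```

So even though quarks carry fractional charge, every observable hadron still carries an integer multiple of e.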
 
Vagn said:
Historically, yes but in the most recent revision to the SI, the Avogadro number is to be an exact value, as are h, e, and kB.
http://iopscience.iop.org/article/10.1088/1681-7575/aa950a/pdf
https://www.bipm.org/utils/en/pdf/CIPM/CIPM2017-EN.pdf?page=23
I have looked into the quoted reference. I could not see a value for e, but for N one of the values was [value shown in the attached image] mol⁻¹. If it is exact, then what is that "(18)" there?
 

matangi7 said:
"The magnitude of charge of the electron or proton is a natural unit of charge."
The fact that it is 'natural' (anyone in the Universe could use the same basic unit) doesn't make it useful or practical. If you want a 'natural' unit that is at the same time useful and practical, then use the wavelength of a transition in a common and well-behaved gaseous element. Hydrogen does this fine as a basis for length.
 
Let'sthink said:
So experimentally, at least, it is not a definite number. Because it is so large, it is a natural number which we must always take to be the same.
It (Avogadro's number) is not determined well enough to make the question of whether it is a natural number meaningful or relevant.

To the best of my knowledge, Avogadro's number is currently (up until November of this year) defined as the number of atoms in a 12 gram lump of Carbon-12. The gram is, in turn, defined in terms of the mass of a particular hunk of Platinum-Iridium alloy stored in a vault in France.

There is no reason to suppose that an ideal realization of this definition must produce a result which is a natural number. [And of course, no actual realization comes anywhere near deciding the matter]
 