Elementary Charge: Explaining the Integer Multiple of e

In summary: the magnitude of the charge of the electron or proton is a natural unit of charge, known as the elementary charge e. The charge on any macroscopic body is always either zero or an integer multiple (negative or positive) of e. It may seem puzzling that a value like 1 coulomb is apparently not an exact multiple of e, but the coulomb is an arbitrary unit based on practical measurements, and the value of e is not known accurately enough to say whether or not 1 coulomb is an integer multiple of it. The question is further complicated by the historical development of units and by the experimental uncertainty in the fundamental constants.
  • #1
matangi7
My textbook states:
"The magnitude of charge of the electron or proton is a natural unit of charge."
and then has an explanation that follows. It states, "...The charge on any macroscopic body is always either zero or an integer multiple (negative or positive) of the electron charge."
Here is what I understand:
This "electron charge" that is being referred to is the elementary charge e.
e is equivalent to 1.602 x 10^(-19) coulombs.
My question: If the charge on a body has to be an integer multiple of the elementary charge e, then why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?

Please let me know if something I have understood is wrong. Thanks!
 
  • #2
I think that if we were to look at everything literally in terms of integer multiples of such a small charge, it would be quite a pain: keeping exact counts when comparing 1 coulomb with ##1.602\times10^{-19}## coulombs becomes unmanageable.

While the actual charge of an object is a multiple of ##1.602\times10^{-19}## coulombs, we can't keep writing every charge in the universe as a multiple of the elementary charge, since it is simply impractical.
 
  • #3
matangi7 said:
1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?
The coulomb has an arbitrary value, based on the flow of 1 ampere for 1 second. The ampere is in turn defined by the force on a wire under certain conditions. Definitions on top of definitions: a very practically based system of units. This is the reality of measuring things.
But no one ever hoped that the SI unit of charge would correspond to an integer number of electronic charges. For a start, how would you ever hope to measure the 'remainder' that you propose?
Edit: If the electronic charge happened to be a bit bigger (in a totally different world, of course), then I guess scientists would have defined the coulomb to be a measurable, countable integer number of electronic charges. All the SI units are arbitrary and would ideally be based on convenient methods of independently reproducing a standard from scratch. But the kg is still based on a standard 1 kg platinum-iridium cylinder stored in Paris. Not very satisfactory, perhaps.
 
  • #4
First, we don't know the electron charge well enough to say that 1 coulomb cannot be an integer number of charges. 1 C could be anywhere between roughly 6241509087000000000 and 6241509160000000000 elementary charges.

Second, so what if exactly 1 C to twenty decimal places is not realizable in nature? It's not realizable now for technical reasons, and that doesn't mean it's not useful.
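The point above can be sketched numerically. A minimal Python sketch, assuming (as an illustration) the CODATA 2014 value and uncertainty for e that is quoted later in this thread:

```python
# Sketch: how uncertain is the count of elementary charges in 1 C?
# Assumes the CODATA 2014 value e = 1.6021766208(98)e-19 C.
e = 1.6021766208e-19      # coulombs
sigma = 0.0000000098e-19  # one-standard-deviation uncertainty

n = 1.0 / e               # ~6.24e18 elementary charges per coulomb
spread = n * sigma / e    # propagated uncertainty in that count

print(f"count ~ {n:.6e}, uncertain by ~ {spread:.1e} whole charges")
```

The uncertainty in the count comes out at tens of billions of whole charges, so the question of whether 1 C "is" an integer multiple of e is experimentally undecidable.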
 
  • #5
You have to look at the problem in historical perspective. What do you think was known first: the force between charged bodies, or the electron and its properties? While studying charged bodies and the forces between them, we arbitrarily decided that 1 coulomb should be the unit of charge. Even that was an afterthought! Earlier scientists had defined the esu as the unit of charge: the charge that repels an identical charge with a force of one dyne when kept at a distance of one centimeter. The ratio of these two historical units need not even be a rational number.

After the discovery of the electron, and knowing how the properties of materials relate to the electron and its charge, we now know that all charges are actually multiples of this fundamental charge. The system of units we use is arbitrary, and we use different units in different fields; what we require is just a conversion formula to a reasonable degree of accuracy. In certain fields of research we do use the electronic charge as the unit of charge, and there is a natural system of units in which ħ and c are taken as 1 along with the electronic charge as the unit of charge.

Your query is very genuine: one coulomb must equal some whole number of electronic charges. But that number has 19 digits, and how many of them are significant depends on the experimental determination of the fundamental charge. So practically speaking it is a natural number only with its last digits written as zeroes.

A parallel situation is that of the Avogadro number. Its very definition tells us that it should be a natural number. Yet if you search the literature for the experimental methods that determine the Avogadro number, you will find that its determination depends on the uncertainty in the measured masses of atoms and molecules. As far as I know it is 6.022...×10^23. So what do you think: is it a natural number or not?
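The Avogadro-number question can be put in numbers too. A sketch, assuming for illustration the CODATA 2014 value N_A = 6.022140857(74)×10^23 mol⁻¹:

```python
# Sketch: is the measured Avogadro number "a natural number"?
# With a measured value the question is moot: the uncertainty band
# contains on the order of 1e16 candidate integers.
# Assumes the CODATA 2014 value N_A = 6.022140857(74)e23 per mole.
N_A = 6.022140857e23
sigma = 0.000000074e23        # one-standard-deviation uncertainty

candidates = 2 * sigma        # whole numbers within +/- one sigma
print(f"~{candidates:.1e} whole numbers are consistent with the measurement")
```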
 
  • #6
Let'sthink said:
A parallel situation is that of the Avogadro number. Its very definition tells us that it should be a natural number. Yet its experimental determination depends on the uncertainty in the measured masses of atoms and molecules. As far as I know it is 6.022...×10^23. So what do you think: is it a natural number or not?
Historically, yes, but in the most recent revision of the SI, the Avogadro number is defined to be an exact value, as are h, e, and kB.
http://iopscience.iop.org/article/10.1088/1681-7575/aa950a/pdf
https://www.bipm.org/utils/en/pdf/CIPM/CIPM2017-EN.pdf?page=23
 
  • #7
matangi7 said:
My question: If the charge on a body has to be an integer multiple of the elementary charge e, then why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?

1 coulomb is a rounded-off number. It doesn't imply "1.00000000000000000000000000000000000000000000" coulombs with that kind of accuracy.

This is another reason why we try to impress upon students to understand the significance of significant figures in General Physics classes.

Zz.
 
  • #8
matangi7 said:
This "electron charge" that is being referred to is the elementary charge e.
e is equivalent to 1.602 x 10^(-19) coulombs.
The current best-known value is actually (1.6021766208 ± 0.0000000098) x 10^(-19) C.
matangi7 said:
why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?
Assuming for the sake of argument that the value I gave above is really exact (± 0), then it's indeed not possible to have a value of exactly 1 C. The closest values we can get are 6241509125883257926e = 0.99999999999999999991734964608 C and 6241509125883257927e = 1.00000000000000000007756730816 C. However, with current measurement technology, there is no way we can distinguish experimentally between these values and exactly 1 C. If we could, then we would know the value of e to several more decimal places than we do now.
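The bracketing above can be reproduced with exact rational arithmetic. A minimal sketch, assuming (as the post does, for the sake of argument) that the quoted value of e is exact:

```python
from fractions import Fraction

# Treat e = 1.6021766208e-19 C as exact for the sake of argument.
e = Fraction(16021766208, 10**29)

n = int(Fraction(1) / e)   # floor: largest count n with n*e < 1 C
below = n * e              # just under 1 C
above = (n + 1) * e        # just over 1 C

print(n, float(1 - below), float(above - 1))
```

Both remainders come out smaller than e itself (a few times 10⁻²⁰ C), far below anything current measurement technology can resolve.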
 
  • #9
matangi7 said:
My question: If the charge on a body has to be an integer multiple of the elementary charge e, then why is it possible to have values like 1 coulomb, when 1 is not divisible by 1.602 x 10^(-19) without having a remainder/ decimal/ non-integer?

Let's try an analogy: say for the sake of argument that a tennis ball has a mass of 0.05 kg. Now that could really be 0.04996 kg, or maybe 0.0533 kg, but it doesn't really matter: you can only have an integer number of tennis balls in your possession. Unless, of course, you slice one open, and then you could have some fractional part... but that's because tennis balls are divisible, i.e. in a physics context they are not elementary. If we keep it simple and forego any slicing, the total mass of your tennis ball collection isn't really your concern. You just want to know how many tennis balls you have.

An electron, however, is not divisible: it's elementary and fundamental, and it can't be sliced up. You can certainly deal with the (very small) charge in coulombs of a single electron if you wish, and conceptually slice things up that way, but for the "total charge of a macroscopic body" this again isn't really your concern. You just want to know how much charge you have.

Protons are a different story, because they are not fundamental particles. A proton is made up of three quarks: two up quarks with charge +2/3 e and one down quark with charge −1/3 e, summing to +e. But the charge on a macroscopic body is far more likely to be carried by electrons, since these are much easier to remove from their atoms than a proton would be. Much, much easier, in fact...
 
  • #10
Vagn said:
Historically, yes but in the most recent revision to the SI, the Avogadro number is to be an exact value, as are h, e, and kB.
http://iopscience.iop.org/article/10.1088/1681-7575/aa950a/pdf
https://www.bipm.org/utils/en/pdf/CIPM/CIPM2017-EN.pdf?page=23
I have looked into the quoted reference. I could not see a value for e, but one of the values given for N was:

[equation image: metaa950aieqn018.gif] mol−1

If it is exact, then what is that (18) there?
 

  • #11
matangi7 said:
"The magnitude of charge of the electron or proton is a natural unit of charge."
The fact that it is 'natural' (anyone in the Universe could use the same basic unit) doesn't make it useful or practical. If you want a 'natural' unit that is, at the same time, useful and practical, then use the wavelength of a transition in a common and well-behaved (gaseous) element. Hydrogen does this fine as a basis for length.
 
  • #14
Let'sthink said:
So experimentally at least it is not a definite number. Because it is so large it is a natural number which we must always take the same.
It (Avogadro's number) is not determined well enough to make the question of whether it is a natural number meaningful or relevant.

To the best of my knowledge, Avogadro's number is currently (up until November of this year) defined as the number of atoms in a 12 gram lump of Carbon-12. The gram is, in turn, defined in terms of the mass of a particular hunk of Platinum-Iridium alloy stored in a vault in France.

There is no reason to suppose that an ideal realization of this definition must produce a result which is a natural number. [And of course, no actual realization comes anywhere near deciding the matter]
 

1. What is elementary charge?

Elementary charge, denoted by the symbol e, is the magnitude of the electric charge carried by a single proton or electron. It is a fundamental physical constant, approximately 1.602 x 10^-19 coulombs in SI units.

2. Why is the charge on a body always an integer multiple of e?

Electric charge is quantized: it exists only in discrete, integer multiples of the elementary charge. This has been observed experimentally and is a fundamental property of nature.

3. How is the value of e determined?

The value of e is determined through various experiments, such as the Millikan oil-drop experiment and measurements based on Faraday's laws of electrolysis. These experiments measure the charge on individual particles (or the charge per mole of ions) and show that e is approximately 1.602 x 10^-19 coulombs.
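As a toy illustration of the Millikan-style analysis (not his actual apparatus or data; the droplet charges below are invented for the example), one can search for the largest unit of which all measured charges are near-integer multiples:

```python
# Toy Millikan-style analysis: find the largest unit u such that every
# "measured" droplet charge is close to an integer multiple of u.
# The values below are invented for illustration (units of 1e-19 C).
charges = [3.204, 4.807, 8.011, 1.602, 6.409]

def common_unit(values, tol=0.02, max_div=100):
    smallest = min(values)
    for k in range(1, max_div + 1):
        u = smallest / k          # try dividing the smallest charge into k parts
        if all(abs(v / u - round(v / u)) < tol for v in values):
            return u              # every value is near a multiple of u
    return None

e_est = common_unit(charges)
print(f"estimated unit: {e_est:.3f} x 1e-19 C")
```

With these made-up values the recovered unit is about 1.602, mirroring how quantization lets a common divisor emerge from noisy charge measurements.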

4. Can the elementary charge change?

No, the elementary charge is a fundamental constant and is not known to change. Some speculative theories allow fundamental constants to vary, but no such variation has been observed experimentally.

5. How is the concept of elementary charge related to atomic structure?

The elementary charge is closely related to atomic structure as it is the charge carried by individual particles within an atom, such as protons and electrons. The arrangement and movement of these particles determine the overall charge and behavior of an atom.
