Rectifying a "conventional" error

In summary, the conversation discusses conventional current, which defines current as the flow of positive charge from positive to negative. The sign convention traces back to Benjamin Franklin, who originally studied static electricity. With the discovery of electrons it became clear that electric current in most materials consists of moving electrons, which travel from negative to positive, and textbooks vary in how they present this. Some posters argue that the "conventional current" language could be dropped to avoid confusing students; others, particularly working electrical engineers, argue that the convention is exactly what keeps calculations and communication consistent.
  • #1
BogMonkey
Rectifying an "conventional error"

In textbooks I always see "by convention it is said to flow from positive to negative, but in reality it's from negative to positive", and that's what I always end up explaining whenever a test asks me how current or batteries work. Do modern textbooks say this only to warn you that you may come across older texts that describe it the other way, or do they still teach this "conventional" idea of how current flows?
 
  • #2


I've only ever heard negative to positive.
 
  • #3


Why would it tell you positive to negative? That doesn't make any sense to me.
 
  • #4


You guys have never heard of conventional current? By convention, current is defined as the flow of positive charges. Practically speaking, it doesn't really make a difference, because there are very few situations in which you would be able to tell the difference (observationally) between positive charges flowing in one direction and negative charges flowing in the opposite direction. In fact, you have to go to a certain amount of trouble to create a very specific experimental setup involving magnetic fields if you want to be able to tell the difference. The OP is right: a lot of textbooks use conventional current.
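
The magnetic-field setup being alluded to is presumably the Hall effect, the standard way to determine carrier sign. For a conducting strip of thickness $t$ carrying current $I$ in a perpendicular field $B$, the transverse Hall voltage is

$$V_H = \frac{IB}{nqt},$$

where $n$ is the carrier density and $q$ is the (signed) carrier charge. The polarity of $V_H$ flips with the sign of $q$, so this measurement really does distinguish positive carriers moving one way from negative carriers moving the other, which no ordinary circuit measurement can.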
 
  • #5


Benjamin Franklin invented the use of "positive" and "negative" for electric charge, while studying electricity around 1750, specifically what we now know as "static electricity", with charged objects that attract and repel each other. He had a theory of an "electric fluid" that is normally distributed evenly among all objects. When you rub certain objects together, it transfers some of this fluid from one object to the other. One object now has an excess of electric fluid, which he called "positively charged." The other has a deficit of electric fluid, which he called "negatively charged." But he couldn't actually see which way the electric fluid flowed, so he had to guess, and he randomly designated one group of objects as "positive" and the others "negative." Other people adopted Franklin's convention for "positive" and "negative."

In the late 1800s, electrons were discovered, and it became clear that electric current in most materials actually consists of moving electrons. But in terms of the long-established convention, electrons flow from "negative" to "positive."

People were so used to current flowing from certain objects (positive) to others (negative) that it was easier to call electrons negatively charged, and continue talking about the direction of current in terms of moving positive charges, than to reverse the direction of current and re-label all positively charged objects as negative and vice versa. People talk about the actual direction of electron flow only when it really matters, which isn't very often.
 
  • #6


jtbell said:
People were so used to current flowing from certain objects (positive) to others (negative) that it was easier to call electrons negatively charged, and continue talking about the direction of current in terms of moving positive charges, than to reverse the direction of current and re-label all positively charged objects as negative and vice versa. People talk about the actual direction of electron flow only when it really matters, which isn't very often.

The positive and negative labels are probably even further entrenched (in common culture) now, but I don't think any presumption still exists that current should flow positive to negative (it wouldn't make sense even naively, now that electrons are well known to be negative), nor is there any technical advantage to either choice of labelling (since we now know that both positive and negative charge carriers actually exist).

So I think textbooks really can drop all mention of "conventional current" (and in so doing, avoid a source of confusion). Explain the hand rules in terms of directions of net signed charge velocity, rather than just of "current" (with the conventional/real ambiguity).
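
One way to make that suggestion concrete is the Lorentz force law,

$$\vec{F} = q\,\vec{v}\times\vec{B}.$$

Swapping a positive carrier of velocity $\vec{v}$ for a negative carrier of velocity $-\vec{v}$ flips both $q$ and $\vec{v}$, so $\vec{F}$, and hence the hand-rule result, is unchanged. Stating the rules in terms of the signed quantity $q\vec{v}$ (equivalently, the current density $\vec{J}$) therefore gives the same answer whichever carrier picture a student has in mind.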

Really, I think this is a common problem: textbooks reciting historic misconceptions rather than finding a direct approach to modern ideas. For example, Bondi criticises the teaching of the Michelson-Morley experiment, not only because it is historically dubious but because, from a post-SR worldview, we should define distance in terms of light speed (and atomic clocks), which makes it impossible by definition for the M-M experiment to give a non-null measurement. So, unless we're deliberately teaching history instead of science, why waste students' time by first trying to build in their heads a conceptual foundation that is known to be utterly incompatible with the modern understanding we ultimately intend them to develop?
 
  • #7


In electrical engineering we normally use the conventional sense: current is taken to consist of positive charges flowing from the positive to the negative terminal.
In a semiconductor there is movement of both positive and negative charges, so it is easier to use a single convention than to reason separately about two kinds of charge flowing in opposite directions.
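
A minimal illustration, using the standard two-carrier drift expression for a semiconductor with hole density $p$ and electron density $n$:

$$\vec{J} = q\,p\,\vec{v}_p + (-q)\,n\,\vec{v}_n = q\,(p\,\vec{v}_p - n\,\vec{v}_n), \qquad q = +e.$$

In an applied field the holes drift along $\vec{E}$ and the electrons drift against it, so both terms point in the same (conventional) direction and simply add; a single current arrow captures both contributions at once.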
 
  • #8


cesiumfrog said:
So I think textbooks really can drop all mention of "conventional current" (and in so doing, avoid a source of confusion). Explain the hand rules in terms of directions of net signed charge velocity, rather than just of "current" (with the conventional/real ambiguity).


So your plan is that if something takes a bit of thought, then one should never mention it so students never need to think or understand it? Well, like most truly bad ideas this has been done before. Some time ago there was a fad among textbook writers to define current in the opposite direction, namely in the direction of the flow of electrons. And then as icing on the cake they changed all the "hand rules" from right hand rules to left hand rules. It was idiotic and confusing beyond conception. I was a victim of some of this nonsense, but luckily I recovered over time.

Today most textbooks have come to their senses and use the conventional conventions.

Let's face it, sure, most electrical currents in the vast scope of electrical devices travel in copper conductors, which means that the charge carriers are electrons. But electrons certainly aren't the ONLY current carriers: positive charges carry currents in ionized gases and in semiconductors. And generally speaking, the details of what exactly is carrying the current aren't the point of a current convention. I don't know why people seem to think that naming conventions somehow have to follow what happens in metals.

The point of a current convention is to keep your head straight when calculating physics and electrical phenomena. This is a math thing, and students NEED to understand how conventions are applied in math. Now pure mathematicians LOVE to stand up and say "I can define this any way I want!" and then often proceed to do so, to the utter confusion of most students and often themselves. Another thing mathematicians love to do is change the naming conventions of variables. They are so proud that they can work a problem with obscure letters substituted for the ones the rest of the world uses. Confusion reigns supreme. One of my favorites was when a mathematician I know wrote a fractal program for a PC. But being a mathematician, he decided to use the "escape" key instead of the "return" key. It was a wonderful program, but nobody could actually use it... including the guy who wrote it!

The point is that conventions, like all standards, make things easier, not harder, to understand. Current conventions, dots on transformers, and the like allow us all to communicate with one another clearly, without pomposity and ego. Hopefully, one day students will need that knowledge!
 
  • #9


Take it from a working electrical engineer: the easy way is to talk about current as the flow of positive charge. In the rare cases when you need to discuss electrons, you just make it part of your psyche that electrons flow the opposite way because they're negative; it becomes second nature. Any other approach will lead to mass confusion.

Edit: If you'd like to climb aboard the confusion train at the next stop, just pretend that the antimatter part of the universe got it right. Over there, the positrons actually go the right way. So Ben Franklin screwed it up, but 20th-century physicists blew their one opportunity to set it straight by declaring that *we're* all made of the *true* antimatter!
 

