MIT's Walter Lewin twice surprises the EE professors! (fun) by Cyberkatru Tags: lewin, professors, surprises, walter 
#1
Jan907, 05:00 AM

P: n/a

MIT physicist/astronomer Walter Lewin is quite fun to watch in lecture. In lectures 20 and 16 of his online basic E&M lectures he makes some interesting points. I will tell you about both of them and ask your opinions, but note that the second one I mention (which actually comes first, in lecture 16) is the most interesting and maybe a bit puzzling, so be sure to get through this whole post.

First, from lecture 20: at some point a little past the 8 minute mark of Walter Lewin's basic E&M video lecture #20 he makes the following statement: "Almost every college physics book does this wrong!" He is referring to the use of Kirchhoff's voltage rule for loops in circuits with self-inductors. I think he is right about this. In a way he is denying the assumptions of the so-called "lumped matter discipline" in electrical engineering (with regard to inductors, anyway).

It is all quite entertaining. You can see the video here, starting at about the 8 minute mark of lecture number 20:

http://ocw.mit.edu/OcwWeb/Physics/8...ures/index.htm

He also has a write-up of this issue here:

http://ocw.mit.edu/NR/rdonlyres/Phys...0/lecsup41.pdf

So do we agree with him on this bit of basic physics 101?

Now on to the more interesting case of "nonconservative" fields and wacky voltmeters. This starts at about minute 38 of video lecture 16, when he commands us to "hold onto your hats!". A few minutes later he gives a demonstration of this where he says professors visiting in the audience refused to believe what they were seeing.

Do you agree with Walter's interpretation of what is happening? I think I basically agree, but frankly I am not sure I like his insistence that this is a case of what should be called "nonconservative" fields (except in some unusual sense of nonconservative). Here is why: mathematically, the field depends on time and so is now properly a field on spacetime. Now a closed loop in spacetime should return not only to the same place but also to the same time. So he has not really demonstrated that the field is nonconservative in the usual mathematical sense (cohomology). He has just used a path that didn't really return to the same point in the manifold (spacetime). 
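For the lecture-20 point, the loop equation a textbook writes as "sum of voltage drops = 0" is really Faraday's law with a nonzero right-hand side. A minimal numeric sketch (component values invented, not taken from the lecture) integrating a battery-driven RL loop shows the L dI/dt term at work:

```python
# Faraday's law for a series RL loop driven by a battery V0:
#   loop integral of E.dl = -dPhi/dt   =>   V0 - I*R = L * dI/dt
# Illustrative values, not taken from the lecture:
V0, R, L = 10.0, 5.0, 2.0        # volts, ohms, henries (tau = L/R = 0.4 s)
dt, I = 1e-5, 0.0
for _ in range(400_000):          # forward-Euler integration over 4 s
    dI_dt = (V0 - I * R) / L      # the inductor's flux term
    I += dI_dt * dt
# After ten time constants the current has settled at V0/R and the
# L dI/dt contribution has died away:
print(I)                          # ~ 2.0 A
```

Only in the steady state does the naive KVL sum return to zero; during the transient, the flux term is exactly what Lewin says the books sweep under the rug.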


#2
Jan1107, 05:00 AM

P: n/a

Cyberkatru wrote:
> MIT physicist/astronomer Walter Lewin is quite fun to watch in lecture.
> In lectures 20 and 16 of his online basic E&M lectures he makes some
> interesting points. ...
>
> http://ocw.mit.edu/NR/rdonlyres/Phys...0/lecsup41.pdf
>
> So do we agree with him on this bit of basic physics 101?

There is most certainly a widespread misconception here, but unfortunately this work does not find the real problem.

The fundamental misconception is the identification of the electrostatic potential of Maxwell's theory with the circuit-theory concept of voltage, which for reasons explained below should always be called "node voltage." We all understand that the Maxwell quantity is defined as the limit of U(r)/Q as Q approaches zero, where U(r) is the potential energy of an infinitesimal test charge Q. The circuit-theory voltage is actually (-) the electrochemical potential for mobile electrons within a chunk of matter that contains enough electrons to be considered an electron reservoir, and that is in a local quasi-equilibrium. This chunk of (almost always metallic) matter is represented in circuit space as a "node" in the circuit graph.

The minus sign in the above definition of voltage is a result of the negative charge on the electron. (Benjamin Franklin had an even chance, but unfortunately he made the wrong choice.) The electrochemical potential for electrons is known in semiconductor device theory as the (quasi-)Fermi level, not to be confused with other meanings of "Fermi energy" in solid-state physics. This is the quantity that voltmeters measure, as pointed out by William Shockley (Electrons and Holes in Semiconductors, 1950, p. 305).

To understand this, let's assume that the voltmeter is a classic Wheatstone bridge, consisting of an ammeter connected between the unknown voltage node and an adjustable voltage standard (which can be just a battery connected to a potentiometer). The measurement procedure is to adjust the voltage standard until the current through the ammeter reads zero. The condition for zero (electron) current is that the electrochemical potentials for electrons on the two sides of the meter must be equal.

When we understand node voltage to be this statistical quantity, all the Kirchhoff's Law paradoxes disappear. The potential energy of the electrons in the node can be raised or lowered by the electrostatic potential phi, or by dA/dt: the voltage is still a scalar quantity. Its gradient is NOT in general equal to the electric field E.

Why has the confusion between these quantities persisted for so long? The first problem is that they are measured in the same units (volts). The second reason is that *within a given metallic material* the voltage and the electrostatic potential are coupled by an additive constant, the work function. Connecting two metals with different work functions produces a structure with a constant voltage, but with a discontinuity in the electrostatic potential, which is known as a "contact potential."

The additive linkage between potential and voltage is obviously broken in vacuum, and also in most semiconductor device structures. A very little known fact is that a difference in the spatial behavior of the potential and of the voltage is a necessary condition for the operation of active (i.e. gain-producing) electron devices.

- Bill Frensley 


#3
Jan1107, 05:00 AM

P: n/a

Cyberkatru wrote:
> First from lecture 20: at some point a little past the 8 minute mark
> in Walter Lewin's basic E&M video lecture #20 he makes the following
> statement: "Almost every college physics book does this wrong!" [...]
>
> http://ocw.mit.edu/NR/rdonlyres/Phys...0/lecsup41.pdf
>
> So do we agree with him on this bit of basic physics 101?

I don't get the last paragraph of the first page:

"There is no electric field in this loop if the resistance of the wire making up the loop is zero."

Then he says:

"(this may bother you - if so, see the next section)"

Well, I read on and it didn't stop bothering me. When he is explaining this on page 4, in paragraph 3, he sets the current at every point in the loop to be the same, but in the very next paragraph he contradicts himself with charge buildups at certain points on the loop. 


#4
Jan1207, 05:00 AM

P: n/a

But Bill, he actually uses the voltmeter to make the point. He moves the voltmeter without changing where it is connected and gets a different reading. This is puzzling if there is a real scalar function (node voltage) along the wire whose changes around a loop add to zero. This demonstration is in lecture 16. He actually used the physical voltmeter to show that the changes around the loop don't add to zero.

Another thing that bothers me about your explanation is that you said that the node voltage and the electrostatic potential are related by a constant. How could an additive constant make a difference here? KVL wouldn't be affected by an additive constant, would it?

You might have the right explanation, but it is still not clear to me. It is almost as if you are saying that the line integral \int E \cdot dl is not the relevant quantity for circuits. But this seems inconsistent with Jackson's book. Is your explanation written in any advanced E&M texts?

On Jan 10, 4:07 pm, "William R. Frensley" <frens...@utdallas.edu> wrote:
> There is most certainly a widespread misconception here, but
> unfortunately this work does not find the real problem.
>
> The fundamental misconception is the identification of the electrostatic
> potential of Maxwell's theory with the circuit-theory concept of voltage,
> which for reasons explained below should always be called "node voltage."
> [...]
>
> - Bill Frensley 
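The moving-voltmeter puzzle can at least be put into numbers. A sketch, with illustrative values I chose (not read off the video): one induced EMF drives a single loop current through two unequal resistors, and an ideal meter connected across the same two points reads differently depending on which side of the ring its leads run, because the meter loop either does or does not enclose the changing flux.

```python
# Lewin-style ring: an induced EMF (emf = -dPhi/dt) drives one loop
# containing two resistors.  Values are illustrative, not from the demo.
emf, R1, R2 = 1.0, 100.0, 900.0
I = emf / (R1 + R2)              # single loop current: 1 mA here

# An ideal voltmeter is itself a loop.  Leads routed on the R1 side
# enclose no changing flux and read the I*R1 drop (with one sign);
# leads routed on the R2 side read the I*R2 drop (opposite sign):
V_left = -I * R1                 # -0.1 V
V_right = +I * R2                # +0.9 V
print(V_left, V_right)
# Same two contact points, readings differing by exactly the emf:
assert abs((V_right - V_left) - emf) < 1e-9
```

Nothing about the meter is broken: the two lead routings close two different loops through the changing flux, so two different readings are exactly what Faraday's law demands.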


#5
Jan1207, 05:00 AM

P: n/a

Chris H. Fleming wrote:
> I don't get the last paragraph of the first page:
>
> "There is no electric field in this loop if the resistance of the wire
> making up the loop is zero."
>
> Then he says:
>
> "(this may bother you - if so, see the next section)"
>
> Well, I read on and it didn't stop bothering me.

Remember that ideal capacitors have zero resistance in their plates and terminals, just as ideal inductors have zero resistance in their windings. If you apply a voltage across the terminals of an ideal inductor, the current will start changing continuously, and it would go to infinity if you waited an infinite time. A perfectly conductive loop is not a short circuit; instead it is a small-value inductor.

> When he is explaining this on page 4, in paragraph 3, he sets the
> current at every point in the loop to be the same, but in the very next
> paragraph he contradicts himself with charge buildups at certain points
> on the loop.

He's just simplifying things by ignoring the charge-buildup process. The circuit will "adjust itself" over a very brief time, creating surface charges and patterns of potential. After this "transient" has occurred, things will work as he describes. In other words, he might be describing what happens over a period of many milliseconds, while ignoring the changes which occur in a range of nanoseconds. A more complete explanation would have to include ALL changes at ALL time scales, including the currents which led to the charge buildups.

Here's another cool article which goes into similar problems (similar in that most books ignore it, but it's about flashlight circuits, not about induced currents in conductor loops):

Chabay/Sherwood, "A unified treatment of electrostatics and circuits"
http://www4.ncsu.edu/%7Erwchabay/mi/circuit.pdf

(((((((((((((((((((((((   (  (    (o)    )  )   )))))))))))))))))))))))
William J. Beaty              Research Engineer
beaty a chem washington edu   UW Chem Dept, Bagley Hall RM74
billb a eskimo com            Box 351700, Seattle, WA 98195-1700
http://staff.washington.edu/wbeaty/                 ph 425-222-5066 
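Beaty's point that a perfectly conducting loop is a small inductor rather than a short can be made concrete: with V = L dI/dt and constant V, the current ramps linearly and never settles. A sketch with made-up values:

```python
# A zero-resistance loop obeys V = L * dI/dt, so a constant applied
# voltage ramps the current linearly, without limit.  Invented values:
V, L = 1.0, 1e-3                 # 1 V across a 1 mH "perfectly conducting" loop

def current(t):
    """I(t) = V*t/L for I(0) = 0: no steady state ever arrives."""
    return V * t / L

print(current(1e-3), current(1.0))   # 1 A after 1 ms, 1000 A after 1 s
```

This is why the ideal loop is "not a short circuit": a true short would carry infinite current instantly, while the inductor only lets the current grow at a finite rate.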


#6
Jan1407, 05:00 AM

P: n/a

> I don't get the last paragraph of the first page.
>
> "There is no electric field in this loop if the resistance of the wire
> making up the loop is zero."

He is idealizing, isn't he? If the wire is really a perfect conductor, why would any force be needed to keep the charges moving along? It is like having no friction. Not the case for a real wire, but so what? It's an idealization where all the resistance is put into the resistor. 


#7
Jan1407, 05:00 AM

P: n/a

Cyberkatru wrote:
> Mathematically, the field depends on time and so is now properly a
> field on spacetime. Now a closed loop in spacetime should return not
> only to the same place but also the same time. So he has not really
> demonstrated that the field is nonconservative in the usual
> mathematical sense (cohomology). He has just used a path that didn't
> really return to the same point in the manifold (spacetime).

Electrical engineers would note that "inductive" reactance isn't the same as "capacitive" reactance. When an antenna is resonant, that is, when its length equals the formula electrical length, only ohmic and radiation resistance are present. Only when the antenna is NOT resonant is reactance present. If an antenna is too long for a given frequency, it is said to have "inductive" reactance; if it's too short, the reactance is "capacitive". Power lost in reactance isn't lost in the same way it's lost in ohmic resistance.

Re the "coupling" matter: technically, a radiating antenna is "coupled" to its surrounding medium when it's working correctly (i.e. is resonant). 


#8
Jan1407, 05:00 AM

P: n/a

Cyberkatru wrote:
> But Bill, he actually uses the voltmeter to make the point. He moves
> the voltmeter without changing where it is connected and gets a
> different reading. This is puzzling if there is a real scalar function
> (node voltage) along the wire whose changes around a loop add to zero.
> This demonstration is in lecture 16. He actually used the physical
> voltmeter to show that the changes around the loop don't add to zero.

(I looked at the video, despite the best efforts of RealPlayer to keep me from doing so. I see what you are talking about.)

As with any magic trick, there is a misdirection right at the start. In this case, the misdirection is in leading you to believe that different points along a wire represent the same circuit node. In the absence of time-varying magnetic fields they would be, but the point of Faraday induction is that this is no longer true when the B field changes. To treat this case with circuit theory, we have to make it a distributed circuit, and two points along the wire are no more the same circuit node than two points along the center conductor of a coaxial cable (transmission line) are. Or, for that matter, the two ends of the wire coming out of an inductor coil.

We would analyze the situation the same way we analyze a transmission line: find the equivalent circuit model for a differential length along the wire. In this case there will be a resistive element R0 dx, where R0 is the resistance per unit length. In series with this will be the Faraday generator, a voltage source of magnitude dA/dt (dot) dx, A being the vector potential. To include the possibility that the wires are moving in the magnetic field, we should probably take the time derivative of the whole dot product.

If we're simply looking at a closed wire loop with induced eddy current, the Faraday generator raises the voltage a dV and the resistor immediately drops it dV, leading to V(x) looking like a differential sawtooth pattern until we take the limit dx to 0, where it smooths out into a constant. For the case Lewin does, the series resistors limit the current in the loop to a value much lower than the full eddy current allowed by the wire. Thus, the Faraday generators can add up to a macroscopic voltage along the length of the wires connecting the two resistors. Despite what the circuit schematic might imply, the two resistors are in no way connected to the same two circuit nodes. If you model the circuit correctly by taking into account the Faraday generation in the wires, there is no violation of Kirchhoff's Voltage Law. Note that you may also need to include induction in the voltmeter leads too.

This explanation in no way contradicts what Lewin writes in his supplement about charge redistribution in the circuit. The nice thing about circuit theory is that it automatically takes this into account, incorporating such things as the buildup of charges on the contacts to a resistor as required to create the voltage drop across that resistor.

> Another thing that bothers me about your explanation is that you said
> that the node voltage and the electrostatic potential are related by a
> constant. How could an additive constant make a difference here? KVL
> wouldn't be affected by an additive constant, would it?

Note that I said they were connected by a constant within a metal of a given composition. In other conductive media, there are more complicated correction factors. (Again, multiply everything that follows by that pesky minus sign due to the electron charge.) In semiconductors, voltage (Fermi level) is equal to the potential energy for an electron at rest (the conduction band edge energy) + kT ln(n/Nc), where n is the electron concentration and Nc is a constant whose origin we don't need to worry about. Now, k ln(n/Nc) is just the entropy of the electron gas, so we see that the voltage can be interpreted as the Helmholtz free energy per electron. In dilute electrolytes you have a similar situation, resulting in the Nernst equation. In metals, the electrons are so highly degenerate that we can ignore the entropy correction. Since the voltage is a measure of free energy, the normal relations like P = VI really do measure the available energy (or power). Thermodynamics has already been taken into account.

> You might have the right explanation but it is still not clear to me. It
> is almost as if you are saying that the line integral \int E \cdot dl is
> not the relevant quantity for circuits. But this seems inconsistent
> with Jackson's book.

I think I answered that above. E dot dl is very relevant, but it's not the only thing that needs to be considered.

> Is your explanation written in any advanced E&M texts?

In my experience E&M authors have very little interest in the grubby details of what happens within technological materials. It will certainly be in a textbook in the future, since I am working on one on the topic of electron devices. This is obviously sufficiently confusing that it is worth a section or perhaps an appendix.

(By the way, my mail server seems to be having problems, because I have not seen my post show up yet. If anyone else responded to my post please email me directly.)

- Bill Frensley 
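Frensley's kT ln(n/Nc) correction is easy to evaluate. A sketch using the textbook value Nc of roughly 2.8e19 cm^-3 for silicon at 300 K (the doping level below is an arbitrary example): in a non-degenerate semiconductor the entropy term shifts the Fermi level, and hence the voltage, by a good fraction of a volt, which is exactly why the potential/voltage identification fails there.

```python
import math

kT = 0.02585     # eV at 300 K
Nc = 2.8e19      # cm^-3, effective density of states, Si conduction band

def fermi_offset(n):
    """kT*ln(n/Nc): Fermi level relative to the conduction band edge, in eV.
    Valid in the non-degenerate (Boltzmann) limit only."""
    return kT * math.log(n / Nc)

# Moderately doped silicon: the statistical term is ~0.2 eV, so the
# voltage and the electrostatic potential part company by that much.
print(fermi_offset(1e16))    # about -0.21 eV below the band edge
```

In a metal the corresponding shift is pinned by degeneracy, which is why the simple additive-constant (work function) picture holds there but not in semiconductor devices.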


#9
Jan1407, 05:00 AM

P: n/a

Cyberkatru wrote:
> Now on to the more interesting case of "nonconservative" fields and
> wacky voltmeters: this starts at about minute 38 in video lecture 16,
> when he commands us to "hold onto your hats!". A few minutes later he
> gives a demonstration of this where he says professors visiting in the
> audience refused to believe what they were seeing.
>
> Do you agree with Walter's interpretation of what is happening?

All these phenomena are expected when you're working in an environment having an intense alternating B-field. If you measure voltages with your DVM and short the leads together, it won't read zero unless you also twist the leads to form a shielded cable. Any closed loop of conductor will experience an electric current unless the area enclosed by the loop is zero.

> I think I basically agree but frankly I am not sure I like his
> insistence that this is a case of what should be called
> "nonconservative" fields (except in some unusual sense of
> nonconservative). Here is why:

Rather than "nonconservative," I've heard this described as "circuits in multiply-connected regions," as is done by the authors here:

"Faraday's law in a multiply connected region"
http://adsabs.harvard.edu/abs/1982AmJPh..50.1089R

(((((((((((((((((((((((   (  (    (o)    )  )   )))))))))))))))))))))))
William J. Beaty              Research Engineer
beaty a chem washington edu   UW Chem Dept, Bagley Hall RM74
billb a eskimo com            Box 351700, Seattle, WA 98195-1700
http://staff.washington.edu/wbeaty/                 ph 425-222-5066 


#10
Jan1607, 05:00 AM

P: n/a

Cyberkatru wrote:
> Mathematically, the field depends on time and so is now properly a
> field on spacetime. Now a closed loop in spacetime should return not
> only to the same place but also the same time. So he has not really
> demonstrated that the field is nonconservative in the usual
> mathematical sense (cohomology). He has just used a path that didn't
> really return to the same point in the manifold (spacetime).

Is there a fundamental problem in the way he has used Faraday's Law? He traverses the circuit to calculate the total E_dot_dl, even though E has been modified by the circuit rather than being simply the E due just to the changing B.

Jason Walters. 


#11
Jan1807, 05:00 AM

P: n/a

> Is there a fundamental problem in the way he has used Faraday's Law?
> He traverses the circuit to calculate the total E_dot_dl, even though E
> has been modified by the circuit rather than being simply the E due just
> to the changing B.
>
> Jason Walters.

I don't think so. He is just using the usual idealized assumption that the wires themselves have zero resistance and that the resistance is concentrated in the resistor. 


#12
Jan1907, 05:00 AM

P: n/a

Cyberkatru wrote:
> > Is there a fundamental problem in the way he has used Faraday's Law?
> > He traverses the circuit to calculate the total E_dot_dl, even though E
> > has been modified by the circuit rather than being simply the E due
> > just to the changing B.
> >
> > Jason Walters.
>
> I don't think so. He is just using the usual idealized assumption that
> the wires themselves have zero resistance and that the resistance is
> concentrated in the resistor.

Take a region with a resistance R(x,y,z) where there is changing magnetic flux. How do you know that integral_loop[E.dl] = v is independent of R(x,y,z)? You seem to be saying that it doesn't matter whether the path is empty space or a resistance in series with a conductor of zero resistance. 
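One answer to the question above is Faraday's law itself: for a fixed loop geometry, the integral of E.dl around the loop equals -dPhi/dt no matter how R(x,y,z) is distributed along the path; the distribution only decides where along the loop the field appears. A lumped sketch (segment resistances invented):

```python
# For a fixed loop and fixed dPhi/dt, the total of E.dl around the loop
# is pinned at the emf; distributing the resistance differently only
# moves the local I*R drops around.  Segment values are invented.
emf = 1.0                                    # volts, = -dPhi/dt

for resistances in ([1000.0], [500.0, 500.0], [999.0, 0.5, 0.5]):
    I = emf / sum(resistances)               # loop current
    drops = [I * r for r in resistances]     # local E.dl integral per segment
    assert abs(sum(drops) - emf) < 1e-12     # the loop total never changes
    print(resistances, "->", drops)
```

What does depend on R(x,y,z) is the loop current and the local field in each segment, which is exactly the distinction the thread is circling around.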


#13
Jan1907, 05:00 AM

P: n/a

Cyberkatru:
> [About the quantity integral_loop[E.dl] = v] you seem to be saying
> that it doesn't matter if the path is empty space or a resistance in
> series with a conductor of 0 resistance.

Indeed there is something nontrivial happening there. This is why so many papers have addressed the question "What does a voltmeter measure?". The answer is: the integral of the electric field E along the (open) path traced out by the leads, from one contact point to the other. (This path includes the part across the voltmeter's body proper.)

Remarkably, this integral is *the same* in the two very different situations A (after connecting the voltmeter and leads) and B (before doing that, so the path is just a geometric curve, not yet materialized by the leads-and-voltmeter apparatus). The difference between A and B lies in which parts of the path contribute to the integral: the whole path in B, only the small part of it inside the voltmeter in A. Yet the integral is the same in both cases. So you are right: as regards this integral, "it doesn't matter...", etc., surprising as this may appear.

This is why voltmeters are useful: via this integral, which we may conveniently call "the emf along p (the path)", they give information about what the electric field *was* in situation B (which is of course what one is interested in), in spite of the considerable disturbance to the electric field caused by placing the voltmeter and its connectors.

This equality between the two integrals, emf_A and emf_B, is easily proved by applying Faraday's law, under the assumption that the current derived along path p across the voltmeter is negligible, which happens because of two things: (1) the high internal resistance of the voltmeter, and (2) the high conductance of the connectors. Faraday's law requires a *closed* path p', which one defines as p itself plus some "return path" inside the workpiece to which the lead ends are applied.

The proof is a simple exercise if one replaces "high" in (1)-(2) by "infinite", and if one assumes a negligible radius for the connectors (so that p is well-defined). Of course, "negligible radius" and "high conductance" are antinomic, so the mathematically minded will find a rigorous proof somewhat challenging. It involves an asymptotic analysis with respect to two competing small parameters (the radius and resistivity of the leads).

A last comment, since the famous paper by Romer,

R. H. Romer, "What do 'voltmeters' measure? Faraday's law in a multiply connected region", Am. J. Phys. 50, 12 (1982), pp. 1089-93,

has been cited in this thread: it should be stressed that "multiple connectedness" is an artefact of the 2D modelling adopted by Romer for his discussion; topological issues of this kind play no role in 3D situations. 


#14
Jan2207, 05:00 AM

P: n/a

Cyberkatru wrote:
[..]
> He is referring to the use of Kirchhoff's voltage rule for loops in
> circuits with self-inductors.
> I think he is right about this.
> In a way he is denying the assumptions of the so-called "lumped matter
> discipline" in electrical engineering (with regard to inductors,
> anyway).

You might want to check out this circuit diagram on my website:

http://www.xs4all.nl/~westy31/Electr...well_animation

In each closed loop of the circuit, I put a 'mesh inductance'. This modifies Kirchhoff's law to include the inductance of the loop. One of the motivations behind this is to understand the transition from 'lumped' models, such as inductors and resistors, to 'continuous' models, such as wire loops with finite surfaces. As far as I can see, you could add macroscopic resistors, capacitors and even inductors (although my confidence is slightly lower for that one) to this circuit, and continue calculating all quantities in the usual way.

Gerard 
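Gerard's 'mesh inductance' idea can be sketched as ordinary mesh analysis with an extra j*w*L term on each mesh's own impedance. The two-mesh AC example below uses invented component values (not taken from his diagram) and solves the 2x2 complex system by Cramer's rule:

```python
import cmath

# Two meshes coupled by a shared resistor Rc; each mesh also carries a
# "mesh inductance" Lm, which adds j*w*Lm to its own-impedance.  All
# component values are invented for illustration.
w = 2 * cmath.pi * 50                 # 50 Hz angular frequency
R1, R2, Rc, Lm = 10.0, 20.0, 5.0, 1e-2
V1 = 10.0                             # AC source amplitude, mesh 1 only

Z11 = R1 + Rc + 1j * w * Lm           # mesh 1 self-impedance
Z22 = R2 + Rc + 1j * w * Lm           # mesh 2 self-impedance
Z12 = Z21 = -Rc                       # coupling via the shared resistor

det = Z11 * Z22 - Z12 * Z21           # Cramer's rule on [[Z11,Z12],[Z21,Z22]]
I1 = V1 * Z22 / det                   # mesh currents (complex phasors)
I2 = -Z21 * V1 / det
print(abs(I1), abs(I2))               # current magnitudes in amperes
```

Setting Lm = 0 recovers plain resistive KVL; the j*w*Lm terms are exactly Kirchhoff's rule "modified to include the inductance of the loop".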

