Hi guys, first time posting here, but I have a question I've been thinking about for quite a while, and I hope someone can help out with it.

Assume a line of charge (total charge +Q, length L) lying on the x-axis. You want to calculate the electric field strength E at a point some distance away from the line of charge, but still on the same axis. To do this you integrate the field contributions of infinitely many infinitesimally small "point charges" along the line. Each contribution is denoted dE, since the field from a single point charge dQ is itself infinitesimally small. If X is the distance from the point of interest to a given point charge, the contribution is:

dE = (k dQ)/X^2

This is the small field emitted by a small piece of charge dQ. The problem is that we cannot integrate this directly: the variable that changes is X, yet the differential we have is in terms of the charge. So we need to convert dQ into dX. The books I've seen do this by equating the ratio to the linear charge density along the line, Q/L:

dQ/dX = Q/L

This is where I have a problem with the logic of this equality. The charge is distributed over the length L, but X is the distance from the point of interest to the point charge, which is definitely not the same as L. I understand that taking the differential of something makes it infinitesimally small, but dQ/dX is a ratio of two small numbers, which can still be a large fraction. Is anybody able to explain the logic behind this?
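In case it helps to see what I mean concretely, here is a quick numerical version of the setup (the variable names and the specific values for Q, L, and the distance d from the point of interest to the near end of the rod are just ones I picked). It sums the contributions dE = k dQ / X^2, using dQ = (Q/L) dX for each slice, and compares against the closed-form answer E = kQ/(d(d+L)) that the books get by integrating:

```python
# A sketch of the summation the integral represents; Q, L, d are made-up values.
k = 8.9875517923e9   # Coulomb constant, N*m^2/C^2
Q = 1e-6             # total charge on the rod, C
L = 0.5              # length of the rod, m
d = 0.3              # distance from the point of interest to the near end, m

N = 100_000          # number of small slices the rod is cut into
dX = L / N           # length of each slice
E = 0.0
for i in range(N):
    X = d + (i + 0.5) * dX   # distance from the point to this slice's midpoint
    dQ = (Q / L) * dX        # charge on this slice: linear density times its length
    E += k * dQ / X**2       # the slice's contribution dE = k*dQ/X^2

# Closed-form result of integrating k*(Q/L)/X^2 dX from X=d to X=d+L
E_exact = k * Q / (d * (d + L))
print(E, E_exact)
```

Running this, the sum and the closed-form value agree closely, so the substitution dQ = (Q/L) dX clearly does give the right answer numerically; my question is about why it is justified.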