Two point charges of magnitude 0.1 μC and opposite signs are placed in a vacuum on the x-axis, the positive charge at x = -1 m and the negative charge at x = +1 m.
(a) Calculate the field intensity at the point (0, 1).
(b) Approximate the value of intensity at a point 10 cm away from one charge by ignoring the effect of the other charge and determine the percentage of error due to such an approximation.
(a) As in Fig. 1-8 (I drew it and attached it as TheFigure.jpg), the y components of the fields produced by each point charge cancel each other and their x components add, resulting in E = Qd/(4πϵr^3) a_x . With Q = 0.1 μC, d = 2 m, and r = sqrt(2) m, we find E = 636.4 a_x V/m.
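As a numerical sanity check of part (a) (my own sketch, not from the book), superposing the two Coulomb fields at the observation point (0, 1) reproduces the 636.4 a_x V/m result; the charge positions follow the direction statement in part (b), and k ≈ 9e9 is the usual approximation for 1/(4πϵ0):

```python
import math

# Sketch: superpose the Coulomb fields of the two charges at (0, 1).
K = 9e9          # 1/(4*pi*eps0), common approximate value (V*m/C)
Q = 0.1e-6       # charge magnitude (C)
charges = [(+Q, (-1.0, 0.0)), (-Q, (1.0, 0.0))]  # +Q at x=-1, -Q at x=+1
point = (0.0, 1.0)

Ex = Ey = 0.0
for q, (cx, cy) in charges:
    dx, dy = point[0] - cx, point[1] - cy   # vector from charge to field point
    r = math.hypot(dx, dy)
    # E = k*q/r^2 along the unit vector (dx, dy)/r, i.e. k*q*(dx, dy)/r^3
    Ex += K * q * dx / r**3
    Ey += K * q * dy / r**3

print(Ex, Ey)  # the y components cancel; the x components add to about 636.4
```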
(b) Each point charge dominates the field in its 10-cm vicinity, resulting in E ≈ +/- 180 a_r kV/m, where a_r is the radial unit vector centered on that charge. The field is directed outward at x = -1 and inward at x = +1. The largest error occurs when the test point is at (+/- 0.9, 0), with a relative percentage error of 100 × 1/(1 + 19^2) ≈ 0.275%.
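The 1/(1 + 19^2) ratio can be checked directly (my own sketch): at the test point (0.9, 0) the near charge is 0.1 m away and the far charge 1.9 m away, so the ignored field is (0.1/1.9)^2 = 1/361 of the near field, and kQ cancels out of the ratio:

```python
# Relative error of ignoring the far charge at the test point (0.9, 0).
E_near = 1.0 / 0.1**2   # proportional to kQ/r^2 for the charge 0.1 m away
E_far  = 1.0 / 1.9**2   # proportional to kQ/r^2 for the charge 1.9 m away

error_pct = 100 * E_far / (E_near + E_far)  # equals 100 / (1 + 19**2)
print(round(error_pct, 3))  # ~0.276, close to the book's quoted 0.275%
```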
Homework Equations

E = Qd/(4πϵr^3) a_x

The Attempt at a Solution
I get how to do part (a), but I'm stuck on part (b). I tried computing the electric field (acting on a positive test charge) at the point (0.9, 0) from the Q charge alone and then from the -Q charge alone, and taking the ratio of the two values as follows:
E_close = 1797548.488 V/m (radius of 0.1 m)
E_far = 2465.77296 V/m (radius of 0.9 m)
(2465.77296 V/m)/(1797548.488 V/m)(100%) = 0.1372%
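For reference, the two quoted field values do reproduce the 0.1372% ratio, and doubling it gives the 0.274% mentioned below (this is only an arithmetic check of the numbers as quoted, not a derivation of them):

```python
# Arithmetic check of the ratio quoted above.
E_close = 1797548.488   # V/m, quoted value for the 0.1 m radius
E_far = 2465.77296      # V/m, quoted value for the 0.9 m radius

ratio_pct = E_far / E_close * 100
print(round(ratio_pct, 4), round(2 * ratio_pct, 3))
```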
and, whether it's a coincidence or not, I noticed that doubling the previous value yields 0.274% (and the correct answer, according to the book, is 0.275%).
Could someone please shed some light on this?
Any input would be greatly appreciated!