I'm completely boggled by this problem: "A 12.0V battery is used to charge a 4.00μF capacitor. A switch is then thrown, disconnecting that capacitor from the battery and connecting it to a circuit with two capacitors (3.00μF and 6.00μF) connected in series. What are the final charges on the capacitors?" I'm not looking for the answer to the problem, just some nudges in the correct direction.

The initial charge on the 4.00μF capacitor is easy enough:

q = C*V = (4.00*10^-6 F)(12.0 V) = 48μC

Next I assume I'll need the voltage in the new circuit (the sans-battery one). The voltage can be obtained by assuming the charge from the old system carries over and is shared among the capacitors in the new circuit:

V(eq) = q/(C1 + C2 + C3) = 48μC/(4.00μF + 3.00μF + 6.00μF) = 3.69V

Now, since the capacitors are all running in series:

1/C(eq) = 1/C1 + 1/C2 + 1/C3, which gives C(eq) = 1.33μF

Which makes sense, since the equivalent capacitance of a series combination is less than the smallest capacitor in that series.

Now this is where I run into trouble. The charges I get using a variety of methods don't agree with the answers in the back of the book, so my methodology must be flawed in this final step and I'm not sure how. I figured that since I now know the equivalent capacitance of the series (1.33μF) and the voltage across each capacitor (3.69V), multiplying the two should give me the charge on each capacitor:

q = CV = (1.33μF)(3.69V) = 4.92μC

If anyone can possibly nudge me in the correct direction here without violating any of the forum rules, I'd be much obliged.
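In case it helps anyone checking my numbers, here's the arithmetic above as a quick Python sanity-check. This only reproduces the calculations as I did them; it says nothing about whether the method itself is right (which is exactly what I'm unsure about):

```python
# Sanity-check of the arithmetic in my attempt above.
# Python is just my calculator here; the physics reasoning may still be off.

C1, C2, C3 = 4.00e-6, 3.00e-6, 6.00e-6  # capacitances in farads
V_batt = 12.0                            # battery voltage in volts

# Initial charge on the 4.00 uF capacitor: q = C*V
q0 = C1 * V_batt
print(f"q0   = {q0 * 1e6:.1f} uC")       # 48.0 uC

# My assumed voltage: old charge shared across the sum of all three capacitances
V_eq = q0 / (C1 + C2 + C3)
print(f"V_eq = {V_eq:.2f} V")            # 3.69 V

# Series equivalent of all three capacitors
C_eq = 1 / (1/C1 + 1/C2 + 1/C3)
print(f"C_eq = {C_eq * 1e6:.2f} uF")     # 1.33 uF

# The final step that disagrees with the book: q = C_eq * V_eq
q = C_eq * V_eq
print(f"q    = {q * 1e6:.2f} uC")        # 4.92 uC
```

So the arithmetic checks out; the flaw must be in one of the assumptions.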