I understand how to derive the divergence in cylindrical (polar) coordinates, so I have the formula:
$$\nabla \cdot \mathbf{u} = \frac{1}{r}\frac{\partial (r u_r)}{\partial r} + \frac{1}{r}\frac{\partial u_\theta}{\partial \theta} + \frac{\partial u_z}{\partial z}.$$
What confuses me is this: say $\mathbf{u} = (0, 0, w(r))$, so the third component of $\mathbf{u}$ is a function of $r$. If I plug this into the formula above, I get zero, fine.

But if I attempt to *re-derive* the divergence using this specific vector rather than a general one, I don't get zero. Writing
$$\nabla = \mathbf{e}_r \frac{\partial}{\partial r} + \mathbf{e}_\theta \frac{1}{r}\frac{\partial}{\partial \theta} + \mathbf{e}_z \frac{\partial}{\partial z}, \qquad \mathbf{u} = 0\,\mathbf{e}_r + 0\,\mathbf{e}_\theta + w(r)\,\mathbf{e}_z,$$
I take the dot product. Normally we ignore the basis vectors when dotting, but in cylindrical coordinates we obviously have to expand everything out term by term, like a "FOIL" multiplication. The result is that the derivative of the last component with respect to $r$, namely $\partial w/\partial r$, is not zero.

So WHY, in the derivation of the cylindrical divergence in the first place, do we "assume" that $u_r$, $u_\theta$, and $u_z$ are functions only of their respective coordinates $r$, $\theta$, and $z$?
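For what it's worth, here is a quick symbolic sanity check of the claim that the formula gives zero for $\mathbf{u} = w(r)\,\mathbf{e}_z$. This is just a sketch assuming SymPy's `sympy.vector` module, whose `CoordSys3D` with `transformation='cylindrical'` accounts for the curvilinear scale factors when computing `divergence`:

```python
# Sanity check: divergence of u = w(r) e_z in cylindrical coordinates.
# Assumes sympy.vector's built-in cylindrical system handles the
# Lame coefficients (h_r = 1, h_theta = r, h_z = 1) internally.
from sympy import Function, simplify
from sympy.vector import CoordSys3D, divergence

# Cylindrical system: base scalars are (r, theta, z),
# base vectors C.i, C.j, C.k correspond to e_r, e_theta, e_z.
C = CoordSys3D('C', transformation='cylindrical')
r, theta, z = C.base_scalars()

w = Function('w')      # arbitrary scalar function w(r)
u = w(r) * C.k         # u = (0, 0, w(r))

print(simplify(divergence(u)))
```

This prints `0`, matching what the formula gives by hand, since the only surviving term is $\partial u_z/\partial z = \partial w(r)/\partial z = 0$.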