I'm a games programmer writing a simple car physics simulation, and I've had a disagreement with the designer about a car's behaviour - specifically, its top speed - in different conditions.

Imagine a car accelerating from a standing start to top speed along a perfectly straight, flat road. We know the car's mass, so we can calculate its acceleration at any given moment using a = F/m. To calculate the net force, I assume the car is resisted by aerodynamic drag (proportional to the square of its velocity) and rolling resistance (proportional to its velocity), and is driven forward by a tractive force.

I have a function that returns the tractive force for a given velocity. It's arrived at by slightly convoluted means and is meant to represent high torque in the lower gears, decreasing as the car speeds up and the driver changes up through the gears; the curve looks roughly like a graph of y = 1/x.

Now, the maximum tractive force that can actually be applied at any given moment (without wheelspin) is proportional to the coefficient of friction between the tyres and the road, right? For good quality tyres on dry asphalt we can say the coefficient is 1.0. Plugging in sensible values, we get a decent-looking acceleration.

Now we re-run the simulation on a wet road, where the friction coefficient drops to 0.7. I think this scales the usable tractive force at any moment to 70% of what it could be in dry conditions (more than that would cause the car to wheelspin), which means not only slower acceleration but also a lower top speed. The designer agrees that acceleration would be slower, but thinks that, given a long enough road, the car could eventually reach the same top speed in the wet as it can in the dry.

Who is right? And if he is right, how can that be explained and simulated?
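In case it helps, here is a minimal sketch of the integration loop I'm describing. All the numbers (mass, drag and rolling-resistance constants, the 9000/v traction curve) are made-up placeholders rather than my real tuning values, and `engine_traction` is just a stand-in for my convoluted gear-curve function:

```python
MASS = 1200.0   # kg (illustrative value)
G = 9.81        # m/s^2
C_DRAG = 0.43   # drag force = C_DRAG * v^2  (N)
C_RR = 12.8     # rolling resistance = C_RR * v  (N)
DT = 0.01       # timestep (s)

def engine_traction(v):
    """Stand-in for my gear-curve function: high force at low speed,
    falling off roughly like 1/v as the driver shifts up."""
    return 9000.0 / max(v, 1.0)

def step(v, mu):
    # Cap the applied traction at the friction limit mu * m * g,
    # per my assumption about what would cause wheelspin.
    traction = min(engine_traction(v), mu * MASS * G)
    resistance = C_DRAG * v * v + C_RR * v
    a = (traction - resistance) / MASS   # a = F/m
    return v + a * DT

def top_speed(mu, seconds=600.0):
    """Integrate from a standing start and return the speed reached."""
    v = 0.0
    for _ in range(int(seconds / DT)):
        v = step(v, mu)
    return v

print("dry:", top_speed(1.0), "wet:", top_speed(0.7))
```

Running both friction coefficients through the same loop should make it easy for answers to point at exactly where my reasoning (or the designer's) goes wrong.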