Two students recorded rain data during a storm using different meteorological instruments. F(t) is the total rainfall, in inches, t hours after the start of the storm.

Student 1: F(0) = 0, F(1) = 0.3, F(2) = 0.5
Student 2: F'(0) = 0.6, F'(1) = 0.7, F'(2) = 0.3

1. Assuming all the data are correct, during the second hour of the storm it was raining at a rate of 0.7 inches per hour.

The answer is false, but why isn't it true? Looking at Student 2's data, F'(1) = 0.7, which I read as saying that one hour after the start of the storm (which is during the second hour), rain was falling at 0.7 inches/hour, right?

2. Assuming all the data are correct, during the first hour of the storm the rainfall slowed down and later sped up.

The answer is true, but I don't really understand why. It seems to me that the rate at which rain falls stays constant or even decreases during the first hour. Any thoughts?

3. Either Student 1's instrument or Student 2's instrument must be faulty, because they give different values for F'(0) and F'(1).

I would have guessed true, because the rate at which rain falls shouldn't depend on which instrument measures it, but the answer is false. Is it because rainfall isn't something that can be measured exactly, so some discrepancy between the two instruments is expected?

Thanks, guys.
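To keep the numbers straight while thinking about this, here's a quick sketch that uses nothing beyond the tabulated values: it computes each hour's average rate of rainfall, (F(t+1) - F(t)) / 1 hour, from Student 1's totals and lists it next to the endpoint instantaneous rates from Student 2. (The dictionary names are just my own labels for the data.)

```python
# Student 1's cumulative totals F(t), in inches.
F = {0: 0.0, 1: 0.3, 2: 0.5}
# Student 2's instantaneous rates F'(t), in inches per hour.
Fp = {0: 0.6, 1: 0.7, 2: 0.3}

# Average rate over each one-hour interval: (F(t+1) - F(t)) / 1 hour.
avg_rate = {t: (F[t + 1] - F[t]) / 1.0 for t in (0, 1)}

for t in (0, 1):
    print(f"hour {t} to {t + 1}: average rate {avg_rate[t]:.2f} in/hr; "
          f"F'({t}) = {Fp[t]}, F'({t + 1}) = {Fp[t + 1]} in/hr")
```

Note that the average rate over an hour (from Student 1's totals) and the instantaneous rate at a single instant (Student 2's readings) are different quantities, which seems relevant to all three statements.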