To explain experimental results, it's important to understand what's being measured and to have a good idea of how those quantities are measured. So, before trying to answer your basic question, I wanted to get a sense of what was involved. Having worked with systems like this, in which a computer monitors and sometimes controls an experiment, I think I have a good sense of what your setup was measuring.
The computer was monitoring both the voltage across the resistor and the voltage across the capacitor. Knowing the resistance of the resistor, the voltage across it could be converted to current via Ohm's law. The current was then integrated by the computer to find the charge on the capacitor - no doubt assuming the capacitor started out uncharged. The computer could then produce a Charge vs Voltage graph for the capacitor. This graph should be linear, with the slope being the capacitance of the capacitor.
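To make that procedure concrete, here is a minimal sketch of the calculation the logging program would have been doing, written in Python with NumPy. The file name rc_log.csv and its column layout are assumptions for illustration, not the actual output of your setup:

```python
import numpy as np

# Hypothetical log file with three columns: time (s), voltage across the
# resistor (V), voltage across the capacitor (V). The real file name and
# layout depend on your data-acquisition software.
t, v_R, v_C = np.loadtxt("rc_log.csv", delimiter=",", unpack=True)

R = 10e3  # resistance assumed by the program, in ohms (nominal 10 kΩ)

# Ohm's law: current through the resistor at each sample
i = v_R / R

# Integrate the current (trapezoidal rule) to get the charge on the
# capacitor, assuming it started out uncharged
dt = np.diff(t)
q = np.concatenate(([0.0], np.cumsum(0.5 * (i[:-1] + i[1:]) * dt)))

# Q = C * V_C, so the slope of a straight-line fit of Q against V_C
# is the capacitance
C_fit, _ = np.polyfit(v_C, q, 1)
print(f"Capacitance from the fit: {C_fit * 1e6:.0f} microfarads")
```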
Why was this set up so that the power supply voltage increased linearly in time? So that current would flow at a reasonable rate throughout the data-collection time, rather than being very large at the beginning and then tapering off toward zero, as it would if a fixed voltage were simply switched on.
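To see why, here is a quick sketch of the series-RC response, assuming the supply ramps as $V_s(t) = kt$ and the capacitor starts uncharged:

$$kt = I(t)\,R + \frac{Q(t)}{C} \;\Rightarrow\; k = R\,\frac{dI}{dt} + \frac{I}{C} \;\Rightarrow\; I(t) = Ck\left(1 - e^{-t/RC}\right),$$

so instead of spiking and then decaying exponentially, the current rises toward the nearly constant value $Ck$ and stays there for the rest of the run.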
Now for your basic question, which I think is: why was the capacitance determined by the experiment, 2262μF, larger than the nominal (labeled) capacitance, 2200μF?
Components like resistors and capacitors are manufactured to a given tolerance. Your measured capacitance is about 3% higher than the labeled value. Not bad! Moreover, unless you measured the resistance of the resistor, that value is probably also in error. If its resistance were higher than the 10kΩ assumed in the computer program, the actual current, and thus the actual charge, would have been less than what the computer calculated, so the capacitance might really be less than 2262μF, perhaps even less than 2200μF.
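As a rough illustration of how much that could matter, the correction scales with the ratio of the assumed resistance to the true resistance (the 10.3 kΩ below is an invented example value, not a measurement):

$$C_{\text{true}} = C_{\text{measured}}\,\frac{R_{\text{assumed}}}{R_{\text{true}}} \approx 2262\,\mu\mathrm{F} \times \frac{10\,\mathrm{k\Omega}}{10.3\,\mathrm{k\Omega}} \approx 2196\,\mu\mathrm{F},$$

so a resistor that runs just 3% high would, by itself, bring the result back below the labeled 2200μF.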