Since this hasn't gotten any replies yet, I'll discuss the "data" and "level of confidence" a bit by describing an experiment I did recently with some high school teachers, which we used to discuss the same topics. In our case, the teachers were determining the density of an unknown oil by a buoyancy method: putting pennies (of known mass) into a plastic cup floating in a measuring cup of oil, and measuring the volume change in the oil as the cup began to sink. Note the crude equipment here, so the measurements were bound to carry a lot of uncertainty. Basically, there are a few different ways to take data in order to reach conclusions.
First, you could just take one data point. Some teachers put a fair number of pennies in the cup, then measured the volume change (once). Because the uncertainties in these numbers were so high, some teachers ended up with values that were really low (about 0.7 g/cm^3) and some ended up with numbers that were rather high (1.1 g/cm^3 or more). Taken as individual measurements, these couldn't be trusted much, since the markings on the cup were widely spaced (every 10 mL) and the likelihood of a large reading error was high.
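For concreteness, here is a rough Python sketch of that single-measurement estimate, with made-up numbers (the penny mass, count, and volume reading are placeholders, not the teachers' actual data):

```python
# Single-measurement estimate (hypothetical numbers, not the teachers' actual data).
# A floating cup displaces its own weight of oil, so the oil's density is roughly
# the added penny mass divided by the change in displaced volume.
penny_mass_g = 2.5        # assumed mass per penny
n_pennies = 25
delta_V_mL = 60.0         # volume change read off the measuring cup
delta_V_err = 5.0         # reading error: half the 10 mL spacing between markings

density = n_pennies * penny_mass_g / delta_V_mL      # g/mL = g/cm^3
density_err = density * (delta_V_err / delta_V_mL)   # volume uncertainty dominates
print(f"density = {density:.2f} +/- {density_err:.2f} g/cm^3")
```

Notice how a 5 mL reading error on a ~60 mL change already swings the answer by nearly 10%, which is why the single-shot results scattered so widely.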
However, taking more measurements can often counter the effect of poor equipment (as long as the equipment is calibrated correctly and being used correctly -- for instance, as long as you aren't looking at the English side of the cup instead of the metric side and mistakenly reading teaspoons as mL; it would be fine to use that side if you then did a unit conversion). For instance, when all the teachers' measurements were AVERAGED, the result was much more accurate, and the uncertainty ("precision") was also lower, because you could use the standard deviation as a measure of the error in the result.
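A minimal sketch of that averaging step, again with placeholder values rather than the teachers' real numbers:

```python
import numpy as np

# Pooling everyone's single-shot results (the values below are placeholders).
densities = np.array([0.72, 0.95, 1.10, 0.88, 0.93, 1.02, 0.85])  # g/cm^3

mean = densities.mean()
std = densities.std(ddof=1)              # sample standard deviation (spread)
sem = std / np.sqrt(len(densities))      # standard error of the mean

print(f"mean density      = {mean:.2f} g/cm^3")
print(f"std deviation     = {std:.2f} g/cm^3")
print(f"error in the mean = {sem:.2f} g/cm^3")
```

The spread (standard deviation) tells you how much any single measurement can be trusted; the standard error of the mean shrinks as you add more measurements, which is exactly why averaging helps.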
An even better way to approach the problem is to take a NUMBER of measurements... say, counting how many pennies it takes to displace 10 mL of oil, then how many it takes to displace 20 mL, 30 mL, etc. This data can be plotted and fit to a straight line, and when you fit data you typically get a value for the slope and the error in that slope, which in this case relates directly to the density of the oil. "Least squares fitting" used to be done by hand, but is now possible in common software such as Excel.
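Here's a sketch of that kind of straight-line fit using NumPy's polyfit; the penny counts and per-penny mass are invented values just to show the mechanics of getting a slope and its error:

```python
import numpy as np

# Pennies needed to displace successive volumes (hypothetical data).
volume_mL = np.array([10, 20, 30, 40, 50], dtype=float)
pennies   = np.array([4, 7, 11, 14, 18], dtype=float)

# Unweighted least-squares fit of a straight line; cov=True returns the
# covariance matrix, whose diagonal gives the parameter uncertainties.
coeffs, cov = np.polyfit(volume_mL, pennies, deg=1, cov=True)
slope, intercept = coeffs
slope_err = np.sqrt(cov[0, 0])

penny_mass_g = 2.5                       # assumed mass per penny
density = slope * penny_mass_g           # (pennies/mL) * (g/penny) = g/cm^3
density_err = slope_err * penny_mass_g
print(f"density = {density:.3f} +/- {density_err:.3f} g/cm^3")
```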
Even better fitting methods weight each data point according to its own error as the fit for the whole data set is calculated... which can be done in more capable scientific software like Origin.
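A sketch of a weighted fit along those lines, using SciPy's curve_fit with per-point uncertainties (the error bars here are invented purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Same straight-line fit, but each point now carries its own (invented) error
# bar and is weighted accordingly via sigma / absolute_sigma.
volume_mL = np.array([10, 20, 30, 40, 50], dtype=float)
pennies   = np.array([4, 7, 11, 14, 18], dtype=float)
penny_err = np.array([1.0, 1.0, 1.5, 2.0, 2.0])   # larger errors on later points

def line(x, m, b):
    return m * x + b

popt, pcov = curve_fit(line, volume_mL, pennies,
                       sigma=penny_err, absolute_sigma=True)
slope, slope_err = popt[0], np.sqrt(pcov[0, 0])
print(f"weighted slope = {slope:.3f} +/- {slope_err:.3f} pennies/mL")
```

Points with small error bars pull the fit harder than points with large ones, so a noisy reading doesn't drag the slope around as much.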
In some of my research, I've programmed equipment to keep measuring a quantity (such as counting photons per second reflected off a surface while a polarizing optic elsewhere in the system sat at a certain angle) until a certain low error is reached in the standard deviation... then to move on to the next programmed angle and do the same. I then download the average and error for the photon count as a function of angle... and fit the data using the last method discussed above... so that I could get the lowest error in the parameter (variable) I was looking for at the time (which was a material property of the surface).
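The instrument-control code itself was custom, but the logic of "keep measuring until the error is small enough, then move on" might look something like this sketch, where read_counts stands in for whatever call actually talks to the photon counter:

```python
import numpy as np

def measure_until_converged(read_counts, target_sem, min_samples=5, max_samples=10000):
    """Keep taking readings until the standard error of the mean drops below
    target_sem (or max_samples is reached); return the mean and its error."""
    samples = []
    sem = float("inf")
    while len(samples) < max_samples:
        samples.append(read_counts())          # one photon-count reading
        if len(samples) >= min_samples:
            sem = np.std(samples, ddof=1) / np.sqrt(len(samples))
            if sem < target_sem:
                break
    return np.mean(samples), sem

# Sweep: at each polarizer angle, average until the error is small enough,
# record (angle, mean, error), then fit that table with a weighted fit.
```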
I'm sure this gives you a lot to think about and look up... but note that you can certainly take data using simple equipment and explore data analysis and error analysis on your own. (There are some great books out there on error analysis; in our lab we kept a copy of
https://www.amazon.com/dp/0072472278/?tag=pfamazon01-20)