Initially I was wondering how the weather data is analyzed.
It would seem that, in some respects, the temperature on one day depends on the weather of the previous day; i.e. if the average temperature was X on day 1, it is likely to be similar on day 2, much as if the temperature is X at noon, at 1 PM it should be close to X rather than far from it.
Nevertheless, such trending effects should diminish the farther apart in time the records were taken, and more independence can be expected. For example, the effect of Day 1 of Year 1 on Day 1 of Year 2 can be expected to be less than the effect of Day 1 on Day 2 of the same year. By "effect" I mean all the variables that come into play in changing the temperature.
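One way to make that precise is to compute the autocorrelation of the series at different lags and watch it fall off. A minimal sketch in Python, using a synthetic AR(1) series as a stand-in for deseasonalized daily temperatures (a real analysis would replace `anomalies` with the measured series, seasonal cycle removed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for daily temperature anomalies (seasonal cycle
# removed): an AR(1) process where each day partly "remembers" the day before.
n = 5 * 365
anomalies = np.empty(n)
anomalies[0] = 0.0
for t in range(1, n):
    anomalies[t] = 0.8 * anomalies[t - 1] + rng.normal(0.0, 1.0)

def autocorr(x, lag):
    """Pearson correlation between a series and a copy of itself shifted by `lag`."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print("lag   1 day :", autocorr(anomalies, 1))    # ~0.8: adjacent days strongly related
print("lag 365 days:", autocorr(anomalies, 365))  # ~0.0: a year apart, effectively independent
```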
With a large enough data set, one can analyze the differences in temperature from one hour to the next, day to day, month to month, or year to year, citing averages and standard deviations as the case may be. Extreme temperature changes can then (possibly) be ruled out (or in) as outliers or anomalies if they fall outside 3 or 4 standard deviations. Three standard deviations should include 99.73% of the data for a normal distribution.
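A sketch of that procedure (the `flag_outliers` helper and the data are hypothetical; real records would be fed in instead):

```python
import numpy as np

def flag_outliers(temps, n_sigma=3.0):
    """Flag day-to-day changes more than n_sigma standard deviations
    from the mean change; returns the indices of the flagged days."""
    temps = np.asarray(temps, dtype=float)
    diffs = np.diff(temps)                       # temps[i+1] - temps[i]
    mu, sigma = diffs.mean(), diffs.std(ddof=1)
    mask = np.abs(diffs - mu) > n_sigma * sigma
    return np.nonzero(mask)[0] + 1, diffs

# Hypothetical data: mild day-to-day noise plus one abrupt 15-degree spike.
rng = np.random.default_rng(1)
temps = 20 + rng.normal(0.0, 1.5, 100)
temps[50] += 15

flagged, diffs = flag_outliers(temps)
print("flagged days:", flagged)   # the jump into day 50 and back out on day 51
print("mean change %.2f, std %.2f" % (diffs.mean(), diffs.std(ddof=1)))
```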
Temperature data certainly looks normally distributed. One simple test you can do on your data is a normal probability plot: the cumulative frequency plotted against the values, on a normal-probability scale, should fall on a straight line. Some skew will be evident (it always is, even for things like checking for defective bolts, because the sample is limited and not the whole population; but that is what statistics is all about, dealing with a sample of a whole population). How much skew away from a normal distribution there is would be another analysis. Would the chi-squared test be useful, or are other tests more pertinent?
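Both versions of the check are easy to run with scipy; here is a minimal sketch (synthetic data as a placeholder for the real day-to-day differences):

```python
import numpy as np
from scipy import stats

# Hypothetical sample standing in for day-to-day temperature changes.
rng = np.random.default_rng(2)
diffs = rng.normal(0.0, 2.0, 1000)

# Probability-plot version of the straight-line test: if the data are normal,
# the ordered values plotted against normal quantiles lie on a line (r near 1).
(osm, osr), (slope, intercept, r) = stats.probplot(diffs, dist="norm")
print("probability-plot correlation r = %.4f" % r)

# Chi-squared goodness of fit: bin the data and compare observed counts with
# what a normal distribution fitted to the sample predicts in each bin.
mu, sigma = diffs.mean(), diffs.std(ddof=1)
edges = np.linspace(diffs.min(), diffs.max(), 11)          # 10 bins
observed, _ = np.histogram(diffs, edges)
expected = len(diffs) * np.diff(stats.norm.cdf(edges, mu, sigma))
expected *= observed.sum() / expected.sum()                # match totals
chi2, p = stats.chisquare(observed, expected, ddof=2)      # 2 fitted parameters
print("chi-squared = %.2f, p = %.3f" % (chi2, p))

# D'Agostino's K^2 test checks skewness and kurtosis directly, so it answers
# the "how much skew" question more pointedly than chi-squared does.
stat, p2 = stats.normaltest(diffs)
print("normaltest p = %.3f, sample skewness = %.3f" % (p2, stats.skew(diffs)))
```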
I would thus assume a normal distribution and proceed with a statistical analysis, but definitely take note of how much skew is evident.
This is from 2002: http://naldc.nal.usda.gov/download/18988/PDF