# Distributions of temperature

1. Are the temperatures over 24 hours normally distributed?
   1. Over 1 year?
   2. Over 15 years?
   3. Is there a difference in distribution depending on the time span?
2. Are MIN temperatures per day, i.e. the coldest temperature measured over a 24-hour period, normally distributed?
   1. Over one month?
   2. Over one year?
   3. Over 15 years?
   4. Is there a difference in distribution depending on the time span?

256bits
Gold Member
Not sure what you are asking.

By "normally distributed", are you asking about statistical analysis of the recorded temperatures for a location, with a mean, mode, and standard deviation?
Or, since you mention one year and 15 years, perhaps you are asking about how the weatherman gets his or her numbers when saying "the daily average for today is XX degrees"?

Maybe give some context on what led you to ask the question.


Hmm, thanks. You are probably right, I haven't phrased the question well. Forget the first question, but let me try to phrase a problem around the second one better.

Let's say Eric the Eskimo lives in a city in Greenland, where the weather is measured at a station every day. Each day the station records the minimum and the maximum temperature. Then they take the minimums (and maximums, but we're interested in minimums for this example) and calculate the average minimum temperature per month: all the daily MINs are summed and divided by the number of days.

This is measured month after month, year after year.

So then you calculate the mean and spread of the average MIN temperature, and treat the mean MIN temp as a stochastic variable with a normal distribution. You'd get something like N(-20, 5), i.e. a mean of -20 degrees and a standard deviation of 5 degrees. But this is the mean of the average MIN per month. So the question was whether you can assume a normal distribution here.

Furthermore, we are interested in knowing how often Eskimos experience temperatures below, say, -35. Meaning we look at Eric and all his Eskimo friends living all over Greenland; wherever they live, they each have their own weather station closest to their village, with similar data, so we have a normal distribution, mean, and standard deviation for each location.

I'm struggling a bit with how to phrase my final question, but for the population of Eskimos, let's say there are 5000 of them in 10 different villages, we'd want to know:
- What is the probability of any one of them experiencing temperatures below a certain degree? I think I can calculate this as P(X < -35) and look it up in a z-score table. The probability will differ between locations, since each location has its own distribution, so you can get the probability by location. This is still the mean MIN temperature, though.

But given these differences in probability by location, is there any way of saying how many in the total population of 5000 will experience -35, and/or how often?

I guess you could say that, with a probability of 90% (or whatever probability you want to use), THIS many Eskimos will have experienced -35 in any given year: take the 0.9 probability, the mean, and the standard deviation, calculate for each location, given its distribution, at what temperature that 90% probability lies, and then sum up all locations and Eskimos together?

I know I'm not explaining it very clearly; I'm still struggling to phrase it, but I hope you understand what I am getting at.
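The z-table lookup for P(X < -35) and the 90% cut-off idea can both be sketched with Python's standard-library `NormalDist`; the N(-20, 5) parameters below are just the made-up example numbers from this post.

```python
from statistics import NormalDist

# Hypothetical monthly-MIN distribution for one village: N(-20, 5)
village = NormalDist(mu=-20.0, sigma=5.0)

# P(X < -35): probability a month's MIN temperature falls below -35 degrees
p_below = village.cdf(-35.0)
print(f"P(X < -35) = {p_below:.4f}")   # about 0.0013 (z = -3)

# The reverse question: which temperature is undercut in only 10% of months?
t90 = village.inv_cdf(0.10)
print(f"10% of months are colder than {t90:.1f} degrees")
```

`inv_cdf` is the same inversion a z-table does in the other direction, so both the "probability below a threshold" and the "threshold at a given probability" questions use the same object.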

256bits
Gold Member
That's a good explanation.
There are people here who know a lot more about statistical analysis of this sort than I do. Hopefully someone chimes in.
I'll give it some thought and try to get back to you.

I have come to the conclusion myself that if data is measured monthly, the MIN temperatures will not be normally distributed; it would be a bimodal distribution. However, if you only take data from winter and summer respectively, they are reasonably normally distributed, though a bit skewed. Does anyone know how to calculate probabilities from a skewed normal distribution?
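For the skewed case, one option that avoids a closed-form CDF is Monte Carlo: draw samples from a skew-normal distribution and count the fraction below the threshold. The sketch below uses the standard construction that Z = delta*|U0| + sqrt(1 - delta^2)*U1 is skew-normal with shape alpha, where delta = alpha/sqrt(1 + alpha^2); the shape, location, and scale values are invented for illustration.

```python
import math
import random

def skew_normal_sample(alpha, loc, scale, rng):
    """Draw one skew-normal variate via the |U0| construction:
    delta*|U0| + sqrt(1 - delta^2)*U1 is skew-normal with shape alpha."""
    delta = alpha / math.sqrt(1.0 + alpha * alpha)
    u0, u1 = rng.gauss(0, 1), rng.gauss(0, 1)
    z = delta * abs(u0) + math.sqrt(1.0 - delta * delta) * u1
    return loc + scale * z

rng = random.Random(42)
# Made-up winter-MIN parameters: left-skewed (alpha < 0), centred near -22
samples = [skew_normal_sample(-3.0, -18.0, 6.0, rng) for _ in range(100_000)]

# Monte Carlo estimate of P(X < -25); no closed-form CDF needed
p = sum(s < -25.0 for s in samples) / len(samples)
print(f"P(X < -25) ~ {p:.3f}")
```

If an exact answer is wanted, the skew-normal CDF involves Owen's T function; libraries such as SciPy implement it directly, but the sampling route above needs only the standard library.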

jim mcnamara
Mentor
First off, say for Albuquerque, New Mexico: NOAA and other agencies rework and republish daily temperature averages every 10 years, because decadal "normal" temperatures are not normally distributed from decade to decade, due to trending. Monthly temperatures have the same issue: trending. Example: because daily averages within a month trend upward or downward in temperate regions (especially in months near the equinoxes), monthly data should not really be treated as coming from a single Gaussian distribution. Because of the decadal changes, you would also see that, say, April 21 temperature data over a period of years displays trends.
This does not mean you cannot calculate mean temperatures by month over the period of a year. It is just that assuming the data is normally distributed may not be a good assumption.

Try:
https://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php
NOAA also has tons of microclimatic data: daily temps including max/min, and hourly temps, for locations all over the US. Other agencies elsewhere maintain similar archives. Have fun.


I think I understand your point, and I was sort of on that track myself after looking at my data. However, the question remains: if you cannot model the data as a normal distribution, is there any other way of calculating probabilities the way you can with a normal distribution? I.e., if I want to know the probability of the temperature dropping below a certain degree at a location, how would I calculate that probability without assuming any distribution?
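One standard distribution-free answer is the empirical CDF: estimate P(X < t) simply as the fraction of recorded values below t, with no model assumed. A minimal sketch with invented station readings:

```python
from bisect import bisect_left

def empirical_prob_below(data, threshold):
    """Distribution-free estimate of P(X < threshold):
    the fraction of observations strictly below it."""
    s = sorted(data)
    return bisect_left(s, threshold) / len(s)

# Illustrative monthly MIN readings for one station (made-up numbers)
mins = [-31, -28, -24, -19, -12, -6, -2, -1, -5, -13, -21, -27,
        -33, -26, -22, -18, -10, -7, -3, -2, -6, -14, -23, -29]

print(empirical_prob_below(mins, -20))  # -> 10/24, about 0.417
```

The trade-off is that the estimate is only as good as the sample: rare extremes (the far tail) are exactly where the empirical CDF has the least data, which is why people reach for fitted distributions in the first place.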

jim mcnamara
Mentor
I do not know. I believe there are statistical methods to handle predictions based on trending data. Perhaps @Stephen Tashi or @chiro would have a suggestion.

256bits
Gold Member
Initially I was wondering how the weather data is analyzed.

It would seem that, in some respects, the temperature data on one day depends on the weather data from the previous day: if it was X degrees (average) on day 1, it is most likely to be similar on day 2, much as, if it is X degrees at noon, the temperature at 1 PM should be closer to X than far from it.

Nevertheless, such trending effects should be minimized the farther apart in time the records were taken, and more independence can be expected. For example, the effect of Day 1, Year 1 on Day 1, Year 2 can be expected to be lesser than the effect of Day 1 on Day 2 of the same year. By "effect" I mean all the variables that come into play in temperature changes.

With a large enough data set, one can analyze the difference in temperature from one hour to the next, or day to day, month to month, year to year, citing averages and standard deviations as the case may be. Extreme temperature changes can then (possibly) be ruled out, or in, as outliers or anomalies, depending on whether they fall within a requirement of 3 or 4 standard deviations. Three standard deviations should include 99.74% of the data for a normal distribution.
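The differencing-plus-sigma-rule idea above can be sketched as follows; the daily readings are invented, with one anomalous -35 spliced in:

```python
from statistics import mean, stdev

# Illustrative daily MIN temperatures; one anomalous -35 reading included
daily_min = [-18, -17, -19, -18, -16, -17] * 4 + [-35] + \
            [-17, -18, -16, -17, -18, -19]

# Work with day-to-day differences, as suggested above
diffs = [b - a for a, b in zip(daily_min, daily_min[1:])]
m, s = mean(diffs), stdev(diffs)

# Flag changes more than 3 standard deviations from the mean change;
# (i + 1, d) pairs the flagged difference with the day it ends on
outliers = [(i + 1, d) for i, d in enumerate(diffs) if abs(d - m) > 3 * s]
print(outliers)  # the -18 drop into, and +18 jump out of, the -35 day
```

Note that a large anomaly inflates the standard deviation of the differences it is part of, so a single spike may only just clear the 3-sigma bar; robust spread measures (e.g. median-based) are often preferred for exactly this reason.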

Temperature data certainly looks like a normal distribution. You can do one simple test on your data: the cumulative frequency, plotted against value on normal-probability paper, should be a straight line. Some skew may be evident (it always is, even for such things as checking for defective bolts, since our sample size is limited and not the whole population; but that is what statistics is all about, dealing with a sample of a whole population). How much skew away from a normal distribution there is would be another analysis. Would the chi-squared test be useful, or are other tests more pertinent?

I would thus assume a normal distribution and proceed with a statistical analysis. But definitely take note of how much skew is evident.

This is from 2002.

Stephen Tashi
i.e. if i want to know the probability of the temperature dropping below a certain degree at a location, how would i calculate that probability without assuming any distribution?

To think about the probability of an event, you must define the set of possible events precisely, and you must define how that set of events is sampled.

What is your definition of the event "the temperature dropping below a certain degree at a location"? How do I take a sample of such an event? For example, do I pick a specific temperature (like 40 C), pick a specific location (like Tucson, AZ), and then pick a day at random (such as a random date from 2017)?

Applying the mathematical concept of probability requires that you define a set of "events" and then define how a sample is taken from that set. An individual event is a definite result (e.g. "the coin lands heads", or "the minimum temperature for Tucson, AZ on Dec. 13, 1915 was 43 F"). Taking a sample of an event requires some process (natural or man-made) that introduces randomness (e.g. "toss the coin", "pick a date at random").


OK, thanks. I'm going to go with a normal distribution even though it's a bit skewed. Data is measured each month, I believe, and thus the distribution would represent the probabilities of getting a certain MIN temperature for a month(?). As you say, when it is measured is important, and it would also change what the distribution is modelling, right?
So, in doing this, I can for instance calculate the probability of an Eskimo in a certain location experiencing a month with a MIN temperature of, let's say, -20 or less.

However, look at 3 different Eskimos in 3 different locations (i.e. each with their own normal distribution modelling their local weather). Let's say:

N(-15, 10) => P = 0.31 of experiencing -20 or less
N(-12, 8) => P = 0.16
N(-16, 5) => P = 0.21
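For what it's worth, those three figures check out against the normal CDF (reading the second parameter of N(mu, sigma) as the standard deviation, as in the post):

```python
from statistics import NormalDist

# The three villages' monthly-MIN distributions from the post
villages = {"A": NormalDist(-15, 10), "B": NormalDist(-12, 8),
            "C": NormalDist(-16, 5)}

for name, dist in villages.items():
    p = dist.cdf(-20)  # probability of a monthly MIN of -20 or less
    print(f"village {name}: P(X <= -20) = {p:.2f}")
```

This prints 0.31, 0.16, and 0.21 respectively, matching the z-table values.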

But how can I answer "How many Eskimos will experience -20 or less in a given year?" Maybe the question is not relevant or not correctly phrased?

Since the distributions represent months, I can multiply each probability by 12:

12 * 0.31 = 3.72 months
12 * 0.16 = 1.92 months
12 * 0.21 = 2.52 months

This would be the expected number of months per year in which they will have -20 or less(?). So you could say that, on average, all 3 will experience -20 more than 1 month per year, but only 2 of them will experience it more than 2 months per year. I don't know what can be said about the population as a whole with regard to experiencing a certain temperature.
Maybe just summing up the months of all Eskimos says something useful about the population as a whole? In total they will experience 8.16 months, which on average is 2.72 months per capita. Is that a truthful statement? Does it have any probability attached to it, or is it just the long-term expected value?
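The same arithmetic with unrounded CDF values, for comparison (the total comes out at about 8.15 rather than 8.16 because the 0.31/0.16/0.21 figures are rounded to two decimals):

```python
from statistics import NormalDist

# The three villages' monthly-MIN distributions from the post
dists = [NormalDist(-15, 10), NormalDist(-12, 8), NormalDist(-16, 5)]

# Expected number of months per year with MIN <= -20, per location
months = [12 * d.cdf(-20) for d in dists]

print([round(m, 2) for m in months])          # about [3.70, 1.90, 2.54]
print(f"total: {sum(months):.2f} months")     # about 8.15
print(f"per capita: {sum(months) / 3:.2f}")   # about 2.72
```

These totals are long-run expected values by linearity of expectation; they carry no probability statement by themselves, since the actual count in any one year varies around them.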

Or could you turn it around and, with a probability of 95%, say that they will experience this temperature somehow?
(Using the function NORMINV in Excel:)

NORMINV(0.05; -15; 10) = -31, basically saying that 95% of all months go down to at most -31, a range which includes -20...
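Excel's NORMINV has a direct stdlib equivalent in Python, for checking the -31 figure:

```python
from statistics import NormalDist

# Excel NORMINV(0.05, -15, 10): the 5th percentile of N(-15, 10)
t = NormalDist(-15, 10).inv_cdf(0.05)
print(f"5th percentile: {t:.1f}")  # about -31.4, i.e. only 5% of
                                   # months are colder than this
```

Put the other way around: in 95% of months the MIN stays above about -31.4, so -31 is the cold end of the "typical" range, not an upper bound on how cold -20 months are.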

I need to figure out a good way to say anything sensible about the temperature experience of the population as a whole. Any ideas?

Stephen Tashi
But how can I answer "How many Eskimos will experience -20 or less in a given year?" Maybe the question is not relevant or not correctly phrased?

It isn't correctly phrased. Consider the question "How many heads will occur in 10 tosses of a fair coin?" There is no specific number of heads that must occur: there might be zero heads, or one head, or two heads, etc. There is, however, a specific answer to the question "What is the expected number of heads that will occur in 10 tosses of a fair coin?"

In your problem, there may be a specific answer to the question "What is the expected number of Eskimos that will experience a temperature of -20 or less in a randomly selected year?"

It isn't clear what calculations you are making with the normal distributions. It isn't sufficient to know the distribution of temperatures at each location if there is a correlation between what happens at two locations. For example, at villages near each other, when one village gets cold, the other village might also tend to get cold.
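The point about correlation can be made concrete with a small simulation: give each village's standardized temperature a shared component with weight rho. The expected count of cold villages is the same either way, but correlation widens the spread of that count. The village parameters are the made-up N(-15, 10), N(-12, 8), N(-16, 5) from earlier in the thread; rho = 0.8 is arbitrary.

```python
import random
from statistics import mean, stdev

def simulate_cold_villages(rho, n_months=20_000, seed=1):
    """Simulate monthly MINs for three villages whose standardized
    temperatures share a common component with weight rho; return the
    mean and std dev of the count of villages at or below -20."""
    rng = random.Random(seed)
    params = [(-15, 10), (-12, 8), (-16, 5)]   # (mu, sigma) per village
    counts = []
    for _ in range(n_months):
        shared = rng.gauss(0, 1)               # region-wide cold snap
        count = 0
        for mu, sigma in params:
            local = rng.gauss(0, 1)
            z = rho ** 0.5 * shared + (1 - rho) ** 0.5 * local
            if mu + sigma * z <= -20:
                count += 1
        counts.append(count)
    return mean(counts), stdev(counts)

print(simulate_cold_villages(rho=0.0))  # independent villages
print(simulate_cold_villages(rho=0.8))  # same mean count, larger spread
```

With independence the count is a sum of independent Bernoulli variables; with rho > 0, cold months cluster, so "all three villages cold at once" becomes more likely even though each village's own probability is unchanged.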


Ah yes, that's true, but for simplification, let's assume that they are independent.

Or, with my other idea of summing up the expected number of months per year, you can get the total number of months for the population (and calculate the average or median if you wish). So you can say that the Eskimo population will on average experience X months per year that are -20 or below, or that the median Eskimo will experience this many months... it's like quoting the average or median income of the US(?)


This is measured month after month, year after year.

Temperature is a number that determines the level of excitation in matter, entering by the fourth power in units of W/m^2. When watts are involved, we know that temperature is an average over a period of time no longer than a second. I consider it to be practically instantaneous, even if a second cannot always be considered a short amount of time.

Since a change in W/m^2 is something that happens every second, I find it weird to use it for an average over months or years. It would be more appropriate to use kWh, for example.

So then you calculate the mean and spread of the average MIN temperature, and treat the mean MIN temp as a stochastic variable with a normal distribution. You'd get something like N(-20, 5), i.e. a mean of -20 degrees and a standard deviation of 5 degrees. But this is the mean of the average MIN per month.

I would say that the stochastic variable is everything between the MIN and MAX, as they define the limits of the stochastic variations.

And if we use the MAX and MIN as the definition of stochastic variation, combined with the unit W/m^2, we get the stochastic variable per second, and that variation is quite small at any point in time. When considering stochastic variation over longer periods of time, I think a unit other than a value of change per second would be more appropriate. But if the stochastic variation per second is small, I would guess that the same relationship would be found over longer periods of time. I think such small variations would indicate a stable steady state in the system. Do you agree?

davenn
Gold Member
2021 Award

I think you misunderstand the question; have another read. You don't even have to go into units of power as you have done.

Dave