Is an infinite series of random numbers possible?

Loren Booda
Is an infinite series of [nonrepeating] random numbers possible?

That is, can the term "random" apply to a [nonrepeating] infinite series?

It seems to me that Cantor's logic might not allow the operation of [nonrepeating] randomization on a number line approaching infinity.
 
I'm not sure of an answer, but allow me to put two thought-experiments on the table to discuss.

Experiment A: this produces a rational approximation to a real in the interval (0,1]. Expressed in base 2, the number is constructed as follows: toss a fair coin and choose the first binary digit (at position 2^-1) accordingly (say, heads = 1, tails = 0). Each subsequent toss gives you another digit. When you get tired, or consider that your approximation is good enough, stop tossing and use the fraction so far. If you have two of these fractions and are worried that they may be equal, compute extra digits for each number (with more coin tosses) until they differ.
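
For what it's worth, here is a minimal Python sketch of Experiment A (the function name, and using a float in place of an exact dyadic fraction, are my own simplifications):

import random

def experiment_a(n_digits):
    # Approximate a uniform real in (0,1] by tossing a fair coin for each
    # binary digit: heads = 1, tails = 0.
    x = 0.0
    for k in range(1, n_digits + 1):
        bit = random.randint(0, 1)   # one fair coin toss
        x += bit * 2.0 ** -k         # digit at position 2^-k
    return x

print(experiment_a(20))   # stop whenever the approximation seems good enough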

Experiment B: this produces a random number in the interval [0, pi), not uniformly distributed and possibly repeated (the latter could be solved by discarding a repeated number and producing a new one). Proceed as in Experiment A, but definitely terminate the experiment at the first 0 digit (tails); then return pi times your fraction.
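
And a matching sketch of Experiment B, under the same caveats:

import math
import random

def experiment_b():
    # Record 1-digits while the coin comes up heads; the first tails (a 0
    # digit) definitely terminates the experiment.
    frac, k = 0.0, 1
    while random.randint(0, 1) == 1:
        frac += 2.0 ** -k
        k += 1
    return math.pi * frac   # lies in [0, pi), but is not uniform there

print(experiment_b())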
 
I would think the proof would be very similar to "Are there infinitely many prime numbers?"
 
Define random.
Define infinite series. (I'm pretty sure you meant sequence here)
 
micromass said:
Define random.
Define infinite series. (I'm pretty sure you meant sequence here)

Thanks, I did mean sequence, and by "infinite" I intend for the count of random numbers in that sequence to approach infinity.

By "random" I mean eventually exhausting all relations among numbers.

__________


Is the cardinality of random numbers the same as that for reals?

Might random numbers by their nature tend toward infinity?
 
archangel95 said:
I would think the proof would be very similar to "Are there infinitely many prime numbers?"

Not entirely ^^

You see, prime numbers are NOT random. They are connected by the fact that they can only be divided by themselves and one.

It's incredibly unlikely that, if a proof of either of these were found, it would be similar to a proof of the other.
 
Loren Booda said:
Thanks, I did mean sequence, and by "infinite" I intend for the count of random numbers in that sequence to approach infinity.

By "random" I mean eventually exhausting all relations among numbers.

__________


Is the cardinality of random numbers the same as that for reals?

Might random numbers by their nature tend toward infinity?

If it is a sequence, by definition it will be countable. The numbers themselves can be confined to a given interval, since uncountably many numbers are available there.
 
Please let me amend my original questions:

Is an infinite set of exclusively nonrepeating random numbers possible?

Is "random" analogous to "exhausting all relations among numbers"?

__________


Is the cardinality of random numbers the same as that for reals?

Might random numbers by their nature tend toward infinity more rapidly than reals?
 
You have to define random. By the definition of an infinite sequence, there will always be some generating function that represents your sequence (even if it can only be expressed as a mapping). I suppose you could say that it is random if the generating function can't be expressed in terms of certain types of other functions, like elementary functions.

But until you specifically define random, by the reasoning that my choice of generating function is random, I can say that {2, 4, 6, ...} is random, because I thought of it randomly.
 
  • #10
Loren Booda said:
Please let me amend my original questions:

Is an infinite set of exclusively nonrepeating random numbers possible?

Is "random" analogous to "exhausting all relations among numbers"?

__________


Is the cardinality of random numbers the same as that for reals?

Might random numbers by their nature tend toward infinity more rapidly than reals?

What do you mean by a random number? Is 6 a random number? Is pi a random number?

Do you perhaps mean non-computable or non-definable number?
 
  • #11
TylerH said:
You have to define random. By the definition of an infinite sequence, there will always be some generating function that represents your sequence (even if it can only be expressed as a mapping). I suppose you could say that it is random if the generating function can't be expressed in terms of certain types of other functions, like elementary functions.

But until you specifically define random, by the reasoning that my choice of generating function is random, I can say that {2, 4, 6, ...} is random, because I thought of it randomly.

SteveL27 said:
What do you mean by a random number? Is 6 a random number? Is pi a random number?

Do you perhaps mean non-computable or non-definable number?

The set of random numbers exhausts integral order and corresponding numerical magnitude. Random numbers may be generated by exchanging orders with magnitudes.
 
  • #12
I know it sounds philosophical, but how would you generate a random sequence of numbers?

I mean if you want to 'generate' something, you'll have to use some apparatus (your mind, a computer, a cesium atom), and you don't know what makes the apparatus pick its numbers.

I don't believe there's something like random numbers.
 
  • #13
Loren Booda said:
The set of random numbers exhausts integral order and corresponding numerical magnitude. Random numbers may be generated by exchanging orders with magnitudes.

Perhaps your question is: are there permutations of infinite length? (Just trying to guess your meaning, though.)
 
  • #14
MathematicalPhysicist said:
I don't believe there's something like random numbers.

Can random numbers be defined but not generated?

Dodo said:
Perhaps your question is: are there permutations of infinite length? (Just trying to guess your meaning, though.)

I'm not sure. How about the permutation between order and number which I mentioned previously?
 
  • #15
Loren Booda said:
The set of random numbers exhausts integral order and corresponding numerical magnitude. Random numbers may be generated by exchanging orders with magnitudes.

You're using some words in ways that are unfamiliar to me.

* You referred to "the set of random numbers," but I don't know what a random number is. Can you give some examples of random numbers? Is 6 a random number? Pi?

* What do you mean "exhausts integral order?"

* What do you mean by generating random numbers by "exchanging orders with magnitudes?" Can't imagine what that means. Order is the relation by which 5 < 6, for example. Magnitude is the relation that ignores the difference between -5 and 5. How do you exchange these properties? And how do you use these ideas of order and magnitude to generate random numbers? Do you mean particular random numbers or the entire set of random numbers? And what set is that?

I know you have something in mind but it's difficult for me to understand what you mean.

Are you referring to numbers that are generated unpredictably? Flip a bit if a cosmic ray passes through a particular square millimeter in the next millisecond?

Or by "random" do you mean algorithmic randomness? A number is random if it's incompressible via a finitely describable algorithm? So any number we can name is not random, but we know that there must be uncountably many algorithmically random numbers. Those are the non-constructible or non-definable numbers. (There's a subtle difference between those two concepts.)

Do any of those questions make sense in terms of what you're trying to do?
 
  • #16
SteveL27 said:
You're using some words in ways that are unfamiliar to me.

* You referred to "the set of random numbers," but I don't know what a random number is. Can you give some examples of random numbers? Is 6 a random number? Pi?

* What do you mean "exhausts integral order?"

* What do you mean by generating random numbers by "exchanging orders with magnitudes?" Can't imagine what that means. Order is the relation by which 5 < 6, for example. Magnitude is the relation that ignores the difference between -5 and 5. How do you exchange these properties? And how do you use these ideas of order and magnitude to generate random numbers? Do you mean particular random numbers or the entire set of random numbers? And what set is that?

I know you have something in mind but it's difficult for me to understand what you mean.

Are you referring to numbers that are generated unpredictably? Flip a bit if a cosmic ray passes through a particular square millimeter in the next millisecond?

Or by "random" do you mean algorithmic randomness? A number is random if it's incompressible via a finitely describable algorithm? So any number we can name is not random, but we know that there must be uncountably many algorithmically random numbers. Those are the non-constructible or non-definable numbers. (There's a subtle difference between those two concepts.)

Do any of those questions make sense in terms of what you're trying to do?

I appreciate your patience, SteveL27. Upon consideration, your arguments make a lot of sense.

In this regard, allow me to modify my previous speculations to concentrate on a specific random number generator:

1. Start with an exclusive irrational number.

2. Limit its fractional part in digits by its integer value.

3. The resulting string is a random number.
 
Last edited:
  • #17
Loren Booda said:
I appreciate your patience, SteveL27. Upon consideration, your arguments make a lot of sense.

In this regard, allow me to modify my previous speculations to concentrate on a specific random number generator:

1. Start with an exclusive irrational number.

2. Limit its fractional part in digits by its integer value.

3. The resulting string is a random number.

What is an "exclusive" irrational number? What does it mean to limit its fractional part in digits by its integer value? Do you mean just take the part to the right of the decimal point?

Can you give an example?

Say I start with my favorite irrational, e, the base of the natural log. e = 2.71828...

Are you saying that .71828... is a random number? But it isn't, in either sense of the word.

* It's not unpredictable. In fact you can write down the well-known algorithm e = 1 + 1 + 1/2 + 1/6 + 1/24 + 1/120 + ... + 1/n! + ... and crank out as many digits as you like. It's completely deterministic, the opposite of random.

* It's not algorithmically random, in fact the finite string (sum as n goes from 0 to infinity of 1/n!) is a small number of symbols that precisely defines e.

Is e an "exclusive" irrational by your definition? Or do you mean something else?

Do you perhaps mean that the digits of e are normal, in the sense that any block of n digits occurs as often as any other block of the same length? It's not known whether e is normal, but is normality the characteristic you're interested in?
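
To make "normal" a bit more concrete, here's a crude empirical check in Python -- my own sketch, computing digits of e from the factorial series and tallying them; near-equal counts can hint at normality but never prove it:

from collections import Counter
from decimal import Decimal, getcontext

def digits_of_e(n):
    # First n decimal digits of e after the point, via e = sum of 1/k!.
    getcontext().prec = n + 10                 # working precision plus guard digits
    e, term, k = Decimal(0), Decimal(1), 0
    while term > Decimal(10) ** -(n + 10):     # stop once terms are negligible
        e += term
        k += 1
        term /= k
    return str(e)[2:n + 2]                     # strip the leading "2."

print(sorted(Counter(digits_of_e(5000)).items()))   # roughly 500 of each digit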
 
  • #18
SteveL27,

I am beginning to believe that "random number" is an oxymoron.

Numbers are defined by value, order and relation overall, while randomness violates those quantities together.

Random number sequences manifest near infinite uncertainty and zero probability.

There exist tests for non-random numbers, but not for random numbers.

No physical measurement has confirmed a random number.

The set of random numbers and the set of non-random numbers seem mutually exclusive. Is the set of reals their union?

__________

[Strike "exclusive," please.]

Some example outputs of my pseudorandom generator (I am assuming normality for irrationals):

3.162277660...

becomes .162

2.718281828...

becomes .71

A true random number generator is an impossibility, since it requires both non-determinism (randomness) and determinism (predictiveness).
 
Last edited:
  • #19
Loren Booda said:
SteveL27,

I am beginning to believe that "random number" is an oxymoron.

Numbers are defined by value, order and relation overall, while randomness violates those quantities together.

Random number sequences manifest near infinite uncertainty and zero probability.

There exist tests for non-random numbers, but not for random numbers.

No physical measurement has confirmed a random number.

The set of random numbers and the set of non-random numbers seem mutually exclusive. Is the set of reals their union?

__________

[Strike "exclusive," please.]

Some example outputs of my pseudorandom generator (I am assuming normality for irrationals):

3.162277660...

becomes .162

2.718281828...

becomes .71

A true random number generator is an impossibility, since it requires both non-determinism (randomness) and determinism (predictiveness).

It might help to think about randomness in terms of different orders of entropy. Entropy is probably the best and most useful way to quantify randomness: maximum entropy at all orders (independent, first-order conditional, second-order conditional, etc.) means that having past information gives you no advantage at all, since analyzing all past values with respect to each other would not help you predict the next one.

The best way to describe this is to have all of these distributions be uniform, because the uniform distribution is the one that maximizes entropy. If you do this for all possible conditional probabilities, then you will get a distribution that is purely random.

From this distribution you will get hints about the kinds of processes that you could construct.

If you want something random, but not purely random then you don't have to do anywhere near as much work, but if you want a process that is random the best way possible, you need to construct the above system and from there decide what kind of process would really emulate this distribution.
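
A small Python sketch of that last point (my own illustration, not anything formal): the uniform distribution maximizes Shannon entropy, and any non-uniform one scores lower.

import math

def entropy(dist):
    # Shannon entropy in bits of a discrete probability distribution.
    return -sum(p * math.log2(p) for p in dist if p > 0)

print(entropy([1 / 6] * 6))             # uniform over 6 states: log2(6) ~ 2.585 bits
print(entropy([0.5, 0.2, 0.2, 0.1]))    # any non-uniform distribution scores lower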
 
  • #20
chiro said:
It might help to think about randomness in terms of different orders of entropy. Entropy is probably the best and most useful way to quantify randomness: maximum entropy at all orders (independent, first-order conditional, second-order conditional, etc.) means that having past information gives you no advantage at all, since analyzing all past values with respect to each other would not help you predict the next one.

The best way to describe this is to have all of these distributions be uniform, because the uniform distribution is the one that maximizes entropy. If you do this for all possible conditional probabilities, then you will get a distribution that is purely random.

From this distribution you will get hints about the kinds of processes that you could construct.

If you want something random, but not purely random then you don't have to do anywhere near as much work, but if you want a process that is random the best way possible, you need to construct the above system and from there decide what kind of process would really emulate this distribution.

S = κ ln Ω

S = entropy

κ = Boltzmann's constant

ln = natural logarithm

Ω = number of states
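
A quick numerical illustration of the formula (a sketch only; the state count is invented):

import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K

def boltzmann_entropy(omega):
    # S = kappa ln(Omega), for Omega equally likely microstates.
    return k_B * math.log(omega)

print(boltzmann_entropy(2 ** 100))   # e.g. 100 independent two-state spins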

__________


1. Does true randomness accompany a transfinite number of states?

2. Is information about states restricted by a finite speed of light?

3. Can multiple states interfere, e.g. achieve minimum entropy?
 
  • #21
Loren Booda said:
1. Does true randomness accompany a transfinite number of states?

2. Is information about states restricted by a finite speed of light?

3. Can multiple states interfere, e.g. achieve minimum entropy?

Could you please explain what you mean by transfinite? I get the feeling it's a set-theoretic term but I don't want to make an erroneous judgment.

For number 2, I have to reiterate that the above talks about randomness with respect to the distributions of the states, which are related to each other and are independent of the actual process itself.

If you want to answer your question you need to consider further constraints that relate to a specific process. Again the above considers a general process with a particular property.

I am not a physicist, but what you need to do is specifically outline how a law or a relationship between variables modifies the properties of the distributional information of the various joint distributions which then modifies the entropy, and from that you can get an idea of the real measure of the random nature of the process.

You should note that physical systems are not purely random in the sense I have described above, because there are a lot of known deterministic features that we utilize and exploit every day for various purposes. If the world were absolutely, truly random and completely unpredictable, then the order that we observe and make use of every day would not be present.

For 3, I wish to say that minimization of entropy gives an indication of order while maximization of entropy gives us an indication of disorder. Physicists and natural scientists usually have a goal of finding order and this directly relates to entropy.

The other thing is that you don't just want to consider the entropy characteristics of the original distributions, but also those of possible transformations of the data and hence the possible transformations of the distributions that 'make sense'. 'Making sense' depends on both mathematical ideas and domain ideas, but for the mathematical ones you want to consider at the very minimum convergence, and probably topology and differentiability as well.

If you want an example of entropy minimization, think of entanglement. Instead of having two objects that would be classified as purely random, instead what we see is a reduction of entropy from that case since one has a direct effect on the other and this result shows a form of order that would otherwise not be seen in a purely random system.

In fact it is this property of minimum entropy in a variety of circumstances that has allowed us to obtain formulas like the ones you find in your science textbooks: it is this ability to quantify this order accurately enough that allows us to even understand this system we call reality or the 'universe', and I imagine that as time goes by we will be able to find transformations of our distributional representations that allow us to see even more order and subsequently a way to quantify it, just as Newton quantified gravity.
 
  • #22
chiro said:
Could you please explain what you mean by transfinite? I get the feeling it's a set-theoretic term but I don't want to make an erroneous judgment.

For number 2, I have to reiterate that the above talks about randomness with respect to the distributions of the states, which are related to each other and are independent of the actual process itself.

If you want to answer your question you need to consider further constraints that relate to a specific process. Again the above considers a general process with a particular property.

I am not a physicist, but what you need to do is specifically outline how a law or a relationship between variables modifies the properties of the distributional information of the various joint distributions which then modifies the entropy, and from that you can get an idea of the real measure of the random nature of the process.

You should note that physical systems are not purely random in the sense I have described above, because there are a lot of known deterministic features that we utilize and exploit every day for various purposes. If the world were absolutely, truly random and completely unpredictable, then the order that we observe and make use of every day would not be present.

For 3, I wish to say that minimization of entropy gives an indication of order while maximization of entropy gives us an indication of disorder. Physicists and natural scientists usually have a goal of finding order and this directly relates to entropy.

The other thing is that you don't just want to consider the entropy characteristics of the original distributions, but also those of possible transformations of the data and hence the possible transformations of the distributions that 'make sense'. 'Making sense' depends on both mathematical ideas and domain ideas, but for the mathematical ones you want to consider at the very minimum convergence, and probably topology and differentiability as well.

If you want an example of entropy minimization, think of entanglement. Instead of having two objects that would be classified as purely random, instead what we see is a reduction of entropy from that case since one has a direct effect on the other and this result shows a form of order that would otherwise not be seen in a purely random system.

In fact it is this property of minimum entropy in a variety of circumstances that has allowed us to obtain formulas like the ones you find in your science textbooks: it is this ability to quantify this order accurately enough that allows us to even understand this system we call reality or the 'universe', and I imagine that as time goes by we will be able to find transformations of our distributional representations that allow us to see even more order and subsequently a way to quantify it, just as Newton quantified gravity.

For number 1, by "transfinite" I attempt to attain the absolute limit of states where the system is purely random.

For number 2, I should have asked whether pure randomness can occur in a finite universe.

The relations within the equation for statistical entropy I gave are standard, and only simplistic logarithmic information derives from them.

States approaching infinity on a microscopic level are just as likely on a macroscopic level.

For number 3, quantum entropy would either involve destructive interference with change to the negative, or would involve constructive interference with change to the positive.

I believe statistical mechanics gives classical transformations of our distributional representations.

For entropy to have a true "law," I guess that entanglements must transfer their statistics instantaneously (rather than at a finite speed of c) and thermally.
 
  • #23
Loren Booda said:
For number 1, by "transfinite" I attempt to attain the absolute limit of states where the system is purely random.

For number 2, I should have asked whether pure randomness can occur in a finite universe.

This is a very good question.

I think the answer is going to be yes, because the state-space does not necessarily have to define the nature of the process.

The thing you have to remember is that a process can map a finite state space into a finite state space, like relating the history of die rolls to the probability of the next one.

The state-space itself is fixed, but the process can go on forever; having a finite state space doesn't mean that the underlying process is not purely random.

Think of the problem of whether you could define a coin-toss process so that every new toss has no arbitrage chance of being biased in any way given the entire history of the process. If you think you can define a process that is unpredictable in this way, your answer is yes. If you think you can't, your answer is no.

I haven't shown a proof, but if I were to give one I would try to show that there always exists a process such that, given N observations, the conditional distribution of the (N+1)th observation given all the prior observations has maximal entropy (i.e. all values are equally likely). If this is shown, then you have proven that your answer is a resounding yes.
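
Not a proof, but here's a quick empirical version of that statement in Python (my own sketch, checking only two-toss histories): for an independent fair coin, the estimated distribution of the next toss given any recent history stays flat.

import random
from collections import Counter, defaultdict

random.seed(1)                        # reproducible fair-coin tosses
tosses = [random.randint(0, 1) for _ in range(200000)]

# Estimate P(next toss = 1 | last two tosses); every history should give ~0.5.
follows = defaultdict(Counter)
for a, b, c in zip(tosses, tosses[1:], tosses[2:]):
    follows[(a, b)][c] += 1

for history, counts in sorted(follows.items()):
    print(history, counts[1] / (counts[0] + counts[1]))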

The relations within the equation for statistical entropy I gave are standard, and only simplistic logarithmic information derives from them.

The thing about entropy though is that you need to consider not just non-conditional entropy but conditional entropy as well.

To give you an example, imagine that you have a process corresponding to an infinite periodic sequence where one period consists of {0,1,2,3,4,5} in that order and repeats forever.

Now if you try to calculate P(X = a) for each a in {0,1,2,3,4,5}, you will always get 1/6, which implies maximal entropy.

But for this process we know that P(X_n+1 = a| X_n = (a-1) MOD 6) = 1 because of the periodicity. The entropy of this conditional distribution is zero, which implies absolute determinism.

From this statement, although the non-conditional entropy is maximal, the conditional ones are completely the opposite, and through this we have found complete order at this level: the process is deterministic.

The order of the process, if it exists, will be hidden somewhere in the conditional joint distributions, not in the non-conditional probability distribution.
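
Here is that staircase example worked out numerically (my own Python sketch): the marginal distribution looks maximally random, while the conditional entropy H(X_(n+1) | X_(n)) comes out as essentially zero.

import math
from collections import Counter

def entropy_bits(counts):
    # Shannon entropy in bits from a table of outcome counts.
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

seq = [n % 6 for n in range(60000)]   # the periodic staircase 0,1,2,3,4,5,0,...

h_marginal = entropy_bits(Counter(seq))               # H(X_n)
h_joint = entropy_bits(Counter(zip(seq, seq[1:])))    # H(X_n, X_(n+1))

print(h_marginal)             # log2(6) ~ 2.585 bits: looks maximally random
print(h_joint - h_marginal)   # H(X_(n+1) | X_n) ~ 0: actually deterministic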

States approaching infinity on a microscopic level are just as likely on a macroscopic level.

You need to be cautious about this: it may turn out, depending on what you define as macroscopic, that there is a different kind of entropy than what you would find on a microscopic level, and that you may get some kind of dimensionality reduction when you consider a particular macroscopic space vs a microscopic space. If this occurs (especially dimensionality reduction), you are going to get some kind of difference in classification of states between the two spaces.

Again, it needs to be defined what the macroscopic space is and if possible, the mapping between states in different spaces. I think you'll find that because of the way things are seen macroscopically in comparison to microscopically, there will be some kind of significant reduction in the state space for the macroscopic classifications in contrast to the microscopic ones which will look more like a projection than a bijection.

For number 3, quantum entropy would either involve destructive interference with change to the negative, or would involve constructive interference with change to the positive.

I'm not sure what you mean by this statement.

I believe statistical mechanics gives classical transformations of our distributional representations.

The thing I am more interested in is not just the non-joint distribution but the joint distribution in a general case. As I said above, order is not found in non-conditional statistical measures or distributions: it's like taking data, sorting it out and putting it into separate buckets, without considering how the relationships between data points impact the order found in the process.

For entropy to have a true "law," I guess that entanglements must transfer their statistics instantaneously (rather than at a finite speed of c) and thermally.

This is a very interesting point that I am going to have think about. Thanks.
 
  • #24
chiro,

Please explain the general meaning of the symbols and variables used in P(X_n+1 = a| X_n = (a-1) MOD 6) = 1.

Thanks for your dedication.
 
  • #25
Loren Booda said:
chiro,

Please explain the general meaning of the symbols and variables used in P(X_n+1 = a| X_n = (a-1) MOD 6) = 1.

Thanks for your dedication.

This is just a way of describing 1st order conditional properties for the staircase process {0,1,2,3,4,5} that keeps repeating in a periodic way.

I'll expand it out for all states, and then you should see how I used the mod function. Here we go:

P(X_(n+1) = 0 | X_(n) = 5) = 1
P(X_(n+1) = 1 | X_(n) = 0) = 1
P(X_(n+1) = 2 | X_(n) = 1) = 1
P(X_(n+1) = 3 | X_(n) = 2) = 1
P(X_(n+1) = 4 | X_(n) = 3) = 1
P(X_(n+1) = 5 | X_(n) = 4) = 1

All other probabilities for all other first order conditional combinations are zero and you can show this by various probability identities and exhaustion of the probability space.

The X_(n+1) refers to the "n+1"th observation for the process and the X_(n) refers to the "n"th observation for the process. You could for this example associate n as a time parameter of a one-dimensional process.

The above process can only take on the values {0,1,2,3,4,5} which means we only have to consider going from one value in this list to another value in this list.
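
Put differently (a sketch of mine, in Python), the whole table is just the 6x6 transition matrix of the process, with a single 1 in each row:

# Row a gives P(X_(n+1) = b | X_(n) = a); the condition a = (b - 1) MOD 6
# is the same as saying b = (a + 1) MOD 6.
P = [[1 if b == (a + 1) % 6 else 0 for b in range(6)] for a in range(6)]
for row in P:
    print(row)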
 
  • #26
Gedankenexperiments
__________

Please consider whether each of the following pairs is relatively entropic:

1. The big bang singularity and its imminent nonsingularity
2. Electron self-energy at a point and a spatial perspective
3. A cosmologist observing his self-inclusive universe
4. A closed universe and the black holes within
5. A quantum measurement and its measuring device
6. A vacuum of virtual particles
7. Turbulence at temperature T→∞
8. Black bodies at temperature T
__________

How many conditional entropies would there be given N non-conditional entropies?
__________

ΔS = ∫ δQ/T

The article at http://en.wikipedia.org/wiki/Negative_temperature#Examples reads:

""Since we started with over half the atoms in the spin-down state, initially this drives the system towards a 50/50 mixture, so the entropy is increasing, corresponding to a positive temperature. However, at some point more than half of the spins are in the spin-up position. In this case, adding additional energy reduces the entropy, since it moves the system further from a 50/50 mixture. This reduction in entropy with the addition of energy corresponds to a negative temperature."
 
  • #27
Loren Booda said:
How many conditional entropies would there be given N non-conditional entropies?

I will answer this part here and discuss my thoughts on the thought experiments later.

In terms of how many conditional entropies there are, this depends on the state-space.

Usually if we were to study the system classically, then we probably would have considered things in terms of spatial locality.

What I mean by this is that in a classical context, matter (or whatever kind of components make up what we call matter, energy and so on) would be treated in the way that things that are more local spatially have more of an effect on the physical properties of that matter, and thus affect everything from temperature onward, and thus its entropy. The easiest way to think about this from the local analytic viewpoint is to consider modelling a dynamical system like a fluid. Although you get all kinds of chaotic effects, typically everything is modeled as a continuum in which all the local effects add up to produce the final global behaviour.

In fact anything modeled with standard calculus uses this idea that, by knowing local changes (usually in the form of a derivative), the global changes can be found from how local changes accumulate, and this is why, when systems can be modeled this way, calculus is so useful: it gives us a framework for doing exactly this.

But now you have to consider the situation when you are analyzing things that are not spatially 'close' or local (in the context I mentioned above).

When this happens, we need to consider not only spatially local effects but things that are 'non-local'. And this is the kind of thing that needs to be considered in quantum mechanics, and a lot of experiments are working on trying to understand this very thing.

In a classical way of analysis (physically that is), this is not only completely foreign in terms of our intuitive understanding and experience, it is a lot harder to deal with mathematically.

In terms of your questions, I can only answer them without using physical constraints: in other words, I will try to attack these from a mathematical viewpoint and not from one which would be taken by physicists. This might be a little disappointing, but it will still, in my opinion, give you some more understanding.

The thing for all your problems is that as a general rule, order is found when entropy is minimized.

Now the thing is that what we call 'time' is only one kind of order. Depending on the system, there are most likely going to be 'many' kinds of orders. In classical physics, time itself has a very good order to it in such a way that the conditional entropies in this context are very highly minimized in a way that the models give us something that is highly predictable which is just a result of a very low conditional entropy in the context of the system with regard to various conditional measures.

Intuitively, with calculus, the way we order things is always in terms of locality with respect to some form of a variable that is usually temporal or spatial in nature (often a system that involves a mix of the two). In terms of temporal, it has grown slowly from observation, first from a macroscopic level and then to a microscopic level but the idea is the same: relate local changes in space and time to a process and use calculus to model some form of global behaviour of the physical world.

But in the context of a general high state-space, highly complex general process, it may have many different kinds of orders and one order will typically hide a lot of information about the system in general in such a way that although the mathematical conclusions are correct, the interpretation may be very limited and in some ways detract from an otherwise higher understanding.

There is no problem with finding orders, but it needs to be considered that there might be other orders either from the raw system itself or from a transformed variant that gives an insight that can not be seen from the existing order that has been either chosen or subsequently discovered.

I will have to look up some of these things specifically later on to see what they correspond to mathematically, but the key issue in the above is to first define the states and then slowly describe the conditional distributions that are derived mostly from how these states interact.

You may find that interactions are constrained between specific parts of the system in the same kind of manner that you get local interactions spatially in terms of classical physics, but the thing is that the constraints are not-spatially local or even temporally (in the way that we see it) local and in this context you need to use a different way of analyzing the system.

Mathematically the way to describe the conditional entropies would be firstly to define the collection of all possible conditional probability distributions and then define an entropy for each distribution. You could also define things like relative entropies as well, and all of this can be found in an information theory book; I recommend Cover and Thomas's Elements of Information Theory (2nd edition), which you can buy on Amazon if you are really interested (also look on Wikipedia if you just want definitions and not something as formal).
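
As a minimal code sketch of the relative entropy idea (names and the example numbers are mine): the Kullback-Leibler divergence measures how far a distribution sits from the uniform, maximally random one.

import math

def relative_entropy(p, q):
    # Kullback-Leibler divergence D(p || q) in bits between two discrete
    # distributions over the same outcomes.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(relative_entropy(skewed, uniform))   # > 0: the skew is detectable order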

What you'll find in highly ordered systems (no matter what the order) is that the majority of joint distributions have entropies of zero (or close enough to zero) that the order is easily determinable. If this is not the case, then it takes a lot more work (and subsequently 'appears' random).

I will look at the article in a little while: for now I hope the above has helped you.
 
Last edited:
  • #28
Loren Booda said:
Gedankenexperiments

ΔS = ∫ δQ/T

The article at http://en.wikipedia.org/wiki/Negative_temperature#Examples reads:

""Since we started with over half the atoms in the spin-down state, initially this drives the system towards a 50/50 mixture, so the entropy is increasing, corresponding to a positive temperature. However, at some point more than half of the spins are in the spin-up position. In this case, adding additional energy reduces the entropy, since it moves the system further from a 50/50 mixture. This reduction in entropy with the addition of energy corresponds to a negative temperature."

This is the very thing underlying the whole motivation behind thermodynamics since we always assume entropy to either stay the same or increase for a particular kind of system, and we associate the study of energy 'in general' to having this sort of property.

It should be noted that this framework of physics grew out of the exercise of studying 'heat', and unsurprisingly much of the energy we generate (later converted to electrical energy) is to this very day still produced from heat. With coal we generate heat and convert that to energy; with nuclear we do the same thing. The same goes for petroleum-based forms of energy as well. It's all based on heating stuff up and converting it to energy.

There are exceptions, for example hydroelectric power stations or wind power among other things, but for most energy generation we just create enough heat to make steam and drive a turbine, which in my mind is absolutely ridiculous, but that's the way it is.

The result of creating more 'order' out of a system has been done in the lab (specifically look for experiments done previously at the Australian National University and there are probably others) but not to the extent where we would do it so much on a 'large scale'.

I'll have a look at the actual definitions of how they define heat in detail later on if you like, but it is true that a decrease in the entropy characteristics with how 'heat' is defined in terms of that particular entropy measure (which is going to be implicitly defined by the physical model which I need to read) will result in a decrease of the associated quantity as it's defined.
 
  • #29
Loren Booda said:
Please consider whether each of the following pairs is relatively entropic:

1. The big bang singularity and its imminent nonsingularity

Now I want a disclaimer: this is my own opinion. If it helps you then more power to you, but keep in mind that it's just an opinion that will be based on a written/spoken argument relating to what I've said previously in this thread, and is not strictly mathematical in a formal sense.

For 1, this is a very good question. Here are my thoughts on this:

Many people have advocated that the 2nd law of thermodynamics should hold in the context of describing the physical universe with respect to the order that we call 'time' in that entropy should always be increasing if not at least staying the same.

To me I would say that this is only 'half-right' and in some ways misleading because an ever increasing entropy for a system means that the system gets 'more chaotic' if this happens for every form of entropy.

People talk about plates breaking, experiments with heat and other things that make a good argument for the entropy-increase scenario, and unsurprisingly time itself is defined by the 2nd law of thermodynamics (it's one way to define it, but it's a very important definition in physics).

But if you consider all the different kinds of entropies that exist, I see evidence that the above is clearly not true. We have a lot of order in terms of some known approximations in physics and other scientific systems. Look around and just see the order that exists on our planet in terms of life-forms behaving with one another and in terms of any phenomena that has a high amount of stability with respect to its environment. In other words, some things in some contexts are producing situations where things become 'more-ordered' rather than 'more disordered'.

This leads me to infer that applying the 2nd law of thermodynamics across the board to represent the entire universe is faulty reasoning, because if this were the case the universe would be, in every respect, in a complete and utter state of chaos, and this is not the case.

So with respect to entropies again I have to state that in a complex system there are going to be many different kinds of orders and I imagine when comparing and contrasting the different entropies of initial big bang and other states that the same argument needs to be applied. We currently do not have many different orders and when we start to get more insight and hence more different orders, we will start to explore this idea more clearly and more deeply.

2. Electron self-energy at a point and a spatial perspective

I will have to read up on this.

3. A cosmologist observing his self-inclusive universe

Again this has to do with the order that they are trying to apply. A cosmologist has a very different set of orders that they are considering with respect to a physicist that is studying macroscopic things at probably the level of the atmosphere at the high end or even with respect to a physicist that is studying how atoms behave in a very controlled environment.

To answer your question, you have to specify the kinds of orders used and because of this I can't give a decent answer to your question because it's too broad.

4. A closed universe and the black holes within

This is a very interesting question.

For this to be answered we would need to know how information is exchanged between things inside a black hole (in the event horizon) and things beyond the horizon.

If it turns out that information is exchanged (I think this is in debate currently) then that will make a huge difference with how we form constraints for the joint distributions and entropies and it also means we have to consider a system that is much much larger and more complex.

If things are completely isolated, then this simplifies things dramatically, but again, with Hawking's idea of evaporation from black holes, I have a feeling that if the theory is correct, or even if the idea is correct in terms of some form of radiation, then this means that essentially there is 'communication' (information exchange) going on, and this needs to be taken into account.

Also if there is some kind of entanglement that is not spatio-temporally local (i.e. action at a distance in the context of two different space-time boundaries) then this would make it even broader.

5. A quantum measurement and its measuring device

Ahh the measurement problem.

In terms of the measurement problem, again this is going to relate to any analysis of the joint distributions with respect to anything that is associated with the device.

It doesn't make our problem any easier because we will need to consider orders that are much much harder to extrapolate from the properties of our system than we currently do now, but again the idea of finding orders is the same except we are considering it in a different context.

You need to note that you will need to look at different orders other than the standard ones mentioned if they indeed do exist.

One thing I will mention though is that if there is some kind of arbitrage mechanism that exists to keep things stable then this could be used to formulate the properties of the various distributions and test it experimentally. I am not a physicist though.

The idea behind arbitrage in the way I am describing is that the system would have to account in whatever way it can so that there is not enough determinism in the system to produce a particular point of instability. If the system was weak enough so that it could be exploited to create instability that was detrimental to the function of the system itself, then this would cause a kind of 'system-wide turbulence' that would be utterly destructive.

It's my opinion, but it's based on the idea of creating a system that doesn't essentially 'blow up inadvertently'.

6. A vacuum of virtual particles

I don't understand enough about Quantum Theory to even give something even remotely useful for this particular question.

7. Turbulence at temperature T→∞

The thing that is missing for this question is the definition for the order known as 'turbulence'. I don't know of a single definition that is specific enough to define this. You can't analyze something you can not describe adequately. If you give me something specific enough, I'll try to answer your question.

8. Black bodies at temperature T

I am not a physicist, so I would need a bit more clarification about what you are asking.
 
Last edited:
  • #30
Blackbodies at temperature T -- http://en.wikipedia.org/wiki/Black_body A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence.

A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic radiation called black-body radiation. The radiation is emitted according to Planck's law, meaning that it has a spectrum that is determined by the temperature alone, not by the body's shape or composition.

A black body in thermal equilibrium has two notable properties:

It is an ideal emitter: it emits as much or more energy at every frequency than any other body at the same temperature.
It is a diffuse emitter: the energy is radiated isotropically, independent of direction.

An approximate realization of a black body is a hole in the wall of a large enclosure. Any light entering the hole is reflected indefinitely or absorbed inside and is unlikely to re-emerge, making the hole a nearly perfect absorber. The radiation confined in such an enclosure may or may not be in thermal equilibrium, depending upon the nature of the walls and the other contents of the enclosure.

A vacuum of virtual particles -- http://en.wikipedia.org/wiki/Vacuum_state In quantum field theory, the vacuum state (also called the vacuum) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.

According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space", and again: "it is a mistake to think of any physical vacuum as some absolutely empty void." According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.

The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles".

It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle:

ΔE Δt ≥ ħ,

(with ΔE and Δt the energy and time variations, and ħ the Planck constant divided by 2π) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.

Anthropic entropic principle -- I hypothesize that, rather than observers (life) being where entropy density is high, they exist where it is low. The act of observation itself could rely on semicoherent radiative interaction, and so tend participants toward lower entropy density.

A cosmologist observing his self-inclusive universe -- I believe this could be modeled by your staircase algorithm, chiro, where observation cycles to the event horizon and back, and as speculated by early relativists, observers could see themselves gravitationally imaged about the circumference.

Anentropy -- I think that entropy depends not only on the states of a configuration, but also on the network of interconnections (entanglement) between states. "Anentropic" by nature of retrospection, this latter "pattern memory" potentially surpasses entropy's information exponentially in magnitude.

Reciprocity of entropy -- In practice, the inequality in the second "law" of thermodynamics may be the crux of the argument against it being a true law. This law may be violated for a nonisolated system. But might it also not hold in general time-like spacetime?
 
  • #31
I'm going to give my thoughts on a topic by topic basis since there is a lot in this post. Again, these are just my opinions and I welcome any feedback you may have whether it's mathematical or just in the non-technical spoken manner which I will prefer to use in these posts.

Loren Booda said:
Blackbodies at temperature T -- http://en.wikipedia.org/wiki/Black_body A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence.

A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic radiation called black-body radiation. The radiation is emitted according to Planck's law, meaning that it has a spectrum that is determined by the temperature alone, not by the body's shape or composition.

A black body in thermal equilibrium has two notable properties:

It is an ideal emitter: it emits as much or more energy at every frequency than any other body at the same temperature.
It is a diffuse emitter: the energy is radiated isotropically, independent of direction.

An approximate realization of a black body is a hole in the wall of a large enclosure. Any light entering the hole is reflected indefinitely or absorbed inside and is unlikely to re-emerge, making the hole a nearly perfect absorber. The radiation confined in such an enclosure may or may not be in thermal equilibrium, depending upon the nature of the walls and the other contents of the enclosure.

I just want to talk about something before I get into the main response:

So far it seems that the current idea is that every known force has a mechanism represented by particles in the Standard Model, and that some people are still looking for a similar mechanism for gravity, which they call the 'graviton'. In other words, for forces to act there is a physical exchange of these 'carriers' with other particles, which initiates a force and thus changes the properties of a physical system or a particle.

Now with regard to some kind of localness, this makes intuitive sense for analyzing physical changes (which include subsequent changes in the physical states that quantify energy characteristics), because it gets rid of the thing that Einstein referred to as 'spooky action at a distance' -- something that would be hard for most scientists to grasp if it did exist, since the world is viewed in terms of local spatio-temporal changes, in the way that we use derivatives in calculus to represent local properties of a function.

I've diverted a bit from the question so I'll get back on track, but I stress that it is important to consider that if anything has a hint of being non-local, or just plainly is, then new analyses are needed. I have said it above but I think it's important to reiterate.

Now let's think about this in terms of entropy for the black-body.

We know that entropy relies on not only the nature (shape) of the distribution itself, but also the number of states and I wish to talk about this now.

If the number of states is indeed finite, then any associated relative entropy of that system will also be finite. The question then remains, how do we identify the states if they are finite?

The evaluation of the states is something that is probably the most important part of understanding physical laws because not only does it give predictive power, but it also allows a better understanding.

The methods that are currently used include different forms of quantization. The quantization schemes differ from theory to theory, but the idea is the same: there are not going to be an uncountable number of states within some finite representation.

We might for example take the idea of quantizing space-time in a variety of ways, and this is something that is being worked on. The quantization might say, for example, that all physical elements can only occupy certain states individually, like a lattice. Another theory might argue that only specific 'combinations' can exist for something to be called a state. This would be analogous to phenomena found in the Standard Model, say the requirement for quark configurations in various atomic particles. It might also be even more complex: a non-local and more intricate version of the quark phenomenon.

The point of the above is that once we can show one way or another that for some finite region (might be everything contained within a space-time boundary or even a subset) always has a bounded entropy for all relative joint distributions, then you know that there is a quantization of states and that the relative entropies will give 'hints' about what the quantization scheme actually is depending on the nature of the conditional distributions and the complex of those distributions.

So with the above said, even for something like a black-body that has those properties, if there really exists a proper quantization within some finite region of some sort, then any kind of entropy in this space will always be bounded even for something like a black-body.

Remember I'm talking about the state-space of the system.
 
  • #32
Loren Booda said:
Anthropic entropic principle -- I hypothesize that, rather than observers (life) being where entropy density is high, they exist where it is low. The act of observation itself could rely on semicoherent radiative interaction, and so tend participants toward lower entropy density.

I think this question has more subtleties than you might realize.

The thing about observation is that it is not an isolated incident even when you only consider observations of one particular instrument.

The thing about observation is that it is not a single-event phenomenon but a multi-event phenomenon. Observations are not isolated: they rely on other observations as well.

If you expected something to get more ordered with respect to some ordered set of observations, then the entropy would in this context decrease along that particular sequence of observations. Mathematically, if our ordered set of observations is {S1,S2,S3,...} = S, then P(S|U) represents the distribution, and we would expect a resultant entropy decrease for a respective measure of order in this context.

I know this might seem like a copout, but correlation does not imply causation. Intuitively though, it would seem that entities would have some kind of impetus to minimize various conditional entropy measures so as to create order, rather than attempt to increase entropy to create more chaos.
 
  • #33
Loren Booda said:
Reciprocity of entropy -- In practice, the inequality in the second "law" of thermodynamics may be the crux of the argument against it being a true law. This law may be violated for a nonisolated system. But might it also not hold in general time-like spacetime?

There has been at least one experiment that I know of (at ANU) that has shown that this doesn't hold as a law set in stone (only for a fractional amount of time, but again it proves the point).

The response I have for this is the same as what I said above: if all measures of entropy were increasing then we would expect systems to get more chaotic, not less chaotic. I'm not saying that different entropy measures will always violate the 2nd law, but what I'm saying is that the idea of continually increasing chaos is not what we experience.

We can talk about plates breaking and all these kinds of things that support it, but again there is a huge amount of order in our universe in so many ways and this tells me that not everything gets more chaotic and some things get a hell of a lot more ordered.

Following this thread I'm inclined to go review current theories and mathematical constraints for the various theories later on, but for now I can say that as a whole, I do not know at this time enough about the constraints to give a qualitative and specific answer.
 
  • #34
Loren Booda said:
Anentropy -- I think that entropy depends not only on the states of a configuration, but also on the network of interconnections (entanglement) between states. "Anentropic" by nature of retrospection, this latter "pattern memory" potentially surpasses entropy's information exponentially in magnitude.

This is precisely what conditional distributions describe. In fact a pattern is simply some kind of transformation of the state-space in which the entropy is minimized. If the entropy of that transformed state-space conditional distribution is zero, then that is a 'pattern'.
 
  • #35
Loren Booda said:
A vacuum of virtual particles -- http://en.wikipedia.org/wiki/Vacuum_state In quantum field theory, the vacuum state (also called the vacuum) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.

According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space", and again: "it is a mistake to think of any physical vacuum as some absolutely empty void." According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.

The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles".

It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle:

ΔE Δt ≥ ħ,

(with ΔE and Δt energy and time variations, and ħ the Planck constant divided by 2π) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.

Just as a general observation, I really couldn't imagine a situation where you would be able to completely nullify the energy of a system, no matter what the magnitude or the region of space.

As a thought experiment, imagine you could nullify the energy in some region. Everything in that region would then be completely static, with no possibility of any kind of dynamic behaviour.

With regard to virtual particles being used to 'borrow' energy, again in terms of state-space I would consider this part of the system and not something isolated from it. The fact that it exists, or at least that the mechanism exists in some form, means it should be included in whatever way is appropriate.

The big thing at least in my mind is this: how does the quantization of energy (as given by E = hf) relate to the quantization of the medium used to represent it? Moreover, how does all of this affect the supremum of the entropy measures in some finite space that I was talking about earlier?

Personally I think the nature and quantities related with the lowest states tell us a lot about the nature of the system. I'm going to have to at some point take a closer look at these kinds of things: you've got me interested now damnit!
 
  • #36
Loren Booda said:
A cosmologist observing his self-inclusive universe -- I believe this could be modeled by your staircase algorithm, chiro, where observation cycles to the event horizon and back, and as speculated by early relativists, observers could see themselves gravitationally imaged about the circumference.

The staircase algorithm was just an example to show how you could use entropy measures to deduce an order or pattern of some sort; that was always the intention.

I've heard about the nature of cyclic structures in physics like cyclic time and so on, but I can't really comment on the specifics.
 
  • #37
Cosmological entropy -- The blackbody spectrum is accurate to the finite number of radiating bodies which compose it. Heat exchange toward equilibrium moves the measured cosmic background radiation emissions to the perfect thermal curve, driving an increase in surrounding order. This balance avoids the "heat death" of the universe by limiting the blackbody radiation to countable radiators -- i.e. spacetime never realizes a maximally symmetric, boundless and randomized state approaching infinite entropy, but one which exhibits gains of statistical anentropy.

Microscopic entropy -- Vacuum mass-energy is paradoxically empowered by the action of observations - from the Copenhagen interpretation, I believe. (Without observers, would Maxwell's Demon work?) The likelihood of population by virtual quanta increases with more constant entropy density, assured by a random thermal distribution. Entropy density bounds are determined by their divergence there from the blackbody spectrum as ω/2π approaches 0 or ∞. You brought up that quantum energy being zero in space-time does not intuit comfortably, as I put it:

ΔE: E≠0.

Is this reminiscent of quantum number rules?
 
  • #38
One thing I want to comment on in general, since it may help you understand why I am spending so many posts talking about this topic.

Scientists study nature in the hope of understanding something at some level, which at a minimum usually means figuring out how 'something works'.

If a scientist figures out some particular thing, they have found an order in that context. The more broadly the pattern applies, the larger the order. It might be a small order, like figuring out that a particular cell or virus always acts the same way, or a large order, like describing the general or approximate conditions that gravity or electromagnetism follows. Both are types of orders, but the more general one applies to a much broader state-space than the former.

When you keep this in mind, it becomes a lot more obvious that statistical methods are necessary, because they can see things that a purely local deterministic analysis would not. Unsurprisingly, in many contexts they do just this: look at how they are applied in data mining, or in analyzing data from the Large Hadron Collider and other highly data-intense scenarios in astrophysics or even military applications (think of all the people who use the really powerful supercomputers, and then find out why they use them).

If you lose sight of this aspect, then you have constrained yourself to representations that give a very narrow local viewpoint. That viewpoint is still a very important one, but if you cannot free your mind from this mental prison, you will miss all the other things out there and fail to connect the isolated orders that have already been discovered (all the other physical formulas and so on), treating them largely as separate instead of as connected.
 
  • #39
Loren Booda said:
Cosmological entropy -- The blackbody spectrum is accurate to the finite number of radiating bodies which compose it. Heat exchange toward equilibrium moves the measured cosmic background radiation emissions to the perfect thermal curve, driving an increase in surrounding order. This balance avoids the "heat death" of the universe by limiting the blackbody radiation to countable radiators -- i.e. spacetime never realizes a maximally symmetric, boundless and randomized state approaching infinite entropy, but one which exhibits gains of statistical anentropy.

Microscopic entropy -- Vacuum mass-energy is paradoxically empowered by the action of observations - from the Copenhagen interpretation, I believe. (Without observers, would Maxwell's Demon work?) The likelihood of population by virtual quanta increases with more constant entropy density, assured by a random thermal distribution. Entropy density bounds are determined by their divergence there from the blackbody spectrum as ω/2π approaches 0 or ∞. You brought up that quantum energy being zero in space-time does not intuit comfortably, as I put it:

ΔE: E≠0.

Is this reminiscent of quantum number rules?

Taking a look at this:

http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch6/quantum.html#quantum

It seems to indicate that the electron requires a definite amount of energy, in accordance with some mathematical constraints, to go between orbitals, and that it can never go below the orbital corresponding to n = 1. I have to point out that I am someone with mathematical training who has not studied physics enough to put a lot of this into specific context.

From this fact it seems there is indeed a non-zero minimum energy level, and in the context of what you are saying I am inclined to agree, provided this quantum number accurately reflects the magnitude of the energy present.
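
To put one standard number on this (the textbook Bohr-model formula E_n = -13.6 eV / n^2 for hydrogen, not anything taken from the Purdue page itself), a quick check shows the n = 1 level really is a hard floor and every allowed jump costs a definite quantum:

Code (Python):

# Bohr-model hydrogen levels: E_n = -13.6 eV / n^2 (standard textbook formula).
RYDBERG_EV = 13.6

def energy_ev(n):
    """Energy of the n-th orbital in electronvolts; only integers n >= 1 are allowed."""
    return -RYDBERG_EV / n ** 2

for n in (1, 2, 3, 4):
    print(n, energy_ev(n))          # -13.6, -3.4, -1.51..., -0.85
print(energy_ev(2) - energy_ev(1))  # 10.2 eV needed for the first jump up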

For the cosmological part, again I am going to base my agreement on the line that if there were infinite entropy, there would be absolute chaos. Chaos in this kind of context is not good for anything, especially life, because for many things to function, orders of different kinds must be present. Imagine, for example, if gravity were just an ad hoc thing that decided when to be +9.8 m/s^2 and when to be -1000 m/s^2. Think about what this would do to life: IMO it wouldn't exist.

There is also an argument by physicists that if the constant G were outside a very narrow range, then life, including us, would cease to exist. I don't know if it's true or not, but that kind of argument has important implications for science, because it brings up how order is essential not only for humans, or for all plant and animal life, but possibly even for the universe as we see it to exist at all.

It's not to say, however, that there do not exist finite subsystems with maximal or close to maximal entropy for that subsystem. High levels of entropy in given situations are important, IMO, because high entropy induces disorder, which in a statistical sense equates to non-determinism or randomness. That element of randomness gives us the antithesis of a 'Newtonian Universe' in which a universal clock and absolute rules dictate the complete evolution of the system. If that were the case, we would be able to exhaust every possibility down to some conditional order, and we would get a minimal entropy characteristic for the system, just like the staircase example I posted earlier, only perhaps in a more complicated form.

So, again, the reason I agree with you about having bounded entropy as a general property of all possible conditional distributions, while still allowing situations where entropy is maximal with respect to some sub-space, is that it lets things still work (like life) while also allowing 'real' evolution and, for lack of a better word, 'choice' at any level and scale, given appropriate constraints (which we as human beings are in the process of trying to figure out).

If the above doesn't make sense to you, imagine the broken-plate scenario happening with gravity, electromagnetism, the strong force, or even something more macroscopic like biological interactions. Imagine for an instant that people were splitting in half randomly, and people's heads were disappearing into outer space and back again like a game of Russian roulette. Imagine picking up a pistol, unloading the chamber, firing it, and having a bullet come out anyway.

To me, this is the reason there are constraints, and understanding what these constraints are will probably give us humongous hints about why we even exist.
 
  • #40
Parallel Universes, Max Tegmark -- http://space.mit.edu/home/tegmark/PDF/multiverse_sciam.pdf. What is not physically possible in an infinite universe? Can a finite universe have infinite possibilities? Do universal event horizons repeat without bound?

Are observers physically immortal?

A truly unified theory might transform the existing order in maximal ways, including entropy/anentropy reversal.

Thermal disequilibrium moves toward equilibrium by absorbed or emitted correspondent photons, with a decrease in entropy.

What is the most ordered universal structure possible? Is an empty universe interpretable as having both maximum and minimum entropy density? Can a maximally entropic universe have the same "complexity" as one of minimum entropy? Does an observer always impose order upon a more random universe? Can two or more disordered universes interfering together (e.g. through branes) reduce entropy overall?

Entropy, being scale dependent, sees an object like the Moon as being more ordered on many levels relative to the Earth.

Probability zero regions, found near atomic orbitals, are located in singular spacetime structures but quantum mechanically can be considered P>1, as they can not accommodate finite particles.

The cosmic background radiation -- containing the microwave background radiation -- includes photons, gravitons, WIMPS (like neutrinos) and perhaps Higgs particles which impinge anentropically (focused) from the event horizon upon an observer. The accelerating cosmos, with possible inflation, linear expansion, and dark energy provide an outward entropic divergence of energy.
 
  • #41
Loren Booda said:
Are observers physically immortal?

This is an interesting question.

Frank Tipler has written a book trying to flesh out ideas about the physics of immortality. Just in case you are wondering, he has written pretty extensively about topics in General Relativity, and even, to some extent, about time travel with respect to space-times that allow theoretical paths for it.

But if I wanted to make this specific, I would ask this important question: what energy is involved in consciousness, what kind is it, where is it stored (in some kind of field, for example), and how can it be transformed?

In my view, answering those questions will give a specific way to start thinking about this question in depth, from a viewpoint that I think both scientific and religious communities can appreciate and agree on as a basis for exploring this topic further.

Personally (IMO disclaimer), I think there is some other field, beyond the known ones like EM, the nuclear forces, and gravity, that contains something comprising what we call 'consciousness'.

I am not saying that things like EM and the other forces don't play a role in how we behave, what we think, and so on, but I don't think that it is the whole story.

With the above aside, in terms of immortality: if the energy that makes up consciousness cannot be destroyed, and also cannot be transformed into something that loses or wipes the information about conscious awareness, then I would say that yes, on that argument, physical observers are indeed immortal.

But to argue the above, you first have to define what consciousness actually is in terms of energy, and what kinds of energy forms are involved. Unfortunately, I have a feeling it's going to take a while even to get close to defining the specifics, let alone doing an experiment or having discussions about whether the claim is wrong, right, or somewhere in between.

Parallel Universes, Max Tegmark -- http://space.mit.edu/home/tegmark/PDF/multiverse_sciam.pdf. What is not physically possible in an infinite universe? Can a finite universe have infinite possibilities? Do universal event horizons repeat without bound?

In terms of the infinite possibilities question, again this comes down to the discussion we had before about whether you can always construct a joint distribution whose conditional distributions on 'prior' events all have maximal entropy. If this is always the case, then you should have infinite possibilities.

Also remember that the above is framed in terms of a finite state space. Think of it as constructing a process where, no matter how you construct the conditional distribution for the next roll given any permutation of the previous rolls, every such distribution has maximal entropy. This means you can construct a completely random system. If you can't do that, but can achieve something between minimal and maximal entropy, the system is semi-random. If you can only construct a zero-entropy distribution, your system has become deterministic.
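
Here is a small sketch of that construction (my own, in Python): one die-rolling process with three different choices of conditional distribution for the next roll given the history -- maximal entropy (completely random), something in between (semi-random), and zero entropy (deterministic).

Code (Python):

import random

def next_roll(history, mode):
    """Draw the next face from {0,...,5} given the full history of rolls."""
    if mode == "random" or not history:
        return random.randrange(6)        # uniform conditional: maximal entropy
    if mode == "semi":                    # biased conditional: entropy between 0 and max
        return history[-1] if random.random() < 0.5 else random.randrange(6)
    return (history[-1] + 1) % 6          # zero-entropy conditional: a deterministic clock

for mode in ("random", "semi", "deterministic"):
    rolls = []
    for _ in range(12):
        rolls.append(next_roll(rolls, mode))
    print(mode, rolls)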

For the infinite universe question (what is not possible in an infinite universe), this will have to do with not only physical arguments but with philosophical arguments as well.

You see, the fact that plates just don't assemble themselves from broken states, that gravity acts in a uniform way, and that quantum behaviour and all other physical interaction mechanisms work the way they do says to me that there is a reason you can't just do 'anything you want', at least not currently.

Again, my thought experiment is to consider what would happen if people just randomly dematerialized and gravity just decided when it wanted to 'work' and 'not work', and the kind of chaos that would create for life in general. This tells me there is a reason for the constraints, at least if you want an environment that supports and promotes living organisms in any form.

In terms of possibilities, these can be mapped out once you have a clearer idea of the nature of the different joint distributions. The big caveat, though, is that we don't have these yet. Science is still very young for earthlings, and the amount of data we have, along with the tools to effectively analyze it, is not mature enough to really make all of these connections.

It's not just having the data: it's also having the computational hardware, the algorithms, and the mathematical techniques to actually use it. These areas are evolving quite rapidly, but it will be a while at least before we reach a stage where we can give a more specific, quantifiable answer to 'what's really possible'.

For now we have to rely on experimental results, theoretical ideas and discussions, and the inquisitiveness of scientists to push this boundary, and thankfully this is happening on a scale that probably would never have been imagined even a hundred years ago.

A truly unified theory might transform the existing order in maximal ways, including entropy/anentropy reversal.

The ironic thing about humans is that we crave certainty.

While I don't think this is necessarily a bad thing, its effect from a scientific perspective is that we want as much certainty as possible, both in predictive power and in the mathematical representations used to describe and predict things.

Quantum mechanics has come along and destroyed this notion, and I think we should embrace that, at least the idea that at some level things will not be predictable.

Here is one idea I have about why this kind of thing is good.

Consider that you have the complete set of laws that allow you to take the state of the complete system and engineer it in such a way that you can create whatever state you want at a future point in time.

Now consider what the above would do to the stability of the system. It creates situations where the stability of the system itself can be, for lack of a better word, destroyed.

If situations like this exist, things could just literally blow up, and the evolution of the system would be essentially jeopardized.

To avoid this very scenario, you would need some kind of non-zero minimal entropy for all conditional permutations, which means resorting to a statistical theory of reality rather than a deterministic one.

A situation where levels of stability in different contexts are 'guaranteed', or at least probabilistically likely enough to warrant confidence, would amount to a kind of collective design in which this kind of thing either does not happen, or happens with a probability tiny enough to be managed.

In fact, if things had some kind of entanglement, that mechanism could be used to ensure the stability of the entire system and to localize any instabilities that do occur, so that the system as a whole doesn't, for lack of a better word, 'blow up'.

The real question, then, if the above has any merit, is how you balance this kind of stability against the system, both locally and globally, having the ability to evolve itself in a way that is fair.

Thermal disequilibrium moves toward equilibrium by absorbed or emitted correspondent photons, with a decrease in entropy.

I don't know the specifics, but in the context of what I've been saying in this thread it would not be good for system stability to move towards a state of either maximal entropy or complete minimal entropy for reasons discussed above.

What is the most ordered universal structure possible? Is an empty universe interpretable as having both maximum and minimum entropy density? Can a maximally entropic universe have the same "complexity" as one of minimum entropy? Does an observer always impose order upon a more random universe? Can two or more disordered universes interfering together (e.g. through branes) reduce entropy overall?

To me, the situation where you have the most ordered universe is where all conscious forms work together in a way that doesn't create instability.

Some might see this as a religious theme or some kind of 'new age' comment, but an ordered system would look more like something that works in unison for each and every element rather than having elements working against one another.

If I had to characterize it, I would characterize it as every conscious form working with every other to create a scenario where each supplements the rest, so that energy is directed toward the whole working together. That results in a kind of unification of all conscious beings: everything becomes a unified system, which in terms of information means it can be described as such, and that results in a decrease of entropy.

Remember entropy in this context is synonymous with not only order but also with the amount of information to describe something.

Remember that if a collective system reaches some set of unified goals or constraints, then instead of having many separate sets of constraints describing it, those constraints merge, and less information is required to describe the system. This lessening of descriptive information translates into a reduction of entropy, including the overall measures for all conditional entropies.

To me, the observer has the choice to either decrease or increase the entropies that contribute to the system as a whole, but I would estimate that for a collective system to evolve in a positive manner, you would always want it at the very least to decrease its entropy over its evolution within any sub-region, and collectively to find some kind of order for the whole that reduces its entropy from a previous state.

In terms of what that actual order is, I can't say, but I imagine there are many different kinds of orders that could be formed, just as there are many different functions that can be described once you have a dictionary and language structure minimal enough to describe a complicated system in a minimal form.

If this sounds like BS or foreign to you, note that these ideas are a huge part of information theory, including the area known as algorithmic information theory. If you want more information, look up Kolmogorov complexity: it hasn't been pinned down in terms of algorithmic methods, but the idea itself has been clarified to some extent.
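
A crude but runnable way to see the idea (my own sketch; compressed size is only an upper-bound proxy for Kolmogorov complexity, which cannot be computed exactly): a patterned byte string admits a short description, while random bytes do not.

Code (Python):

import os
import zlib

ordered = b"0123456789" * 1000   # 10,000 highly patterned bytes
chaotic = os.urandom(10000)      # 10,000 random bytes with no exploitable order

# zlib's compressed length upper-bounds the shortest description it can find.
print(len(zlib.compress(ordered, 9)))  # a few dozen bytes: the pattern is the description
print(len(zlib.compress(chaotic, 9)))  # ~10,000 or more: no shorter description found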

Entropy, being scale dependent, sees an object like the Moon as being more ordered on many levels relative to the Earth.

A very good observation.

The thing is however, you need to define the order being used and this is really the heart of what makes language interesting.

The nature of the order could be to do with geometry and color variation. Describing a filled circle with a color spectrum that has little variation in one language is ordered.

But in another language it is not ordered. In another language something like the Mandelbrot set is highly ordered, but describing the moon in that language is highly disordered and requires a tonne of information.

This is why we have so many languages, jargons, structures, codings and so on. They all have a purpose in a given context. One language will represent something with minimal information, but when you convert it to another, it can take a ridiculous amount of information to represent the same thing.

The question then becomes, how do we create languages in the best way possible? This is not an easy question and it is something that we are doing both consciously and unconsciously every single day.

The ultimate thing is that there are many different orders, not just one, which makes it very interesting: we as scientists want to find 'the universal order', but my guess is that there are many orders that are just as valid as any other at the scope they are presented at (i.e. the actual state space these orders correspond to: think in terms of the cardinality of the set).

Probability zero regions, found near atomic orbitals, are located in singular spacetime structures but quantum mechanically can be considered P>1, as they can not accommodate finite particles.

I don't know what this means, can you give me a link to a page that describes this?

The cosmic background radiation -- containing the microwave background radiation -- includes photons, gravitons, WIMPS (like neutrinos) and perhaps Higgs particles which impinge anentropically (focused) from the event horizon upon an observer. The accelerating cosmos, with possible inflation, linear expansion, and dark energy provide an outward entropic divergence of energy.

Can you point somewhere where this is described mathematically (and possibly in summary in english)? I'm for most purposes a mathematician/statistician and not a physicist.
 
  • #42
By the way I haven't read the article for multiverses so I'll read that shortly.
 
  • #43
The (quantum) wavefunction condition ψ(x)=0 holds continuously only when it is everywhere continuous.

Hypothesis: at a given x, the probability P(x)=ψ*ψ (assumed continuous and smooth) of locating a singular particle is assumed zero at the singular point ψ(x)=0. So ψmin(x0)=0 implies (dψ/dx)min(x0)=0, unless ψ=0 for all x.

__________

If ψmin(x) = A exp(2πi(xp/h)) = A(cos(2π(xp/h)) + i sin(2π(xp/h))) = 0 at x = x0

Eigenvalues: x = (N + 1/2)(h/2p)

and (dψ/dx)min = (2πp/h) A(-sin(2π(xp/h)) + i cos(2π(xp/h))) = 0

Eigenvalues: x = N(N + 1/2)(h/2p)^2

__________

P=probability=ψ*ψ

x=spatial dimension

A=constant

N=integer

h=Planck's constant

p=momentum

Conclusion: if ψmin(x0)=0, its first derivative marks a singular local maximum or minimum there, but its neighboring points do not, unless ψ(x)=0 for all x.
 
  • #44
Geez Loren Booda, you're really stretching me! I love it! :) I'll give an answer shortly.
 
  • #45
Loren Booda said:
The (quantum) wavefunction condition ψ(x)=0 holds continuously only when it is everywhere continuous.

Hypothesis: at a given x, the probability P(x)=ψ*ψ (assumed continuous and smooth) of locating a singular particle is assumed zero at the singular point ψ(x)=0. So ψmin(x0)=0 implies (dψ/dx)min(x0)=0, unless ψ=0 for all x.

__________

If ψmin(x) = A exp(2πi(xp/h)) = A(cos(2π(xp/h)) + i sin(2π(xp/h))) = 0 at x = x0

Eigenvalues: x = (N + 1/2)(h/2p)

and (dψ/dx)min = (2πp/h) A(-sin(2π(xp/h)) + i cos(2π(xp/h))) = 0

Eigenvalues: x = N(N + 1/2)(h/2p)^2

__________

P=probability=ψ*ψ

x=spatial dimension

A=constant

N=integer

h=Planck's constant

p=momentum

Conclusion: if ψmin(x0)=0, its first derivative marks a singular local maximum or minimum there, but its neighboring points do not, unless ψ(x)=0 for all x.

The thing, though, is that with physics the discussion is about what to do with regard to having one theory in continuous space (General Relativity) and another in discrete space (Quantum Field Theory).

Now, I've been reading a little bit about this lately, and one approach being used is to 'quantize' GR, which basically gives you the field of Quantum Gravity.

This approach makes more sense in my mind than trying to make QFT continuous. My reason is that we already know that all of the interactions, and subsequently all the energy calculations, work in a quantized way, so at least to me it doesn't make sense for the embedding set that describes the space to be continuous either.

For the above, it's like taking a Diophantine system and then declaring the sets describing the domain and codomain to be the real numbers. This is completely unnecessary, because you know that for this kind of thing you will only ever deal with a finite number of states when you look at a finite subregion of the entire state-space for that particular process.

So, based on this line of reasoning (which may be right or wrong, and I'd love to hear your comments), the next thing to do is to find a quantization scheme for space-time, which is what many people are currently working on in many ways.

What this will do is essentially force the probability distribution to be non-continuous, but the real question lies in the way that it will be discontinuous.

See, the thing is that you can't just quantize the space the regular way you would quantize a 3D cartesian geometry, by quantizing each axis individually. The problem with doing that is that not only are we dealing with non-euclidean space-times, but we are also dealing with quite a number of interactions that will ultimately define the actual quantization procedure of space-time itself.

Personally, one way I would approach this quantization is from a number-theoretic viewpoint. If a quantization scheme had to exist for a completely quantized system, then the solutions of the Diophantine equations specifying that system would have to make sense, in that all the solutions that are meant to exist, corresponding to results in this physical reality, actually do exist, and, just as importantly, all the results that do not exist in reality also do not exist in the Diophantine system.

So if you were to go this route, the first thing would be to think about ways of expressing a Diophantine form of the system (it will have probabilistic properties, of course) and then, through the probabilistic description of that Diophantine system, generate some useful probability definitions for a specific part of it, like a particle such as an electron.

One of the tricks to model the kind of behaviour you find in Diophantine systems within a continuous system is to use the Dirac delta function. This 'infinite spike' allows you to model the behaviour of a finite field when you are dealing with a continuous state-space. When your natural space is discrete, this isn't needed: you get all the discrete behaviours directly by using something like a Diophantine system to model the process (and, importantly, it can be made probabilistic).
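
To illustrate the contrast (my own sketch in Python, with narrow Gaussians standing in for the Dirac spikes): a natively discrete description assigns the probability masses directly, while the continuous embedding has to smear each mass into a spike and integrate to get it back.

Code (Python):

import numpy as np

# Discrete description: the pmf of one fair die, no delta machinery required.
pmf = {k: 1 / 6 for k in range(1, 7)}

# Continuous embedding: each point mass becomes a narrow Gaussian standing in
# for a Dirac delta; probability only comes back by integrating the density.
x = np.linspace(0.0, 7.0, 70001)
dx = x[1] - x[0]
eps = 0.01  # spike width; the true delta is the eps -> 0 limit
density = sum(p * np.exp(-(x - k) ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
              for k, p in pmf.items())

mask = (x > 2.5) & (x < 3.5)
print(pmf[3])                    # exactly 1/6, read off directly
print(density[mask].sum() * dx)  # ~1/6, recovered only via integration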

So my question to you is: will you continue to work in a continuous framework, meaning you have to deal with all these issues of Dirac delta spikes, discontinuities of every sort, and their consequences, or are you willing to go the other way, assume a completely discrete framework, and as a result use number theory (and its probabilistic variant) to do physics instead?
 
  • #46
Special relativity imposes a relative speed limit of light speed c. General relativity, Georges Lemaître posited, has no relative speed limit for the universe. Particle horizons proceed toward us from a theoretical big bang in reverse order of their creation. The singular big bang, relative to us, may actually stretch across the celestial sphere. The distance of the singularity from us could well determine our physical universe. Whether the big bang is now out to infinity or at a finite horizon has affected particle creation, the evolution of forces, physical constants and the (local) geometry of our spacetime.

Think of cars accelerating from a stop. The cars behave much like galaxies moving according to the Hubble distance expansion, approximately r=c/H0, where r is the relative distance a galaxy is from us, c the speed of light and H0 the Hubble constant, about 70 (km/s)/Mpc. (That is, kilometers per second per megaparsec.) The farther one travels outward, the faster one expands relative to home base. If the law holds, eventually the traveler reaches the event horizon, where, like a black hole, Earth-light does not have the energy to continue (but there the traveler might find himself in a sea of Hawking radiation thanks to his investment).

Close to home we observe some rotational, then somewhat peculiar (random) expansion of the galaxies, farther on the moderate "Hubble law" escape, then the many named accelerative outward expansion, first found by supernova measurements. While our universe rushes away from us (and does so wherever we happen to be) the big bang remnant, singular as ever, has rained particles (albeit diminished) upon us. The microwave background is one remnant -- recombination of electrons and protons to create hydrogen. This happens in the lab at 3000K, which when divided by 2.7K, just happens to yield the redshift (Z≈1000) of the MBR.
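
For anyone wanting to check the arithmetic here, a rough calculation in Python (rough numbers only, taking the quoted values of c, H0 and the two temperatures at face value):

Code (Python):

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, (km/s)/Mpc
LY_PER_MPC = 3.262e6  # light-years per megaparsec

r = C_KM_S / H0                # distance at which Hubble-law recession reaches c
print(r)                       # ~4283 Mpc
print(r * LY_PER_MPC / 1e9)    # ~14.0 billion light-years

# Recombination at ~3000 K seen today at ~2.7 K: 1 + Z = 3000 / 2.7
print(3000 / 2.7)              # ~1111, consistent with the Z of order 1000 quoted above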

The question remains, how does the ultimate outward cosmic background radiation (CBR, not just from microwave horns) correspond to the inner one of particle accelerators? When we look to the sky we see a rain of photons, when we look to the ground we feel the pull of gravitons. What might be interesting to measure is the entropy of the outer flow against that of the inner. Pointing our telescopes farther unravels earliest times; nearer do our microscopes enable uncertainty. We learn that out of high energy condense the quanta of fundamental forces.
 
  • #47
Loren Booda said:
Special relativity imposes a relative speed limit of light speed c. General relativity, Georges Lemaître posited, has no relative speed limit for the universe. Particle horizons proceed toward us from a theoretical big bang in reverse order of their creation. The singular big bang, relative to us, may actually stretch across the celestial sphere. The distance of the singularity from us could well determine our physical universe. Whether the big bang is now out to infinity or at a finite horizon has affected particle creation, the evolution of forces, physical constants and the (local) geometry of our spacetime.

Think of cars accelerating from a stop. The cars behave much like galaxies moving according to the Hubble distance expansion, approximately r=c/H0, where r is the relative distance a galaxy is from us, c the speed of light and H0 the Hubble constant, about 70 (km/s)/Mpc. (That is, kilometers per second per megaparsec.) The farther one travels outward, the faster one expands relative to home base. If the law holds, eventually the traveler reaches the event horizon, where, like a black hole, Earth-light does not have the energy to continue (but there the traveler might find himself in a sea of Hawking radiation thanks to his investment).

Close to home we observe some rotational, then somewhat peculiar (random) expansion of the galaxies, farther on the moderate "Hubble law" escape, then the many named accelerative outward expansion, first found by supernova measurements. While our universe rushes away from us (and does so wherever we happen to be) the big bang remnant, singular as ever, has rained particles (albeit diminished) upon us. The microwave background is one remnant -- recombination of electrons and protons to create hydrogen. This happens in the lab at 3000K, which when divided by 2.7K, just happens to yield the redshift (Z≈1000) of the MBR.

The question remains, how does the ultimate outward cosmic background radiation (CBR, not just from microwave horns) correspond to the inner one of particle accelerators? When we look to the sky we see a rain of photons, when we look to the ground we feel the pull of gravitons. What might be interesting to measure is the entropy of the outer flow against that of the inner. Pointing our telescopes farther unravels earliest times; nearer do our microscopes enable uncertainty. We learn that out of high energy condense the quanta of fundamental forces.

Can you give me some specific equations to look at?

Again I am not a physicist, but I do know a little bit about mathematics.

One thing that is interesting is that there is an idea that the universe is actually holographic. Now if this is the case structurally (like the interference pattern you get when you look at a real holographic film itself), then this has huge consequences for entropy.

For a hologram to retain its structural integrity (in terms of the actual information it represents), there must basically be a form of global entanglement. The effects on entropy are very big, since if we are able to reduce some or all of the information in some finite sub-region of our state-space, changes will propagate through the entire system in both microscopic and macroscopic ways.

Now again, I have to point out that I am not a physicist: you will have to give me equations and, if possible, a bit of extra context behind your question to give me some physical intuition.

Also, if the holographic nature exists in a kind of 'space-time' manner, the entanglement is not confined to things at one 'slice' of time but extends across space-time as a whole. The effects of this kind of entanglement, if it existed, would show up not only in entropy calculations; if it had the properties of a hologram information packet, you could also experimentally check whether the entropy pattern matches that of a hologram. This would be a nice physics experiment ;)

With regard to the evolution of forces, to put this into context of entropy, again you have to see where conditional entropies are minimized not only under the raw data, but also under transformations as well.

The thing is that if an order is being created (remember there can be many, many different orders in a highly complex system with many interactions going on), then what you would do is extract a significant order and make an inference about what is happening. You would want to extract orders that minimize entropy over a maximized state-space for the highest conditional order possible (by conditional order I mean with respect to a joint distribution that conditions on a higher number of initial states relative to the rest).

In terms of the evolution of not only the physical state itself in space-time but also the forces, again you have to see where the order is.

If you want to conjecture why a particular set of 'forces' has been chosen, then again relate these to the state-space in terms of the best orders that can be obtained. If it turns out that the orders vanish, or the system 'blows up' and becomes 'unstable' with respect to the existing orders extrapolated from the current system, then you have a way of describing contextually, when you interpret what the orders mean 'in english' from their mathematical counterparts, why the forces 'are what they are' versus 'what they could be'. This kind of thing would strengthen what you know as the 'Anthropic Principle' and other ideas similar to it.

For the Hubble stuff, it would be helpful to give some equations and if possible some extra context to what you are saying. Again I'm not a physicist.

Finally, with respect to your last statement, again I don't see things in terms of gravitons or other force carriers needed for physical intuition: I see things mathematically, in the most general non-local manner possible. For physical intuition this is not preferable, because physics is a very specific endeavor, rich in complexity at even the smallest scales, and for specificity and clarification it usually requires one to see things in a local context.

Now, the above might sound arrogant, but the reason I say this is that, with my background and experiences, I do not for whatever reason see things this way. I see things from a different perspective, which can be beneficial and not so beneficial, just as every perspective has its benefits and limitations.

It would be interesting to get your feedback on my responses as well, if you don't mind, just to get some relativity for my comments. :)
 
  • #48
Loren Booda said:
Is an infinite series of [nonrepeating] random numbers possible?

That is, can the term "random" apply to a [nonrepeating] infinite series?

It seems to me that Cantor's logic might not allow the operation of [nonrepeating] randomization on a number line approaching infinity.

Technically, no. Eventually, if it is truly infinite, after all the googolplexes of combinations of numbers, it will repeat. Randomness is only based on the time that you study it for. If you have 0.1256627773728172818918268162, that obviously doesn't repeat. But if you let it continue, it will repeat eventually.
 
  • #49
AntiPhysics said:
Technically, no. Eventually, if it is truly infinite, after all the googolplexes of combinations of numbers, it will repeat. Randomness is only based on the time that you study it for. If you have 0.1256627773728172818918268162, that obviously doesn't repeat. But if you let it continue, it will repeat eventually.

What about a number like the decimal expansion of pi?
 
  • #50
AntiPhysics said:
Technically, no. Eventually, if it is truly infinite, after all the googolplexes of combinations of numbers, it will repeat. Randomness is only based on the time that you study it for. If you have 0.1256627773728172818918268162, that obviously doesn't repeat. But if you let it continue, it will repeat eventually.

That is NOT true. Only rational numbers repeat eventually.
 