# The mathematics of awareness

Has anyone developed a mathematical theory of awareness? For example, if we restrict ourselves to sets, is there a theory/model of awareness? Is the empty set "aware" of the set {a,b,c}? It seems like there should be degrees to which one set is aware of another set; that the awareness should be measurable (perhaps not in the sense of measure theory, but quantifiable somehow).

Regarding Max Tegmark's big TOE, in which ME=PE (mathematical existence equals physical existence), the next step would be to apply the mathematics of one structure's awareness of another structure to one structure's self-awareness.

It just seems natural to develop awareness in general first and then apply that to one structure and itself.

## Answers and Replies

Perhaps this belongs in TD or Math but I thought some philosopher would like to step in and provide some insights on awareness.

I may have said this earlier, but let's restrict to sets for now.

Let A be some function of two variables which encapsulates the awareness the first input has of the second.

Given two sets X and Y, A(X,Y) can be the set of functions from X to Y having at least one fixed point.

(What I want to do is come up with something reasonable, find some properties on it, and then maybe drop what that something is in favor of the properties as axioms.)

Then the complexity and/or cardinality of A(X,Y) is some rough measure of the awareness X has of Y.

Example 1. X={a,b}, Y={1,b}. The set of all functions from X to Y can be listed out:
{(a,1),(b,1)}, {(a,1),(b,b)}, {(a,b),(b,1)}, {(a,b),(b,b)}.

Then the set of functions having at least one fixed point consists of exactly those containing the pair (b,b): {{(a,1),(b,b)}, {(a,b),(b,b)}}. This is A(X,Y).

Note how this contains some information about the similarities of X and Y, namely the fixed point (and point of intersection) b, and some information about their differences, namely a and 1. The b increases the awareness X has for Y while the a & 1 detract from the awareness X has for Y.

Example 2. X={a,b}, Y={b}. The set of all functions from X to Y can be listed out:
{{(a,b),(b,b)}}. The function in this set has a fixed point, so A(X,Y)={{(a,b),(b,b)}}. Note that the set of all functions from X to Y equals A(X,Y).

Since A(X,Y) is a subset of the set of functions from X to Y, it is "biggest" when it is equal to the set of functions from X to Y. Perhaps when this happens, we can say that X is maximally aware of Y and when A(X,Y) is nonempty, we can say X is aware of Y.

A(R,R) is neither empty nor maximal (the identity fixes every point, while x → x+1 fixes none), which would imply that R is self-aware but does not have maximal self-awareness.

Hmm... I wonder how you could characterize the sets Z for which A(Z,Z) equals the set of functions from Z to Z.

Well this is just a shot in the dark. If you have another characterization of awareness, please do tell.

Hurkyl
Staff Emeritus
Gold Member
Well, the cardinality of your A(X, Y) can be computed knowing just |X|, |Y|, and |X & Y|. (& means intersection)

There are |Y|^|X| functions from X to Y.

To count the functions without a fixed point, note that anything in X-Y can be mapped to anything in Y, and each element of X&Y can be mapped to anything in Y except itself.

I.e., there are |Y|^(|X| - |X&Y|) * (|Y| - 1)^(|X & Y|) functions without a fixed point.

The number of functions with a fixed point is, of course, simply the difference. Clearly it can equal the total (i.e., every function has a fixed point) only if |Y| = 1.

(Hrm, there seem to be some assumptions in this argument -- check the boundary cases)

Now that that's out of the way, I really don't get what you're trying to model.
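Hurkyl's closed form above can be checked against a brute-force count. A Python sketch (function names my own) comparing the two on an arbitrary pair of finite sets:

```python
from itertools import product

def count_fixed_point_functions(X, Y):
    """Brute-force count of functions f: X -> Y with at least one fixed point."""
    Xs = sorted(X)
    return sum(
        any(x == y for x, y in zip(Xs, outs))
        for outs in product(Y, repeat=len(Xs))
    )

def formula(X, Y):
    """The closed form: |Y|^|X| - |Y|^(|X|-|X&Y|) * (|Y|-1)^|X&Y|."""
    nX, nY, nI = len(X), len(Y), len(X & Y)
    return nY**nX - nY**(nX - nI) * (nY - 1)**nI

X, Y = {1, 2, 3}, {1, 2, 7, 8}
assert count_fixed_point_functions(X, Y) == formula(X, Y)
print(formula(X, Y))  # 28: 64 total functions minus 36 without a fixed point
```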

Thanks.

there are |Y|^(|X| - |X&Y|) * (|Y| - 1)^(|X & Y|) functions without fixed point.

There are |Y|^|X| functions from X to Y, right? Then the cardinality of the set of functions with at least one fixed point is (|Y|^|X|)-(|Y|^(|X| - |X&Y|) * (|Y| - 1)^(|X & Y|))?

Now that that's out of the way, I really don't get what you're trying to model.

I really don't get it (awareness), either.

How would you define awareness mathematically? To me, it seems clear that it should at least be a binary relation; at minimum, the model should say whether X is aware of Y.

I don't even have the intuition of awareness down. Should we say that {1,2,3} is not aware at all of {a,b,c} whereas it is aware of R? Should it be a two-valued function that is either "yes" there is awareness or "no" there isn't, or should it be something that somehow measures the extent to which X is aware of Y?

I vote for {1,2,3} not being aware of {a,b,c} (at all) but that it is aware of {1,b,c} and even more aware of {1,2,c} and maximally aware of {1,2,3}. Using the formula above:
|A({1,2,3},{a,b,c})|=(27)-(3^(3) * (3 - 1)^(0))=(27)-(27)=0.

|A({1,2,3},{1,b,c})|=(3^3)-(3^(3 - 1) * (3 - 1)^(1))=27-18=9.

|A({1,2,3},{1,2,c})|=(27)-(3^(3 - 2) * (3 - 1)^(2)) = 27-12=15.

|A({1,2,3},{1,2,3})|=27-8=19.

So the awareness goes up as they share more in common. Hmm...
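The hand computations above can be reproduced from the closed form. A short Python loop (helper name my own):

```python
def a_card(X, Y):
    """Cardinality of A(X,Y) via the closed form derived above:
    |Y|^|X| - |Y|^(|X|-|X&Y|) * (|Y|-1)^|X&Y|."""
    nX, nY, nI = len(X), len(Y), len(X & Y)
    return nY**nX - nY**(nX - nI) * (nY - 1)**nI

X = {1, 2, 3}
for Y in [{'a', 'b', 'c'}, {1, 'b', 'c'}, {1, 2, 'c'}, {1, 2, 3}]:
    # print the overlap size and the resulting awareness cardinality
    print(len(X & Y), a_card(X, Y))
# prints: 0 0 / 1 9 / 2 15 / 3 19 -- awareness rises with the overlap
```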

I'm certainly open to alternatives.

matt grime
Homework Helper
You're the person who's decided to ask about awareness, so you need to decide what it means. It isn't a mathematical term, so we can't even begin to guess what you may mean. As it is, all you seem to be asking for is the cardinality of the intersection.

Since the brain is a nonlinear machine, and self-awareness is a form of self-reference (even awareness of other things is made with reference to the "self"), it appears that the awareness equations would be nonlinear?

[EXTERNAL INPUT]--->[MIND]<--->[INTERNAL INPUT]

matt grime
Homework Helper
Russell E. Rierson said:
Since the brain is a nonlinear machine,

would you care to justify or explain this in a mathematical sense?

and self-awareness is a form of self-reference (even awareness of other things is made with reference to the "self"), it appears that the awareness equations would be nonlinear?

[EXTERNAL INPUT]--->[MIND]<--->[INTERNAL INPUT]

you have equations of awareness? what on earth is mathematical awareness?

matt grime said:
would you care to justify or explain this in a mathematical sense?

[...]

you have equations of awareness? what on earth is mathematical awareness?

By understanding the self reference and self-reflection of formal systems, as exemplified by Goedel's theorem, mathematics can be used to model the reflexivity, associated with self, consciousness, subjectivity, and thus self-awareness.

arildno
Homework Helper
Gold Member
Dearly Missed
I have a faint hope that eventually, people will develop the attitude that there exist structures so complicated that one can't say anything intelligent about them (as yet..)..

matt grime
Homework Helper
Russell E. Rierson said:
By understanding the self reference and self-reflection of formal systems, as exemplified by Goedel's theorem, mathematics can be used to model the reflexivity, associated with self, consciousness, subjectivity, and thus self-awareness.

I'll take that as a 'no', then.

Staff Emeritus
Gold Member
Dearly Missed
matt grime said:
I'll take that as a 'no', then.

It is known from experiment that neurons respond nonlinearly; you should be able to look that up for yourself if you doubt it. Since they are linked together, they could be idealized as something like the Fermi-Pasta-Ulam (FPU) device, which is mappable into the nonlinear KdV equation on a lattice, and which has a rich variety of non-ergodic, nonlinear solutions including solitons.

matt grime
Homework Helper
I didn't say that no one has figured out nonlinear equations that model neural pathways, nor did I imply they ought to be linear if they exist (while applying for a job in a computer science lab I did some research on these things, though to me their fuzzy logic models seem like reinventing the wheel, or probability theory anyway, for people who can't do maths), nor did I say no one had done neural experiments exhibiting non-linear behaviour.

I asked if Russell would mind justifying his post since it contains a number of mathematically dubious statements not to mention undefined terms that scream "crank", and I'm more than a little sceptical following his posts on Fermat's Last Theorem.

arildno, I think mathematicians, or at least this mathematician, already think there are things that defy intelligent discussion.

Just as an example, consider the Julia set of a polynomial over C[x_1,x_2,...,x_n] where n is a googolplex, so there are a googolplex of variables, whose total degree is the highest number you can think of. I don't know, maybe this can be discussed intelligently, but you get the idea. There are already things that defy intelligent discussion.

I hope that awareness is not such a thing that defies intelligent discussion.

Still restricting ourselves to sets...

Do we want to say {1,2,3} has any awareness of {a,b,c}?
Do we want to say that {1,2,3} has awareness of {1,2,3}?

I figure that working with finite sets will be easier but I could be wrong.

Yes, matt, it has something to do with intersection, but it is more than that. IMO, awareness should contain information on the intersection and the symmetric difference.

For example, the awareness {1,2,3} has of {1,b,c} should be drastically different from the awareness (-oo,1] has of [1,oo). The intersection of both pairs of sets is {1} yet the extent to which they differ is "larger" for the second pair.

As an alternate to the fixed point definition, which is just a shot in the dark, how about for two sets X and Y, A(X,Y) is the ordered pair (X&Y, X$Y), where $ denotes symmetric difference. I don't know.

Whatever A(X,Y) is, I want there to be some kind of ordering so that A(X_1,Y_1) can be compared to A(X_2,Y_2) while also containing information about how much is in common and how much is different. The original definition does this. A(X,Y) was the subset of Y^X of functions with at least one fixed point. In order for there to be a fixed point, X&Y can't be empty. On the other hand, everything a function in A(X,Y) doesn't fix is potentially a point in the symmetric difference.

Perhaps I jumped the gun by actually writing down a definition. Perhaps the questions should be answered first:
1. Is {1,2,3} aware of {1,b,c}?
2. Is {1,2,3} more aware of {1,2,c} than of {1,b,c}?

3. Is {1,2,3} aware of N? of R?
4. If yes, "more" aware of N than of {1,2,c}?

5. Is {1,2,3} more/less/as aware of {1,2,3} than R is of R?

Once we answer the questions, perhaps we can develop axioms for A(X,Y) and then hunt for a definition of A(X,Y).

My answers would be:
1. yes
2. yes
3. yes and yes
4. more
5. less

I would also want to speculate that under some assumptions the Whitney embedding theorem implies that if the universe is an 11-dimensional manifold then it can be embedded in R^n where n=2(11)+1=23. Thus I would argue that if Max is correct, we will find human-style self-awareness structure in a subset of R^23. I hope that's not "overly" speculative, whatever is meant by "overly"...

phoenixthoth said:
arildno, I think mathematicians, or at least this mathematician, already think there are things that defy intelligent discussion.

[...]

I hope that's not "overly" speculative, whatever is meant by "overly"...

Can you give me a detailed explanation of truth?

LET'S PUT ALL YOUR MATHEMATICAL MINDS TO GOOD USE!

You have all displayed an impressive set of mathematical minds. Now, let's put them to good use. Since this project is about the mathematicisation of 'AWARENESS', let's design a set of 'PARAPLEXES'. A paraplex is simply a perfect part of any design, or system, or entity. It has the following fundamental characteristics:

1) It can only perform one function, and one function only. It cannot perform more than one function.

2) It is 'FUNCTION-CRITICAL'. This means that a paraplex is a part that, when removed from a system, brings the entire system to a halt. The system just stops working.

3) Because it is a perfect part of a whole, it is structurally and functionally indestructible. You cannot reverse-engineer it.

The standard theory is that:

A system made only of paraplexes is structurally and functionally perfect.

The question now is, can you guys formulate and formalise procedures for designing a paraplex, let alone a set of them? My argument is that you cannot axiomatise or mathematicise a mind in a perfect way without formulating and formalising paraplexorial procedures. Note that this is only possible if you are one of those who accept the notion of 'DIVISIBILITY OF THE MIND' or of a 'MULTI-PARTITE MIND'. Some versions of DUALISM hold not only that the MIND is 'NON-PHYSICAL' or 'IMMATERIAL' but also that the mind is 'INDIVISIBLE'. So, in axiomatising or mathematicising awareness, you will have to take all of this into account.

So, here are the key points.

* you cannot design a mind or awareness proper, let alone a 'PERFECT' one, without paraplexes;
* you cannot axiomatise or mathematicise steps for designing a paraplex without accepting the concept of a divisible mind.

If you fall into the trap of joining the bandwagon of accepting the notion of a NON-DIVISIBLE MIND, then kiss goodbye to the dream of mathematicising awareness, let alone designing a 'PARAPLEXED MIND'.

matt grime
Homework Helper
You could then define the awareness A(X,Y) to be the triple (card(XnY),card(X),card(Y)), which seems to contain all the information you wish, and can be lexicographically ordered.

and russell, why are you pointing me to a basic introduction to something I have a degree in?

matt grime said:
You could then define the awareness A(X,Y) to be the triple (card(XnY),card(X),card(Y)), which seems to contain all the information you wish, and can be lexicographically ordered.

and russell, why are you pointing me to a basic introduction to something I have a degree in?

http://sulcus.berkeley.edu/FreemanWWW/manuscripts/IF8/99.html

Abstract

To explain how stimuli cause consciousness, we have to explain causality. We can't trace linear causal chains from receptors after the first cortical synapse, so we use circular causality to explain neural pattern formation by self-organizing dynamics. But an aspect of intentional action is causality, which we extrapolate to material objects in the world. Thus causality is a property of mind, not matter.

http://sulcus.berkeley.edu/FreemanWWW/manuscripts/wjfmanuscripts.html

matt grime
Homework Helper
And, again, what's your point? I ask a question about "awareness" between two sets of real numbers and you point me at something like this, without any explanation of what one is supposed to think you're talking about, and all aimed at me apparently. I've read plenty of papers like this; what's your point?

arildno
Homework Helper
Gold Member
Dearly Missed
Excuse me, Russell:
But are you not able to provide arguments on your own??

matt grime said:
And, again, what's your point? Question about "awareness" between the two sets of real numbers and you point out something like this without explanation of what one is supposed to think you're talking about, and all aimed at me apparently. I've read plenty of papers like this, what's your point?

I am thinking that "awareness" between sets of numbers is hard to define. Awareness might be non-computable?

matt grime
Homework Helper
They aren't even arguments, really. That last link points to a philosophical discussion about the nature of things in the physical world, and has nothing to do with mathematical objects.

A(X,Y)=(card(XnY),card(X),card(Y)).

I want the class {A(X,Y): X and Y are sets} to be orderable. I don't wish it to be a well ordering or anything; just an ordering with trichotomy (is that what's called a linear order?).

You mentioned lex ordering; so would you say A(X_1,Y_1) < A(X_2,Y_2) iff
card(X_1 n Y_1) < card(X_2 n Y_2), or
card(X_1 n Y_1) = card(X_2 n Y_2) and card(X_1) < card(X_2), or
card(X_1 n Y_1) = card(X_2 n Y_2) and card(X_1) = card(X_2) and card(Y_1) < card(Y_2),
and otherwise A(X_1,Y_1) >= A(X_2,Y_2)?

Alternately, I want awareness to go up the more the two sets have in common and go down the more they differ (as in XOR). I know you can't subtract card(X xor Y) from card(XnY), but maybe some contrived 'thing' would go up as card(XnY) goes up and go down as card(X xor Y) goes up. Maybe the set of functions from X to Y having at least one fixed point does this. I was also thinking of the set of functions from X to Y for which XnY is a confining set (i.e. f(XnY) is a subset of XnY). Will play around and see...
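Both ideas in this post, the lexicographically ordered triple and the "confining set" condition, are easy to prototype for finite sets. A Python sketch (function names my own):

```python
from itertools import product

def lex_awareness(X, Y):
    """matt grime's proposed triple; Python tuples compare lexicographically."""
    return (len(X & Y), len(X), len(Y))

# {1,2,'c'} shares more with {1,2,3} than {1,'b','c'} does
print(lex_awareness({1, 2, 3}, {1, 2, 'c'}) > lex_awareness({1, 2, 3}, {1, 'b', 'c'}))  # True

def confining_functions(X, Y):
    """Functions f: X -> Y for which X&Y is confining, i.e. f(X&Y) is a subset of X&Y."""
    Xs, I = sorted(X, key=repr), X & Y
    return [
        dict(zip(Xs, outs))
        for outs in product(sorted(Y, key=repr), repeat=len(Xs))
        if all(fx in I for x, fx in zip(Xs, outs) if x in I)
    ]

# There are |I|^|I| * |Y|^(|X|-|I|) such functions; here 1^1 * 2^1 = 2
print(len(confining_functions({1, 2}, {1, 3})))  # 2
```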

matt grime