Is Time Merely Constant Change?

  • Thread starter: Outlandish_Existence
  • Tags: Time
AI Thread Summary
The discussion centers on the perception and nature of time, with participants questioning whether time is an illusion or a fundamental aspect of reality. Many argue that what we perceive as time is merely a measurement of change, suggesting that everything is in a constant state of transformation rather than passing through time. The conversation references philosophical and scientific perspectives, including ideas from notable figures like Stephen Hawking and Julian Barbour, to support the notion of a dimensionless universe where time and space may not exist independently. Participants express a desire for deeper understanding of why change occurs and the implications of perceiving time as an illusion. Ultimately, the dialogue emphasizes the complexity of defining time and its relationship to change in the universe.
  • #451
Tosh said:
Each moment is a temporary physical object...is that ok?
"Time" is that which is intermediate between "moments"--each "momemt" is outside "time".
 
  • #452
Rade said:
"Time" is that which is intermediate between "moments"--each "momemt" is outside "time".

What is the difference between the 'moment' and the 'intermediate'?
 
  • #453
Doctordick said:
Hi Anssi, I am sorry I confused you. Sometimes I write a lot without realizing the various ways what I write can be taken; to paraphrase an old cliché, there are more ways to misinterpret what is being said than are dreamt of in your philosophy (which is really the essence of our conversation, and I, of all people, should remember that). It is no fault of yours, but you have missed the intended central point of my ramblings.

The essence of magic is the misdirection of attention and physics has much to do with magic (it makes a lot of sense unless you happen to question something they can not answer). It is often very easy to miss a simple point simply because other issues catch your attention so I perhaps shouldn't have put so many varied issues in a single post; but it does tend to reveal those misunderstandings so I suppose I can be excused. I hadn't intended to send you off on a wild goose chase through google.

Heh, don't worry about it. I educated myself a little bit on the mathematical concepts you mention, and the post seems much clearer to me now. I think I even figured out what that tiny LaTeX scribble is :)

Tell me if I got it right;
We are looking for a function that would give us a probability for a certain specific input being found from the table. We should expect squaring & normalization to 1 to be an important part of that function, as long as we want the output to be between 0 and 1. That's basically the gist of it, right?

Plus, whatever that proposed function is, it will also determine what numbers apart from the given input we would expect to be possible entries at that specific "t" (~if we were to believe it is a "valid" function). Sum over all these possibilities and so forth. Functions yielding the sum of "0" would indeed seem rather invalid :)
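As a side note, the squaring-and-normalization idea can be sketched in a few lines of code (a toy example of mine, not anything specified in the thread): squaring arbitrary real values and dividing by their total always yields numbers in [0, 1] that sum to 1.

```python
# Toy illustration: turn arbitrary real "amplitudes" into probabilities
# by squaring and normalizing, so each value lies in [0, 1] and the
# whole set sums to 1.
amplitudes = [0.5, -1.2, 2.0, 0.3]

squared = [a * a for a in amplitudes]
total = sum(squared)
probabilities = [s / total for s in squared]

print(probabilities)
print(sum(probabilities))  # 1.0 up to floating-point rounding
```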

Could I ask what browser you are using? I am using “FireFox” in its default mode and the font in the LaTex expressions seems to be actually larger than the font in the main text. Maybe you have some preference set strangely. Sorry I can't help as I am quite ignorant of such things but quite surprised to hear of your difficulty. All the windows machines and “the Internet Explorer” seem to yield about the same result.

I'm using IE7. I'm sure it is displayed the same way on every machine since LaTeX seems to just generate a bitmap image. It just renders some numbers and symbols a little blurred (even when I've zoomed in), and when I don't know what to expect, I can't be sure what everything is. I checked out "probability density" and integrals, and now it's obvious that's X1 = infinity & Xn = infinity etc :)

The only reason I even bring up quantum mechanics is that it is the most successful theory ever proposed and, by the time we finish, it will be quite obvious why it is so successful. What I am presenting to you is actually a logical deduction of quantum mechanics itself. Along with that, I will show you some subtle flaws in modern physics and their perspective on quantum mechanics.

Okay, onwards...

By the way, the single most significant question asked by most scientists is, “where do we go from here?” That question makes the implicit assumption that “where we are” is significant. That is not the question I ask; I simply ask, where should we be going? What is important about the difference is that “where we are” can have no bearing on the answer; the answer must be universal.

You mean, we shouldn't burden ourselves unnecessarily by how we have chosen to describe reality thus far?

-Anssi
 
  • #454
Siah said:
What is the difference between the 'moment' and the 'intermediate'?

See my thoughts below from another thread on "time":

As I see it, "time" is defined by "moments", time is not composed of moments, thus "moments" are outside of time but are the bounds of time, and the bounds of time are the "nows" (outside time). This must be true because time is divisible (continuous) but moments are not divisible. So, suppose two discrete moments A & C and also some continuous time [E-G]. Now A and C are not in motion (nor in rest) but they form the begin and end of the time [E-G]. Now, since A and C are contrary things (begin and end), like black and white, they can contain something intermediate between them, and that which is intermediate between the two discrete moments A (begin) and C (end) is [E-G] = time, just as that which is intermediate between black and white = grey. Now by "between" it means that time [E-G], after the moment A, must first reach some B before C, thus time must always be "between" the two moments A (begin) and C (end), for there is nowhere else for it to be since it is neither at A nor C. Thus the reason I stated: That which is intermediate between moments IS TIME.. fyi--this argument derived from my understanding of concept of time of Aristotle.

Edit: From another thread I made this claim:

If, following Aristotle on time, we consider that "that which is intermediate between existents is space", then perhaps "that which is intermediate between moments of existents is space-time"? To which the reply by Plastic Photon: And if 'is intermediate between existents' is taken to mean 'on a closed interval', time never ends; thus, space-time never ends.
 
  • #455
Hi again Anssi. Now that you mention it, some of those LaTex symbols do get small. I guess I don’t notice it because I know what is intended. Sorry about that.
AnssiH said:
You mean, we shouldn't burden ourselves unnecessarily by how we have chosen to describe reality thus far?
If by, “how we have chosen to describe reality thus far”, you mean your world view, then you understand exactly what I meant.

There are a few other minor details which will have to be cleared up sooner or later but, for the moment, I would like to get over to that symmetry issue as I think you understand enough of my attack to understand it. At the moment, I have defined the knowledge on which any explanation must depend as equivalent to a set of points in an (x, tau, t) space: i.e., a collection of numbers associated with each t index which I have referred to as B(t). Any explanation can be seen as a function of those indices (the explanation yielding a specific expectation for that set of indices at time t). The output of that function is a probability and may be written

P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)

Now, the thoughts we need to go through here are subtle and easy to confuse but I think you have the comprehension to follow them. Suppose someone discovers a flaw free solution to the problem represented by some given collection of ontological elements. That means that their solution assigns meanings to those indices used in P. But, if we want to understand his solution, we need enough information to deduce the meanings he has attached to those indices. It is our problem to uncover his solution from what we come to know of the patterns in his assignment of indices. The point being that the solution (which has to contain the definitions of the underlying ontological elements) arises from patterns in the assigned indices. And the end result is to yield a function of those indices which is the exact probability assigned to that particular collection implied by that explanation.

But the indices are mere labels for those ontological elements. If we were to create a new problem by merely adding a number a to every index, the problem is not really changed in any way. Exactly the same explanation can be deduced from that second set of indices and it follows directly that

P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,x_3+a,\tau_3+a,\cdots,x_n+a,\tau_n+a,t)

must yield exactly the same probability. That leads to a very interesting equation.

P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,\cdots,x_n+a,\tau_n+a,t)-P(x_1+b,\tau_1+b,x_2+b,\tau_2+b,\cdots,x_n+b,\tau_n+b,t)=0

Simple division by (a-b) and taking the limit as that difference goes to zero makes that equation identical to the definition of a derivative. It follows that all flaw free explanations must obey the equation.

\frac{d}{da}P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,x_3+a,\tau_3+a,\cdots,x_n+a,\tau_n+a,t)=0

Let me know if you have any problems with that. I will be out of town for the next few weeks but I will try to get to the forum when I get access to the web but don't expect quick responses.

Have fun -- Dick
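For anyone who wants to see the shift-symmetry argument concretely, here is a small numerical sketch (the particular P is an arbitrary example of mine; any function depending only on differences of its labels is invariant under x_i -> x_i + a, so its derivative with respect to a must vanish):

```python
import math

# Example P depending only on the difference of its labels,
# hence invariant under the shift x_i -> x_i + a.
def P(x1, x2):
    return math.exp(-(x1 - x2) ** 2)

x1, x2 = 0.7, -1.3
h = 1e-6

# Central-difference estimate of d/da P(x1 + a, x2 + a) at a = 0.
dP_da = (P(x1 + h, x2 + h) - P(x1 - h, x2 - h)) / (2 * h)
print(abs(dP_da))  # effectively zero
```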
 
  • #456
Rade said:
As I see it, "time" is defined by "moments", time is not composed of moments, thus "moments" are outside of time but are the bounds of time,
Are 'moments' composed of time? If not, what are they composed of?
 
  • #457
Siah said:
Are 'moments' composed of time? If not, what are they composed of?
NO, moments are not composed of time; moments are an "attribute" of time. An attribute is something that is not the entity itself, yet the entity and attribute are not two different things. A "moment" as an attribute of "time" is what can be separated only mentally from time--as opposed to a "part", which can be materially separated from the whole. It is not possible to have a concept of "moment" without a concept of "time", nor a concept of "time" without a concept of "moment". Moments are like electrons: they are "composed" of themselves. Moments, like all attributes of entities, are indivisible. Moments are the "now", the "present". Moments are the limit of the "past" and "future"--the "before" and "after". Moments are infinite in number.
 
  • #458
Rade said:
NO, moments are not composed of time; moments are an "attribute" of time. An attribute is something that is not the entity itself, yet the entity and attribute are not two different things. A "moment" as an attribute of "time" is what can be separated only mentally from time--as opposed to a "part", which can be materially separated from the whole. It is not possible to have a concept of "moment" without a concept of "time", nor a concept of "time" without a concept of "moment". Moments are like electrons: they are "composed" of themselves. Moments, like all attributes of entities, are indivisible. Moments are the "now", the "present". Moments are the limit of the "past" and "future"--the "before" and "after". Moments are infinite in number.

Isn't time just a rudimentary form of calculus, or the calculus of variations? Time, in this sense, would then be the result of early human studies of the rate of change. How far off am I? It's been my explanation for time all along, so I'm biased.:rolleyes:
 
  • #459
Doctordick said:
If by, “how we have chosen to describe reality thus far”, you mean your world view, then you understand exactly what I meant.

Yup.

There are a few other minor details which will have to be cleared up sooner or later but, for the moment, I would like to get over to that symmetry issue as I think you understand enough of my attack to understand it. At the moment, I have defined the knowledge on which any explanation must depend as equivalent to a set of points in an (x, tau, t) space: i.e., a collection of numbers associated with each t index which I have referred to as B(t). Any explanation can be seen as a function of those indices (the explanation yielding a specific expectation for that set of indices at time t). The output of that function is a probability and may be written

P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)

Now, the thoughts we need to go through here are subtle and easy to confuse but I think you have the comprehension to follow them. Suppose someone discovers a flaw free solution to the problem represented by some given collection of ontological elements. That means that their solution assigns meanings to those indices used in P. But, if we want to understand his solution, we need enough information to deduce the meanings he has attached to those indices. It is our problem to uncover his solution from what we come to know of the patterns in his assignment of indices. The point being that the solution (which has to contain the definitions of the underlying ontological elements) arises from patterns in the assigned indices. And the end result is to yield a function of those indices which is the exact probability assigned to that particular collection implied by that explanation.

But the indices are mere labels for those ontological elements. If we were to create a new problem by merely adding a number a to every index, the problem is not really changed in any way. Exactly the same explanation can be deduced from that second set of indices and it follows directly that

P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,x_3+a,\tau_3+a,\cdots,x_n+a,\tau_n+a,t)

must yield exactly the same probability. That leads to a very interesting equation.

P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,\cdots,x_n+a,\tau_n+a,t)-P(x_1+b,\tau_1+b,x_2+b,\tau_2+b,\cdots,x_n+b,\tau_n+b,t)=0

Simple division by (a-b) and taking the limit as that difference goes to zero makes that equation identical to the definition of a derivative. It follows that all flaw free explanations must obey the equation.

\frac{d}{da}P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,x_3+a,\tau_3+a,\cdots,x_n+a,\tau_n+a,t)=0

Let me know if you have any problems with that.

It took me a while to figure out the mathematical expressions, but thank god for Wikipedia :) I studied derivatives and differentiation, and with that limited understanding, I cannot see a fault in the above. But what does it mean? Does something being symmetric in our models imply there is an invalid ontological element in use? Hmmm, I think I can see some kind of relationship between this and the artificial concepts in our worldviews (mental models of reality).

Well, how would you put it, what does this say about "symmetry"?

-Anssi
 
  • #460
Rade said:
NO, moments are not composed of time; moments are an "attribute" of time. An attribute is something that is not the entity itself, yet the entity and attribute are not two different things. A "moment" as an attribute of "time" is what can be separated only mentally from time--as opposed to a "part", which can be materially separated from the whole. It is not possible to have a concept of "moment" without a concept of "time", nor a concept of "time" without a concept of "moment". Moments are like electrons: they are "composed" of themselves. Moments, like all attributes of entities, are indivisible. Moments are the "now", the "present". Moments are the limit of the "past" and "future"--the "before" and "after". Moments are infinite in number.

I am trying to clarify this earlier statement:
"Time is that which is intermediate between moments"
You say moments are an "attribute" of time. As I understand it, you are saying that moments have a time-span. Is this correct?
 
  • #461
Siah said:
I am trying to clarify this earlier statement:
"Time is that which is intermediate between moments"
You say moments are an "attribute" of time. As I understand it, you are saying that moments have a time-span. Is this correct?
No, this is not how I see it. Moments do not have a "time-span"--moments are not divisible, thus no span concept exists for moments. To be "between" logically requires a concept of three. Suppose two moments (A) and (D) at the present, the now. "Time" (B ---> C) is that which is intermediate between the moments; time is neither within A nor D as the present; A and D are limits of time (B----> C). So you see the concept of three--this is what I mean when I say "time is intermediate between moments": (A) | (B ---> C) | (D).
 
  • #462
AnssiH said:
Well, how would you put it, what does this say about "symmetry"?
The equation is a direct consequence of “symmetry”. The addition of a to every term in a collection of reference numbers is essentially what is normally referred to as a “shift symmetry”. With regard to symmetry, I think I already gave you a link to a post I made to “saviormachine” a couple of years ago (post number 696 in the “Can everything be reduced to physics” thread). That post, selfAdjoint’s response to it (immediately below that one) and my response to selfAdjoint’s (post number 703) should be read very carefully before googling around. I will paste one quote which I think is the central issue here.
Doctordick said:
My interest concerns an aspect of symmetry very seldom brought to light. For the benefit of others, I will comment that the consequences of symmetry are fundamental to any study of mathematical physics. The relationship between symmetries and conserved quantities was laid out in detail through a theorem proved by Emmy Noether (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Noether_Emmy.html) sometime around 1915. The essence of the proof can be found on John Baez's web site (https://www.physicsforums.com/insights/author/john-baez/). This is fundamental physics accepted by everyone. The problem is that very few students think about the underpinnings of the circumstance but rather just learn to use it. :frown:
What I feel everyone seems to miss is the fact that there exists no proof which yields any information which is not embedded in the axioms on which the proof is based. In fact, that comment expresses the fundamental nature of a proof! In my opinion, the fundamental underpinning of Noether’s proof is the simple fact that any symmetry can be seen as equivalent to the definition of a specific differential: i.e., in a very real sense, Noether’s theorem is true by definition, as are all proofs.

I was somewhat sloppy when I wrote my last post because the issue was to get you to think about the impact of shift symmetry in ontological labels. It is very interesting to note that x, tau and t are all totally independent collections of indices (the fact that we have laid them out as positions in a three dimensional Euclidean space says that shift symmetry is applicable to each dimension independently). In other words, that equation can actually be divided into three independent equations.

\frac{d}{da}P(x_1+a,\tau_1,x_2+a,\tau_2,x_3+a,\tau_3,\cdots,x_n+a,\tau_n,t)=0

\frac{d}{da}P(x_1,\tau_1+a,x_2,\tau_2+a,x_3,\tau_3+a,\cdots,x_n,\tau_n+a,t)=0

\frac{d}{da}P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t+a)=0

I think you should find that quite satisfactory. If not, let me know what confusion it engenders.

The next step involves what is called “partial” differentiation. A partial derivative is defined on functions of more than one variable (note that above we are looking at the probability as a function of one variable: i.e., only a is being presumed to change; all other variables are seen as a simple set of constants). When one has multiple variables, one can define a thing called the “partial” derivative: the derivative with respect to one of those variables under the constraint that none of the other variables change. Essentially, the equations above can be seen as partials with respect to a except for one fact: the probability P is not being expressed as a function of “a”. That is to say, “a” is not technically an argument of P.

On the other hand, the equation does say something about how the other arguments must change with respect to one another. In order to deduce the correct implied relationship, one needs to understand one simple property of partial derivatives. The property that I am referring to is often called “the chain rule of partial differentiation”. I googled “the definition of the chain rule of partial differentiation” and got a bunch of hits on “by use of the definition of the chain rule of partial differentiation …” which seems pretty worthless with regard to exactly what it is. If you know what it is, thank the lord. If you don’t, do you know anyone with enough math background to explain it to you? It is a lot easier to explain in person with a blackboard; but, if necessary, I will compose a document I think you can understand.

If anyone out there feels they can do the deed in a quick and dirty fashion I will accept the assistance. Or, if anyone can give Anssi a link to a good presentation of the definition, I would certainly appreciate it. Meanwhile, I will await your response.

Have fun -- Dick

PS I’m having a ball. Our first grandchild (we thought we would never get one) will be one year old Sunday and she can sure wear out an old man. She’s not quite walking yet (not by herself anyway) and wants to walk everywhere holding on to your finger (which requires me to walk bent over).
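For what it is worth, the multivariable chain rule being asked about can also be checked numerically. The particular f, g and h below are arbitrary examples of mine, chosen only to exercise the identity dz/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt):

```python
import math

# z = f(x, y) with x = g(t), y = h(t); compare the chain-rule value of
# dz/dt with a direct numerical derivative of the composite function.
def f(x, y):
    return x * x * y + math.sin(y)

def g(t):
    return math.cos(t)

def h(t):
    return t * t

t0, eps = 0.9, 1e-6
x0, y0 = g(t0), h(t0)

# Partials of f and total derivatives of g, h by central differences.
df_dx = (f(x0 + eps, y0) - f(x0 - eps, y0)) / (2 * eps)
df_dy = (f(x0, y0 + eps) - f(x0, y0 - eps)) / (2 * eps)
dx_dt = (g(t0 + eps) - g(t0 - eps)) / (2 * eps)
dy_dt = (h(t0 + eps) - h(t0 - eps)) / (2 * eps)

chain = df_dx * dx_dt + df_dy * dy_dt
direct = (f(g(t0 + eps), h(t0 + eps)) - f(g(t0 - eps), h(t0 - eps))) / (2 * eps)

print(abs(chain - direct))  # the two agree up to numerical error
```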
 
  • #464
Thank you Rade; those are all excellent links to good information on the chain rule and how it applies to functions of many variables. With regard to my presentation, the link to “case 1” of http://tutorial.math.lamar.edu/AllBrowsers/2415/ChainRule.asp (your second reference) is the most directly applicable to my next step. Paul gives case 1 as the problem of computing dz/dt when z is given as a function of x = g(t) and y = h(t) or, to put it exactly as he states it, Case 1: z=f(x,y), x=g(t), y=h(t) and compute dz/dt.

What we want to do is compute dP/da, which we know must vanish, but which is expressed in terms of the reference labels of our valid ontological elements. We have established that the probability of a specific set of labels is given by an expression of the form,

Probability= P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)

or, just as reasonably

Probability= P(z_1,\tau_1,z_2,\tau_2,z_3,\tau_3,\cdots,z_n,\tau_n,t)

where our shift symmetry has resulted in the fact that those arguments, when expressed as functions of x and a are given by

z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.

With regard to our representation that dP/da vanishes, we can apply the example given by Paul,

\frac{dz}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}

as, in our case, equivalent to

\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial P}{\partial z_i}\frac{dz_i}{da};

however, in our case,

\frac{dz_1}{da}=\frac{dz_2}{da}=\frac{dz_3}{da}=\cdots=\frac{dz_n}{da}=1.

which yields the final result that

\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial}{\partial z_i}P = 0

when the x arguments of P are symbolized by z. But z is just a letter used to represent those arguments; one can not change the truth of the equation by changing the name of the variable. This same argument can be applied to the other independent arguments of P, yielding, in place of the differential expressions in post 462, the following three differential constraints.

\sum_{i=1}^{i=n}\frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

\sum_{i=1}^{i=n}\frac{\partial}{\partial \tau_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

and

\frac{\partial}{\partial t}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

which has utterly no mention of the shift parameter a.

If I get confirmation that the above is understood and accepted as a rational expectation from any mathematical expression of a flaw free explanation of the information represented by those ontological elements underlying that explanation, I will continue by showing you how all of the relationships so far developed can be seen as a single mathematical expression which must be obeyed by each and every flaw free explanation which can be constructed.

I am very much looking forward to your response -- Dick
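A quick numerical sanity check of these constraints, using an arbitrary shift-invariant example in place of a real P (nothing from the discussion is assumed beyond the symmetry itself): for any function that depends only on differences between its labels, the sum of its partial derivatives over all labels vanishes.

```python
import math

# Example P that depends only on differences between its labels, so it
# is invariant under shifting every label by the same amount.
def P(x1, x2, x3):
    return (x1 - x2) ** 2 + math.cos(x2 - x3)

xs = [0.4, -0.8, 1.5]
eps = 1e-6

# Sum of the partial derivatives of P with respect to every label,
# each estimated by a central difference.
total = 0.0
for i in range(3):
    plus, minus = list(xs), list(xs)
    plus[i] += eps
    minus[i] -= eps
    total += (P(*plus) - P(*minus)) / (2 * eps)

print(abs(total))  # effectively zero
```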
 
  • #465
Sorry for being so slow to reply again. I am having a summer vacation and was away for a couple of days, and on top of that it takes me a while to figure out all the math concepts, since I need to study them before I understand what is being said :)

Doctordick said:
Our first grandchild (we thought we would never get one) will be one year old Sunday and she can sure wear out an old man. She’s not quite walking yet (not by herself anyway) and wants to walk everywhere holding on to your finger (which requires me to walk bent over).

Heh, don't break your back :) I also became an uncle a couple of months back, plus my two other sisters are just about to multiply as well :)

Doctordick said:
What I feel everyone seems to miss is the fact that there exists no proof which yields any information which is not embedded in the axioms on which the proof is based. In fact, that comment expresses the fundamental nature of a proof! In my opinion, the fundamental underpinning of Noether’s proof is the simple fact that any symmetry can be seen as equivalent to the definition of a specific differential

Yeah that makes sense.

I was somewhat sloppy when I wrote my last post because the issue was to get you to think about the impact of shift symmetry in ontological labels. It is very interesting to note that x, tau and t are all totally independent collections of indices (the fact that we have laid them out as positions in a three dimensional Euclidean space says that shift symmetry is applicable to each dimension independently). In other words, that equation can actually be divided into three independent equations.

\frac{d}{da}P(x_1+a,\tau_1,x_2+a,\tau_2,x_3+a,\tau_3,\cdots,x_n+a,\tau_n,t)=0

\frac{d}{da}P(x_1,\tau_1+a,x_2,\tau_2+a,x_3,\tau_3+a,\cdots,x_n,\tau_n+a,t)=0

\frac{d}{da}P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t+a)=0

I think you should find that quite satisfactory.

Yeah, can't see any fault with that.

The next step involves what is called “partial” differentiation. A partial derivative is defined on functions of more than one variable (note that above we are looking at the probability as a function of one variable: i.e., only a is being presumed to change; all other variables are seen as a simple set of constants). When one has multiple variables, one can define a thing called the “partial” derivative: the derivative with respect to one of those variables under the constraint that none of the other variables change. Essentially, the equations above can be seen as partials with respect to a except for one fact: the probability P is not being expressed as a function of “a”. That is to say, “a” is not technically an argument of P.

On the other hand, the equation does say something about how the other arguments must change with respect to one another. In order to deduce the correct implied relationship, one needs to understand one simple property of partial derivatives. The property that I am referring to is often called “the chain rule of partial differentiation”. I googled “the definition of the chain rule of partial differentiation” and got a bunch of hits on “by use of the definition of the chain rule of partial differentiation …” which seems pretty worthless with regard to exactly what it is. If you know what it is, thank the lord.

I didn't, but now I have some idea about it with the links Rade posted (thanks).

Doctordick said:
Paul gives case 1 as the problem of computing dz/dt when z is given as a function of x = g(t) and y =h(t) or, to put it exactly as he states it, Case 1: z=f(x,y), x=g(t), y=h(t) and compute dz/dt).

What we want to do is compute dP/da, which we know must vanish, but which is expressed in terms of the reference labels of our valid ontological elements. We have established that the probability of a specific set of labels is given by an expression of the form,

Probability= P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)

or, just as reasonably

Probability= P(z_1,\tau_1,z_2,\tau_2,z_3,\tau_3,\cdots,z_n,\tau_n,t)

where our shift symmetry has resulted in the fact that those arguments, when expressed as functions of x and a are given by

z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.

With regard to our representation that dP/da vanishes, we can apply the example given by Paul,

\frac{dz}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}

as, in our case, equivalent to

\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial P}{\partial z_i}\frac{dz_i}{da};

however, in our case,

\frac{dz_1}{da}=\frac{dz_2}{da}=\frac{dz_3}{da}=\cdots=\frac{dz_n}{da}=1.

Here I'm starting to have some trouble understanding what is being said. What is meant by \sum_{i=1}^{i=n} ? Something about this applying to every entry in the table?

I understood we are using z_i to express x_i+a, but I don't understand how \frac{dz_1}{da}=1

which yields the final result that

\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial}{\partial z_i}P = 0

Hmmm, that final result \frac{dP}{da}= 0
Isn't it the same as was established earlier already? I.e. changing "a" will not change the probability P?

when the x arguments of P are symbolized by z. But z is just a letter used to represent those arguments; one can not change the truth of the equation by changing the name of the variable. This same argument can be applied to the other independent arguments of P, yielding, in place of the differential expressions in post 462, the following three differential constraints.

\sum_{i=1}^{i=n}\frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

\sum_{i=1}^{i=n}\frac{\partial}{\partial \tau_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

and

\frac{\partial}{\partial t}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

which has utterly no mention of the shift parameter a.

Hmm, how should I read these expressions...? That the probability doesn't change when we change... what? I hope you (or anyone) can clear up the things I am not getting :)

-Anssi
 
  • #466
AnssiH said:
Sorry for being so slow to reply again.
Don’t worry about it.

Regarding “they are % and % is not a thing”:
Anssi said:
Heh, isn't it interesting to try to force yourself through this barrier?
:) At least it gives us a better understanding about how there really is a barrier there, doesn't it?
It is interesting to note that this exchange concerns exactly what I am talking about: i.e., getting on the other side of that barrier. Tarika is using “%” for exactly the reason I am using numerical labels. The only reason I am using “numerical labels” is that there are a lot more of them than there are things like %, #, @, &, etc. Plus that, I have the advantage that there exists a world of internally self-consistent defined operations on those numerical labels. That is, I don’t have to explain each and every manipulation I want to perform on the labels. (See Russell’s works on the definition of mathematics.) You can google the phrase and get enough stuff to keep anyone busy for years. The only reason I bring it up is that he was very much interested in defining mathematics from “ground zero”. That is exactly the problem which constitutes the essential nature of the barrier being referred to above.
AnssiH said:
I am having a summer vacation and was away for couple of days, and on top of that it takes me a while to figure out all the math concepts since I need to study them before I understand what is being said :)
Yeah, I knew that was going to be a problem; but I think we are beginning to clear up the true depth of the difficulty. I think we can handle it.
AnssiH said:
Here I'm starting to have some troubles understanding what is being said. What is meant with \sum_{i=1}^{i=n} ? Something about this applying to every entry in the table?
The capital sigma is used as a shorthand notation to represent a sum. The definitions of i given above and below the sigma tell you the starting value of i and the ending value of i. The term to be summed has an i reference in it which tells you how to construct the ith term in that sum. If you look at Paul’s example (for Case 1) you will see that the original function was a function of two variables and that his “total derivative”, dz/dt, is given by a sum of two terms: a partial with respect to each of those two variables times the “total derivative” of each variable with respect to t. (“Total derivative” is the term used for what was originally defined to be “a derivative” so as to contrast it with the idea of a “partial derivative”.) In our case, we have n arguments subject to our shift parameter "a" so our total derivative consists of a sum of n terms, one partial for each term in the function (times the respective total derivative).

This defined operation (the partial derivative with respect to the given argument multiplied by the common derivative of that same argument with respect to a) is to be performed for every numerical label in the collection of labels which constitute the arguments of that probability function (the mathematical function which is to yield the probability that the specific set of labels will be in the table). The n different results obtained by performing that specific mathematical operation (which, if we happen to know what the function looks like, will yield a new function for each chosen i) are to be added together.

The requirement that the shift of "a" cannot yield any change in that resultant expression yields a rule which the probability function can not violate. Putting it simply, if we did indeed know exactly the correct function for n-1 of those arguments, we could use that differential relationship to tell us exactly the appropriate relationship for the missing argument. This is a simple consequence of “self consistency” of the explanation.
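As a concrete sketch of this chain-rule machinery (using a made-up three-argument P, purely for illustration, since the actual probability function is unknown), one can check numerically that the total derivative with respect to the shift equals the sum of the partials, with each dz_i/da equal to one:

```python
import math

# A toy three-label function (hypothetical; the real P of the discussion is unknown).
def P(x1, x2, x3):
    return math.sin(x1) * math.exp(-x2**2) + x3**2

def partial(f, args, i, h=1e-6):
    """Central-difference partial derivative of f with respect to args[i]."""
    a = list(args)
    a[i] += h;  hi = f(*a)
    a[i] -= 2*h; lo = f(*a)
    return (hi - lo) / (2*h)

def total_deriv_wrt_a(args, h=1e-6):
    """d/da of P(x1+a, x2+a, x3+a) at a=0: shift every argument together."""
    hi = P(*[x + h for x in args])
    lo = P(*[x - h for x in args])
    return (hi - lo) / (2*h)

args = (0.3, 1.1, -0.4)
# Chain rule: dP/da = sum_i (dP/dx_i) * (dz_i/da), and each dz_i/da = 1,
# so the total derivative collapses to the plain sum of the partials.
chain_sum = sum(partial(P, args, i) for i in range(3))
print(abs(total_deriv_wrt_a(args) - chain_sum) < 1e-5)  # True: the two agree
```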
AnssiH said:
I understood we are using z_i to express x_i+a, but I don't understand how \frac{dz_1}{da}=1
Our shift symmetry can be seen as a simple change in variables where each x has been replaced by a related z where each z has been defined by adding a to the respective x.

z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.

In order to evaluate the sum expressing the total derivative of P with respect to a (the derivative which we deduced earlier must vanish) we need the total derivative of each z with respect to a. But each z is obtained by adding a to the appropriate x. This constraint (as a function of a) presumes there is no change in the base x (as it is a shift on all x’s). From this perspective, each z can be seen as a constant x plus a; it follows that dx/da vanishes (x is not a function of a) and da/da is identically one by definition.
AnssiH said:
Hmmm, that final result \frac{dP}{da}= 0
Isn't it the same as was established earlier already? I.e. changing "a" will not change the probability P?
Exactly right except for one thing. We haven’t proven dP/da = zero here; what we have done is shown how that result (as you say “established earlier”) is totally equivalent to the assertion that the sum over all partials with respect to each argument must vanish.

We first proved that we could see any specific explanation of our “what is”, is “what is” table as a mathematical function which would yield the probability of seeing a specific entry in that table. Then we argued that shift symmetry required the total derivative with respect to that shift to vanish. Now I have shown that that requirement is totally equivalent to requiring a specifically defined sum of partial derivatives of that probability function, with respect to those numerical labels (numerical labels which are defined by that explanation), to vanish.
AnssiH said:
Hmm, how should I read these expressions...? That the probability doesn't change when we change... what? I hope you (or anyone) can clear up the things I am not getting :)
This says that every ontological element (valid or invalid) associated with “that explanation” has associated with it another thing (a consequence of symbolic shift symmetry). If we have the function for the probability relationships and the numerical labels, we can deduce a proper label (numerical label) to be assigned to that ontological element. What is interesting is the fact that the sum over all those “deduced proper labels” must be zero. What we are talking about here is a conserved quantity; the sum over all of them is unchanging though the individual quantities associated with each ontological element might very well change.
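A small numerical illustration (with a hypothetical shift-symmetric P, one depending only on differences of the labels, which is my own made-up example): shifting every argument together leaves P unchanged, and, equivalently, the sum of its partial derivatives vanishes:

```python
import math, random

# Hypothetical shift-symmetric probability: it depends only on label
# differences, so P(x1+a, ..., xn+a) = P(x1, ..., xn) for every shift a.
def P(xs):
    return math.exp(-sum((xs[i] - xs[i+1])**2 for i in range(len(xs)-1)))

def partial(f, xs, i, h=1e-6):
    """Central-difference partial derivative with respect to xs[i]."""
    a = list(xs)
    a[i] += h;  hi = f(a)
    a[i] -= 2*h; lo = f(a)
    return (hi - lo) / (2*h)

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(5)]

# Shift symmetry holds exactly...
print(abs(P([x + 0.7 for x in xs]) - P(xs)) < 1e-12)  # True
# ...and, equivalently, the sum of the partial derivatives vanishes.
total = sum(partial(P, xs, i) for i in range(len(xs)))
print(abs(total) < 1e-5)  # True
```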
AnssiH said:
Heh, don't break your back :) I also became an uncle couple months back, plus my two other sisters are just about to multiply as well :)
Don’t worry, we’ve survived it. We will be heading home this weekend. That’s the great thing about being grandparents; you can always go home when the strain begins to show (and believe me, it's beginning to show; I am looking forward to our own schedule and our own home). You can’t do that with your own kids.

Have fun -- Dick
 
  • #467
Doctordick said:
Regarding “they are % and % is not a thing.”
It is interesting to note that this exchange concerns exactly what I am talking about: i.e., getting on the other side of that barrier. Tarika is using “%” for exactly the reason I am using numerical labels.

Although he isn't thinking about finding any requirements or constraints for our ontological assumptions. He just said that because he actually stopped and thought about my assertion, and tried to see if it was water-proof. Anyhow, it seems like people get some bad vibes from the word "barrier" in this context, for no good reason at all... (Makes us feel a little bit retarded I guess? :)

Doctordick said:
Yeah, I knew that was going to be a problem; but I think we are beginning to clear up the true depth of the difficulty. I think we can handle it.
The capital sigma is used as a shorthand notation to represent a sum. The definitions of i given above and below the sigma tell you the starting value of i and the ending value of i. The term to be summed has an i reference in it which tells you how to construct the ith term in that sum. If you look at Paul’s example (for Case 1) you will see that the original function was a function of two variables and that his “total derivative”, dz/dt, is given by a sum of two terms: a partial with respect to each of those two variables times the “total derivative” of each variable with respect to t. (“Total derivative” is the term used for what was originally defined to be “a derivative” so as to contrast it with the idea of a “partial derivative”.) In our case, we have n arguments subject to our shift parameter "a" so our total derivative consists of a sum of n terms, one partial for each term in the function (times the respective total derivative).

This defined operation (the partial derivative with respect to the given argument multiplied by the common derivative of that same argument with respect to a) is to be performed for every numerical label in the collection of labels which constitute the arguments of that probability function (the mathematical function which is to yield the probability that the specific set of labels will be in the table). The n different results obtained by performing that specific mathematical operation (which, if we happen to know what the function looks like, will yield a new function for each chosen i) are to be added together.

Okay I see.

The requirement that the shift of "a" cannot yield any change in that resultant expression yields a rule which the probability function can not violate. Putting it simply, if we did indeed know exactly the correct function for n-1 of those arguments, we could use that differential relationship to tell us exactly the appropriate relationship for the missing argument. This is a simple consequence of “self consistency” of the explanation.

That makes sense.

Our shift symmetry can be seen as a simple change in variables where each x has been replaced by a related z where each z has been defined by adding a to the respective x.

z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.

In order to evaluate the sum expressing the total derivative of P with respect to a (the derivative which we deduced earlier must vanish) we need the total derivative of each z with respect to a. But each z is obtained by adding a to the appropriate x. This constraint (as a function of a) presumes there is no change in the base x (as it is a shift on all x’s). From this perspective, each z can be seen as a constant x plus a; it follows that dx/da vanishes (x is not a function of a) and da/da is identically one by definition.

Doh! Of course!

Exactly right except for one thing. We haven’t proven dP/da = zero here; what we have done is shown how that result (as you say “established earlier”) is totally equivalent to the assertion that the sum over all partials with respect to each argument must vanish.

We first proved that we could see any specific explanation of our “what is”, is “what is” table as a mathematical function which would yield the probability of seeing a specific entry in that table. Then we argued that shift symmetry required the total derivative with respect to that shift to vanish. Now I have shown that that requirement is totally equivalent to requiring a specifically defined sum of partial derivatives of that probability function, with respect to those numerical labels (numerical labels which are defined by that explanation), to vanish.

This says that every ontological element (valid or invalid) associated with “that explanation” has associated with it another thing (a consequence of symbolic shift symmetry). If we have the function for the probability relationships and the numerical labels, we can deduce a proper label (numerical label) to be assigned to that ontological element. What is interesting is the fact that the sum over all those “deduced proper labels” must be zero. What we are talking about here is a conserved quantity; the sum over all of them is unchanging though the individual quantities associated with each ontological element might very well change.

Right, okay. I can now understand what you are saying with the math above, albeit somewhat superficially, but nevertheless...

-Anssi
 
  • #468
Thank you Anssi. This is the first time I have ever gotten anyone (other than Paul Martin, who is a personal friend) this far along in my arguments. Everyone else drops out long before we get to this point. We only have a small number of steps to complete my deduction. Remember post number 426 on this thread? It was there that I pointed out that there had to exist a set of invalid ontological elements which would guarantee that a function existed whose roots would yield exactly that "what is", is "what is" table.
Doctordick said:
This means that the missing index can be seen as a function of the other indices. Again, we may not know what that function is but we do know that the function must agree with our table. What this says is that there exists a mathematical function which will yield

(x,\tau)_n(t) = f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t)

It follows that the function F defined by

F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n) = (x(t),\tau(t))_n - f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t) = 0

is a statement of the general constraint which guarantees that the entries conform to the given table. That is to say, this procedure yields a result which guarantees that there exists a mathematical function, the roots of which are exactly the entries to our "what is", is "what is" table. Clearly, it would be nice to know the structure of that function.
What is somewhat more important is the fact that I have proved that such a function exists and that one achieves that function through the addition of “invalid ontological elements”. What you need to remember is that these “invalid ontological elements” are invalid, not because they yield incorrect answers regarding the information to be explained but rather because they are not actually among the ontological elements which constitute the information our explanation is to explain. They are instead total figments of our imagination. That is to say that they are inventions; inventions created to provide us with the ability to say what can and can not be under the presumed rule our explanation implements (i.e., the rule being that F=0): i.e., they are ontological elements our explanation presumes exist. If our explanation is indeed flaw free, it will be totally consistent with the existence of these invalid ontological elements.

What is really profound about this realization is the fact that it implies there exists a fundamental duality: the rule and what is presumed to exist are exchangeable concepts. That is to say, what the rule has to be is a function of what is presumed to exist: it is possible to exchange one for the other so long as one maintains some complex internal relationships. It turns out this is exactly the freedom which allows us to construct a world view consistent with what we know; without this freedom the problem of “explaining the universe” could not be accomplished. Another way to state the circumstance is to point out that the “explanation of reality” is actually a rather complex data compression mechanism. One's best bet for the future is very simple: one's best expectations are given by how much the surrounding circumstances resemble something already experienced.

But let's get back to this F=0 rule. There exists a rather simple function which can totally fulfill the need required here. That function is the Dirac delta function (google “Dirac delta function” for a good run down on its properties). The Dirac delta function is usually written as \delta(x) and is defined to be exactly zero so long as x is not equal to zero; however, it also satisfies the relationship:

\int_{-\infty}^{+\infty}\delta(x)dx= 1.

Clearly, since it is exactly zero everywhere except when x=0, it must be positive infinity at x=0. It is that property which makes it so valuable as a universal F=0 function. First, it is a very simple function and is quite well defined and well understood. Second, as it is only positive, the sum indicated below will be infinite if any two labels are identical (have exactly the same x, tau numerical label).

\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0,

It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. Now that sounds like an insane suggestion; however, it's really not as insane as it sounds and it ends up yielding an extremely valuable representation which I will show to you in my next post (after I have read your response to this post).
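A rough numerical sketch of the idea (standing in for the Dirac delta with a very narrow Gaussian, and using made-up (x, tau) labels of my own choosing): the sum over distinct pairs is effectively zero when all labels differ, and blows up the moment two labels coincide:

```python
import math

def delta_approx(x, eps=1e-3):
    """Narrow normalized Gaussian standing in for the Dirac delta."""
    return math.exp(-(x / eps)**2 / 2) / (eps * math.sqrt(2 * math.pi))

def penalty(labels, eps=1e-3):
    """Sum over i != j of delta(x_i - x_j) * delta(tau_i - tau_j)."""
    n = len(labels)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                xi, ti = labels[i]
                xj, tj = labels[j]
                total += delta_approx(xi - xj, eps) * delta_approx(ti - tj, eps)
    return total

distinct = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
clashing = [(0.0, 0.0), (1.0, 0.5), (0.0, 0.0)]   # two identical labels
print(penalty(distinct) < 1e-9)   # True: constraint satisfied
print(penalty(clashing) > 1e3)    # True: constraint badly violated
```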

Sorry I was so slow to respond but I needed time to decide exactly how I was going to present this last step as it clearly seems like a rather extreme move to make even if it is true.

Have fun -- Dick
 
  • #469
Doctordick said:
Thank you Anssi. This is the first time I have ever gotten anyone (other than Paul Martin, who is a personal friend) this far along in my arguments. Everyone else drops out long before we get to this point. We only have a small number of steps to complete my deduction. Remember post number 426 on this thread? It was there that I pointed out that there had to exist a set of invalid ontological elements which would guarantee that a function existed whose roots would yield exactly that "what is", is "what is" table.
What is somewhat more important is the fact that I have proved that such a function exists and that one achieves that function through the addition of “invalid ontological elements”. What you need to remember is that these “invalid ontological elements” are invalid, not because they yield incorrect answers regarding the information to be explained but rather because they are not actually among the ontological elements which constitute the information our explanation is to explain. They are instead total figments of our imagination. That is to say that they are inventions; inventions created to provide us with the ability to say what can and can not be under the presumed rule our explanation implements (i.e., the rule being that F=0): i.e., they are ontological elements our explanation presumes exist. If our explanation is indeed flaw free, it will be totally consistent with the existence of these invalid ontological elements.

What is really profound about this realization is the fact that it implies there exists a fundamental duality: the rule and what is presumed to exist are exchangeable concepts. That is to say, what the rule has to be is a function of what is presumed to exist: it is possible to exchange one for the other so long as one maintains some complex internal relationships. It turns out this is exactly the freedom which allows us to construct a world view consistent with what we know; without this freedom the problem of “explaining the universe” could not be accomplished.

Yeah this makes perfect sense to me. It sounds like it's essentially the same issue as what I called the "fallacy of identity". I guess it's interesting that I approached this issue by thinking about how we go about understanding anything about reality. We need to classify reality into things and assign properties to them, in order to understand "this is a tennis ball and this is how it behaves". And indeed it appears we do that just for the purpose of being able to predict the future, and it does not entail a fundamental identity to the tennis ball; what we tack identity onto and what properties those things ought to have are intimately married, and one can always change the one if the other is also changed accordingly.

This certainly becomes especially important when we start discussing "fundamental particles", which don't appear so fundamental after all.

Another way to state the circumstance is to point out that the “explanation of reality” is actually a rather complex data compression mechanism. One's best bet for the future is very simple: one's best expectations are given by how much the surrounding circumstances resemble something already experienced.

Yeah, we have to discuss your ideas about practical AI at some point.

But let's get back to this F=0 rule. There exists a rather simple function which can totally fulfill the need required here. That function is the Dirac delta function (google “Dirac delta function” for a good run down on its properties). The Dirac delta function is usually written as \delta(x) and is defined to be exactly zero so long as x is not equal to zero; however, it also satisfies the relationship:

\int_{-\infty}^{+\infty}\delta(x)dx= 1.

Clearly, since it is exactly zero everywhere except when x=0, it must be positive infinity at x=0. It is that property which makes it so valuable as a universal F=0 function. First, it is a very simple function and is quite well defined and well understood. Second, as it is only positive, the sum indicated below will be infinite if any two labels are identical (have exactly the same x, tau numerical label).

\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0,

It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. Now that sounds like an insane suggestion; however, it's really not as insane as it sounds and it ends up yielding an extremely valuable representation which I will show to you in my next post (after I have read your response to this post).

That sounds insane alright! Let's see what you have in mind...

Sorry I was so slow to respond but I needed time to decide exactly how I was going to present this last step as it clearly seems like a rather extreme move to make even if it is true.

Good thing I'm not the only slow one here :)

-Anssi
 
  • #470
ya really time isn't a thing to argue, since we just invented it to keep track of things. i mean, time isn't anything but a measurement. like saying, are centimeters real... no, what kind of question is that, they are just a handy tool.
 
  • #471
AnssiH said:
Yeah, we have to discuss your ideas about practical AI at some point.
Well, since it is pretty well based on what I am showing you right now, I think it will have to be put off until you understand the essence of this presentation.
AnssiH said:
That sounds insane alright!
As I said, it's really not as insane as it sounds. Stop and think about vacuum polarization: i.e., the problems with conceiving of the vacuum as an “absolutely empty” thing, impossible to interact with. The existence of a “pure” vacuum in the sense originally put forth by scientists seems very much to be in conflict with modern physics; if there is no such thing as an “empty spot” doesn't that imply every location is full of something? I only make that comment to point out that one cannot count the idea as insane if one has any faith in modern science. However, note that I use it as a collection of “invalid ontological elements” because of its ability to yield all possible observed results, not because modern science has come to the conclusion that it is correct (I like deduction, not induction). (By the way, that “observed result” would be any possible collection of ontological elements we need to explain: i.e., it's a very powerful tool.)

Well Anssi, you've gotten a long way since we started. At this point, I think we have enough to lay out what I call my “fundamental equation”. The central issue is that all explanations can be seen as mathematical functions of arbitrary labels assigned to those “noumenons” which stand behind those explanations. What I am going to show is that all the constraints I have deduced to be necessary can be expressed in a single equation and that all flaw-free explanations must satisfy that equation.

Let me first review exactly what we now have to work with at this point. First, we have the fact that all explanations of anything can be seen as a mathematical function: the probability of a particular set of ontological elements (which is a number bounded by zero and one) is a function of the set of ontological elements being referred to and the time (as defined earlier) which can be represented by a set of numerical labels.

Probability = P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)

We now understand that the ignorance with regard to what is a correct zero reference for that display of numerical labels (shift symmetry) requires the following equations to be valid.
Doctordick said:
This same argument can be applied to the other independent arguments of P, yielding, in place of the differential expressions in post 462, the following three differential constraints.

\sum_{i=1}^{i=n}\frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

\sum_{i=1}^{i=n}\frac{\partial}{\partial \tau_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

and

\frac{\partial}{\partial t}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

which make no mention of the shift parameter a.
I further showed how viewing that probability as a square of some function (the vector dot product) provided a valuable consequence: i.e., I introduced a mechanism for guaranteeing that the constraints embodied in the concept of probability need no longer be extraneous constraints. Under my representation, they are instead embodied in the representation without constraining the remaining possibilities in any way! This is the central issue behind the representation

P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}^{\dagger}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\cdot\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)dV

Note that the “\dagger” is there solely to bring the representation closer to the common Schrödinger representation of quantum mechanics: i.e., allowing the components of that indicated vector to be “complex” is essentially adding nothing which could not just as easily be represented by twice as many “real” components in the vector nature of \vec{\Psi}. The fact that the number of components must be even is of no account at all when seen from the perspective of the availability of invalid ontological elements (if that really needs clarification, I will clarify it). It turns out to be no more than a convenience which brings mathematical relationships already worked out in detail to bear directly on the problem we need to solve. At issue is expressing the constraints on the mathematical function \vec{\Psi} instead of dealing with the additional constraints were we to work with the probability function itself. It is straightforward calculus to show that the constraint,

\sum_i\frac{\partial}{\partial x_i}P \equiv \sum_i\frac{\partial}{\partial x_i}\vec{\Psi}^\dagger \cdot \vec{\Psi}=0,

is exactly equivalent to the constraint,

\sum_i\frac{\partial}{\partial x_i}\vec{\Psi}=i \kappa \vec{\Psi}.

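A one-component toy example (entirely hypothetical; the plane-wave phase and the Gaussian are chosen by me only for illustration, and I use a single coordinate per label for simplicity) makes the equivalence easy to check numerically: a Psi whose partials sum to i kappa Psi yields a probability whose partials sum to zero:

```python
import cmath

KAPPA = 0.8

def psi(x1, x2):
    # Hypothetical one-component Psi: a shift-invariant magnitude times a
    # plane-wave phase (half the phase on each label, so the partials sum
    # to i*KAPPA*psi).
    return cmath.exp(1j * KAPPA * (x1 + x2) / 2) * cmath.exp(-(x1 - x2)**2)

def partial(f, args, i, h=1e-6):
    """Central-difference partial derivative of f with respect to args[i]."""
    a = list(args)
    a[i] += h;  hi = f(*a)
    a[i] -= 2*h; lo = f(*a)
    return (hi - lo) / (2*h)

args = (0.4, -0.2)
sum_dpsi = sum(partial(psi, args, i) for i in range(2))
# Constraint on Psi: the sum of its partials equals i*kappa*Psi ...
print(abs(sum_dpsi - 1j * KAPPA * psi(*args)) < 1e-5)  # True
# ... which makes the sum of the partials of P = |Psi|^2 vanish.
P = lambda x1, x2: abs(psi(x1, x2))**2
print(abs(sum(partial(P, args, i) for i in range(2))) < 1e-5)  # True
```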
By adding a very simple relationship to the above constraints (and adding some trivial notation), it turns out that we can write a single equation which expresses exactly the constraints so far discussed. The simple relationship involves defining a set of anticommuting entities quite analogous to Pauli spinors (google Pauli if you want to know through what path these entities came to be conceived). What is important is the issue of anti-commutation, and the possibility of a consistent definition of such a thing yields some powerful mathematical operations. The commutation rule of ordinary mathematics says that, under multiplication, ab = ba. In a discussion of “anti-commutation”, one generally defines the following notation: [a,b] stands for the operation (ab + ba). Using that notation, we can define the following anti-commuting entities:

[\alpha_{ix},\alpha_{jx}]\equiv \alpha_{ix}\alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}

[\alpha_{i\tau},\alpha_{j\tau}]\equiv \alpha_{i\tau}\alpha_{j\tau} + \alpha_{j\tau}\alpha_{i\tau} = \delta_{ij}

[\beta_{ij},\beta_{kl}]\equiv \beta_{ij}\beta_{kl} + \beta_{kl}\beta_{ij} = \delta_{ik}\delta_{jl}

[\alpha_{ix},\beta_{kl}] = [\alpha_{i\tau},\beta_{kl}] = 0

where \delta_{ij} is zero if i is different from j and one if i=j.
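Such anticommuting entities can be realized concretely. For a single pair of indices, the 2x2 Pauli matrices divided by the square root of two satisfy exactly the quoted rule (this is only a minimal sketch of my own; a full realization of all the alphas and betas requires larger matrices):

```python
import math

# 2x2 complex matrices as nested lists; enough to check anticommutation.
def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def madd(A, B):
    return [[A[r][c] + B[r][c] for c in range(2)] for r in range(2)]

def anti(A, B):
    """The anticommutator [A, B] = AB + BA."""
    return madd(matmul(A, B), matmul(B, A))

def close(A, B):
    return all(abs(A[r][c] - B[r][c]) < 1e-12 for r in range(2) for c in range(2))

s = 1 / math.sqrt(2)
a1 = [[0, s], [s, 0]]             # sigma_x / sqrt(2)
a2 = [[0, -1j * s], [1j * s, 0]]  # sigma_y / sqrt(2)
I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]

print(close(anti(a1, a1), I))  # True: [a_1, a_1] = delta_11 (times the identity)
print(close(anti(a2, a2), I))  # True: [a_2, a_2] = delta_22
print(close(anti(a1, a2), Z))  # True: [a_1, a_2] = 0 since 1 != 2
```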

Finally, introducing the common vector notation that

\vec{\alpha}_i = \alpha_{ix}\hat{x}+\alpha_{i \tau}\hat{\tau}

and

\vec{\nabla}_i = \frac{\partial}{\partial x_i}\hat{x} + \frac{\partial}{\partial \tau_i}\hat{\tau}

one may write all the constraints we have discussed in a very simple form.

If one sets the additional constraint on the universe (i.e., if the solution covers the entire universe),

\sum_i \vec{\alpha}_i \equiv \sum_{ij}\beta_{ij} \equiv 0

then all solutions to the following equation will exactly satisfy the differential constraints we have deduced to be necessary to our mathematical representation of any explanation and, secondly, every mathematical function which satisfies the constraints we have deduced can be mapped directly into a solution to that equation. Thus it is that the following equation embodies the most fundamental constraints on any mathematical expression of any explanation of anything. That is, we may state unequivocally that it is absolutely necessary that any algorithm which is capable of yielding the correct probability for observing any given pattern of data in any conceivable universe must obey the following relation:

\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j} \beta_{ij} \delta(\vec{x}_i - \vec{x}_j) \right\}\vec{\Psi}= K \frac{\partial}{\partial t}\vec{\Psi} = iKm \vec{\Psi}

where the vector x sub i specifies the x tau label of the ith ontological element the explanation presumes to exist at the time t.

Absolutely do not worry about solving that equation, it is not a trivial endeavor. I had deduced the fact that that equation had to be valid when I was a mere graduate student. At the time, I felt very strongly that a solution would be a valuable thing to find but, for something like ten years, I had not managed to drag out a single solution. In the late seventies, I saw a viable attack and solutions have been rolling out ever since. I can now show that ninety percent of modern physics is no more than an approximation to solutions to that equation and I suspect it is not one hundred percent merely because modern physics contains some subtle errors not yet recognized by the authorities.
AnssiH said:
Good thing I'm not the only slow one here :)
Everybody is slow when they are not sure what should be done.

At this point, there are three paths open to us. One, we could spend some time discussing anything underlying my deduction which seems shaky to you; two, I could show the details of those solutions I spoke of; or three, we could talk about the philosophical implications of my discovery. Personally, I would like the third; however, that path would require a certain acceptance of my assertion that the second is an accurate representation of the facts. The problem with actually pursuing the second is that it is not at all trivial and requires a good understanding of mathematics (it could take a good length of time, particularly for someone unfamiliar with partial differential equations of many variables). That is a kind of comprehension seldom found among professionals trained and indoctrinated in the common plug-and-play physics typical of the field. I leave the decision up to you but I think it should be on a new thread. If you would start such a thread, I would be happy to post to it. Hopefully there are others who are following us, though don't be surprised if there aren't.

It's been a lot of fun and I think my presentations are much improved over what I did years ago. Thank you for your attention.

Have fun -- Dick

PS Thank you to whoever fixed the LaTex implementation. Being able to edit the LaTex in the preview saves a lot of time.
 
  • #472
Dr. Dick,
I have a question. Looking at this part of your final equation:
K \frac{\partial}{\partial t}\vec{\Psi} = iKm \vec{\Psi}

Do you see any application to the thinking of David Bohm--that is, a type of fundamental duality to reality where:
K \frac{\partial}{\partial t}\vec{\Psi}
represents the "explicate order" of Bohm (e.g., the universe as we see it)
while:
iKm \vec{\Psi}
represents the "implicate order" of Bohm, (e.g., the veiled underlying order that governs the universe) ?
 
  • #473
Rade said:
...represents the "implicate order" of Bohm, (e.g., the veiled underlying order that governs the universe) ?
Again you make it quite clear that you did not follow my presentation. My equation says absolutely nothing about reality. It speaks entirely to the problem of interpreting reality. My source data is taken to be explicitly uncorrelated in any manner (the “what is”, is “what is” information table). What I show is that absolutely any flaw-free explanation of anything, through the presumption of implied ontological elements (and there are presumptions made unconsciously in any attempt to understand anything), can always be interpreted in a manner such that it will obey my fundamental equation.

It follows that “obeying that equation” is a consequence of the internal consistency of that explanation and absolutely nothing else. It is, by construction, a tautology, and the fact that all modern physics appears to be no more than a collection of solutions to that equation implies that modern physics is itself a very complex tautology in exactly the same sense that the old religious explanations of reality (the gods did it) were tautological explanations of reality.

Prior to Newton, everyone worked on those “celestial spheres” which controlled the motions of heavenly bodies. After all, if they didn't exist, the moon would just fall to the ground (something has to be keeping it up there). Newton was the first man to examine exactly what it would look like if there were nothing holding the moon up there – lo and behold – he discovered that it would look just like it does: “the moon is just continually falling around the earth”. What I have done is shown something quite analogous to his discovery of gravity, only what I have done is applicable to the whole of scientific investigation.

By the way, I think it would be quite worthwhile to show students how Newton's examination of a falling moon walks one right into his theory of gravity. If anyone expresses an interest, I will lay it out for them.
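For anyone who wants a preview, the falling-moon arithmetic can be sketched in a few lines of Python (a rough check using standard textbook values, not the full derivation offered above):

```python
import math

# Newton's "falling moon": the moon's centripetal acceleration should
# equal the earth's surface gravity diluted by the inverse-square law.
r_moon = 3.844e8        # mean orbital radius of the moon (m)
T = 27.322 * 86400      # sidereal month (s)
R_earth = 6.371e6       # mean radius of the earth (m)
g = 9.81                # surface gravity (m/s^2)

a_orbit = 4 * math.pi**2 * r_moon / T**2   # acceleration needed for the orbit
a_gravity = g * (R_earth / r_moon)**2      # g scaled by 1/r^2

print(a_orbit)    # ~0.00272 m/s^2
print(a_gravity)  # ~0.00270 m/s^2
# In one second the moon "falls" roughly a_orbit/2, about 1.4 mm, toward
# the earth: exactly enough to stay on its (nearly) circular path.
```

The two accelerations agree to about a percent, which is the whole point of Newton's argument.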

Have fun -- Dick
 
  • #474
Doctordick said:
My equation says absolutely nothing about reality. It speaks entirely to the problem of interpreting reality

Good gravy--do you not see the contradiction in your words? You cannot on the one hand say that your equation "says nothing about reality" ("absolutely", even, you say), and then on the other hand claim "it speaks to interpreting reality". Well, good Dr., when you say you "interpret reality" you most clearly do say "some" "thing" about reality.

I am very sorry I tried a civil attempt at communication with you, it is clear you have absolutely no idea what I was asking in my question about Bohm.
 
  • #475
Rade said:
Good gravy--do you not see the contradiction in your words? You cannot on the one hand say that your equation "says nothing about reality" ("absolutely", even, you say), and then on the other hand claim "it speaks to interpreting reality". Well, good Dr., when you say you "interpret reality" you most clearly do say "some" "thing" about reality.

I am very sorry I tried a civil attempt at communication with you, it is clear you have absolutely no idea what I was asking in my question about Bohm.
I am sorry I have upset you; that was not my intention. You simply have no idea of the difference between an explanation and the constraints on such; they are actually rather different concepts.

Have fun -- Dick
 
  • #476
Hello, finally have had time to concentrate on your post properly. Actually started yesterday but I've just been going back to the older posts to get a better grasp of this.

Doctordick said:
At this point, there are three paths open to us. One, we could spend some time discussing anything underlying my deduction which seems shaky to you; two, I could show the details of those solutions I spoke of; or three, we could talk about the philosophical implications of my discovery.

We need to stick with option #1 for a while. Although, it could be beneficial to hear about your philosophical interpretation because that ought to be closer to my mode of thinking, and so it could help me in grasping some of the mathematical details.

Anyway, reading the old posts carefully again, I found answers to many of the things I had been wondering about, but there were still a few things that I couldn't figure out for sure.

Actually, let me get back to that older quote about recovering missing indices. I don't know if the answers are supposed to be obvious to me but they are not :) Hopefully you can pick up what I am missing.

Doctordick said:
This means that the missing index can be seen as a function of the other indices. Again, we may not know what that function is but we do know that the function must agree with our table. What this says is that there exists a mathematical function which will yield

(x,\tau)_n(t) = f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t)

I.e. when we are missing just one entry from some specific B, there is a function that will tell us what that missing entry is.

A partially filled "what is, is what is"-table must be part of that function, right? Just one B alone cannot be enough data to tell us what some missing index is supposed to be?

Is this valid only when there is only 1 missing index, or is it valid for larger number of missing indices?

It follows that the function F defined by

F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n) = (x(t),\tau(t))_n - f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t) = 0​

is a statement of the general constraint which guarantees that the entries conform to the given table. That is to say, this procedure yields a result which guarantees that there exists a mathematical function, the roots of which are exactly the entries to our "what is", is "what is" table. Clearly, it would be nice to know the structure of that function.

I took it on faith that the above expression "guarantees that there exists a mathematical function, the roots of which are exactly the entries...", but I don't fully grasp what that expression says. There is a function F whose input is some set of x & tau indices. Of a specific B? I don't understand why it is equal to (x(t),\tau(t))_n - f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t)

The part that I thought I understood is that it would be possible to recover one missing index from a specific B, if we had a function that gave "0" with the input of the correct (full) set of indices of that B. So we could just test which index gave a 0. That was the idea with this?

About the use of Dirac delta function here;
Clearly, since it is exactly zero everywhere except when x=0, it must be positive infinity at x=0. It is that property which makes it so valuable as a universal F=0 function. First, it is a very simple function and is quite well defined and well understood. Second, as it is only positive, the sum indicated below will be infinite if any two labels are identical (have exactly the same x, tau numerical label).

\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0,​

It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated.

I suppose the expression essentially means we take a specific B, and every X in it is compared with every other X and every tau with every other tau, so that we'll see if any of them are the same. Or in other words, we are simply labeling every entry as unique? What I am missing is why we need a Dirac delta function to make every single entry unique. Or does this point rather have something to do with the general role of something like the Dirac delta function?

Hmm, I definitely think your philosophical interpretation would make it easier for me to see what is truly essential about the mathematical expressions. For example...

As I said, it's really not as insane as it sounds. Stop and think about vacuum polarization: i.e., the problems with conceiving of the vacuum as “absolutely empty” thing, impossible to interact with. The existence of a “pure” vacuum in the sense originally put forth by scientists seems very much to be in conflict with modern physics; if there is no such thing as an “empty spot” doesn't that imply every location is full of something?

...that makes perfect sense to me.

Well, it's getting late again and I need to get to the rest of the post (that "fundamental equation") more sometime soon. But in the meantime:

I further showed how viewing that probability as a square of some function (the vector dot product) provided a valuable consequence: i.e., I introduced a mechanism for guaranteeing that the constraints embodied in the concept of probability need no longer be extraneous constraints. Under my representation, they are instead embodied in the representation without constraining the remaining possibilities in any way! This is the central issue behind the representation

P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}^{\dagger}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\cdot\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)dV

Note that the "\dagger” is there solely to bring the representation closer to the common Schrödinger representation of quantum mechanics: i.e., allowing the components of that indicated vector to be “complex” is essentially adding nothing which could not just as easily be represented by twice as many “real” components in the vector nature of \vec{\Psi}. The fact that the number of components must be even is of no account at all when seen from the perspective of the availability of invalid ontological elements (if that really needs clarification, I will clarify it).

Yeah I think some things need clarification at least. I don't know what the \dagger means. I am not familiar with Schrödinger representation (as I am not familiar with mathematical representation of much of anything :)

Does the \Psi symbol simply mean any function? (whose results we will take as the components of a vector)

I may have forgotten something but, why does the number of components have to be even?

I couldn't figure out what dV means either.

I should be faster to reply for a while, although a couple of weeks from now I'll be away for a week again due to visiting San Diego. Thank you for your patience :)

-Anssi
 
  • #477
Hi Anssi, it's nice to have you back. I sure have missed your posts. Your knowledge of mathematics may be limited but that can change; your mind is like a breath of fresh air. I used to put down a signature quote: “Knowledge is Power; but all power can be abused. The most popular abuse of the power of knowledge is to use it to hide stupidity.” It does not apply to you. Education can be a stupefying experience and it is for many. Young minds are so often overwhelmed by their own ignorance that they begin to “believe” their professors and education turns into faith. One must have faith in their own ability to think and always maintain a doubt of authority. I think you have kept that doubt.

I got your note Monday but was waiting for this post in order to get a grasp of what you were misunderstanding. As you have already noticed, I posted on that other forum you mentioned, pretty well for naught. People just don't seem to think; I was hoping for a little more than what I got. You are a very rare person in that you have a strong tendency to actually think things out for yourself. I think all you really lack is a good understanding of mathematics but we can cover that (though it may not be a quick thing).

Meanwhile, let's get to your questions in this post: i.e., stick with option #1 until everything is clear (I don't intend for you to take anything on faith as it is all actually quite simple once you actually see what I am doing). We can worry about philosophical interpretation after you understand what I am saying.
AnssiH said:
Actually, let me get back to that older quote about recovering missing indices. I don't know if the answers are supposed to be obvious to me but they are not :) Hopefully you can pick up what I am missing.
I think we need to go back to that post where I first began adding “invalid ontological elements”. The fact that we can add these invalid ontological elements gives us the power to organize or represent that ”what is”, is “what is” table in a form which allows for easy deduction. In that post, I said I wanted to add three different kinds of “invalid ontological elements”, each to serve a particular purpose. You need to understand exactly why those elements are being added and how the addition achieves the result desired.

The first addition is quite simple. As I said, that ”what is”, is “what is” table can be seen as a list of numbers for each present which specify (or refer to) exactly what “valid ontological elements” went to make up our past at each defined time (what we know being “the past”). The output of our probability function (which defines what we think we know) is either zero or one depending upon whether a specific number is in that list or not. Viewed as a mathematical function, it is a rather strange function in that the number of arguments (the number of valid ontological elements associated with a given t) can vary all over the place (there is no fundamental constraint on our change in knowledge: i.e., the amount of information in a given “present”). That is somewhat inconvenient (at least from the perspective of the “language” of mathematics) so we add “invalid ontological elements” sufficient to make the number of arguments the same in each and every case defined by a specific t.

What you need to do is comprehend that we are dealing with two rather different issues here. First there is that collection of “valid ontological elements” underlying our world view (you can think of this as a basic, undefined, ”what is”, is “what is” table in your left hand) and, secondly, there is that epistemological solution which is our world-view itself. That world view (and that would be any explicitly defined explanation) includes the assumption of certain “invalid ontological elements” necessary to that epistemological solution. Thus that “defined” representation must include those “invalid” elements (you can think of this as a second, explicitly defined, ”what is”, is “what is” table in your right hand). What I am going to do is add some rather arbitrary “invalid ontological elements” to that second table. You should certainly ask, how do I justify these specific additions?

Certainly someone might come up with an explanation which didn't require these, right? The answer is, of course, yes! However, when he (or she) goes to explain their explanation, it is my problem to understand that explanation. As they proceed with communicating their explanation I would certainly make some assumptions about what they were trying to tell me. These assumptions are not necessarily true, nor need they be part of the actual communications: i.e., they amount to presumed invalid ontological elements on my part. What I am laying out is, I think, some very useful analytic assumptions: i.e., “invalid ontological elements” which make that communication understandable to me. I have to build a world-view in my own head and that world view has to be logically coherent; I cannot do that without making assumptions.

Just as an aside, from a philosophical perspective, that first addition (making the number of ontological elements the same for all B(t)) is essentially presuming these valid ontological elements exist even when we are not directly dealing with them. That is to say, the ordinary concept of “ontological elements” behind that epistemological construct is that they exist in the past, the present and the future. No one presumes they come and go (actually, there is a subtle point there which comes up in the solution possibilities with regard to explicitly invalid ontological elements, but that will come up later). Basically, I presume you understand the advantage of this first addition.

The second addition of invalid ontological elements was to make sure that “t” (the “time” index) could be extracted from the ”what is”, is “what is” table so that it could be a viable parameter usable in an explanation. That was done in the following manner. Anytime there existed two or more identical presents (in that specifically defined ”what is”, is “what is” table in your right hand), invalid ontological elements were added and given references sufficiently different to make those presents different. At the time you expressed understanding of that procedure.

This step can be justified from a philosophical perspective. How could one present a world view where temporal behavior of entities was explained without being able to define clocks or calendars? That is, those clocks and calendars need to be part of that underlying ontology.

What that second step also provided was a method of defining a specific index via addition of invalid ontological elements. What was important was that the augmented ”what is”, is “what is” table in yielding a different present for every t allowed us to recover t if we were given a specific present (i.e., the specific entries going to make up that B(t)). Given that set of ontological elements, how do we recover t? Very simply; we look at the augmented ”what is”, is “what is” table and find the specific entry. There can only be one such entry and that entry will include the t index we wish to know. Thus it is that we can say that t is a function of the elements going to make up B(t).

That brings us to the third addition of “invalid ontological elements”. The mechanism just described for establishing a unique t index can just as easily be used to establish a specific reference index within that B(t). All one need do is remove (or ignore) a specific elemental index in that ”what is”, is “what is” table and jot down all the remaining elements. Now examine the entire ”what is”, is “what is” table and determine if the set which was jotted down appears anywhere else in the table: i.e., exists in any other present when a single element is removed. In any case where these references appear a second time, one can add invalid ontological elements with different reference indices such that the augmented table will not contain that duplication.

Just as occurred with the t index, if I am given all but one of the reference indexes in a present, I can recover the correct index for the missing element. Again, the process is very simple: we look at the newly augmented table and find the specific entry which has that collection of elements and read off the missing element. The augmentation process can be continued until any index can be so recovered if the entire collection of remaining indices are known. This is exactly the same mechanism which made the t index recoverable.
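The lookup procedure just described can be sketched in a few lines of Python (a toy illustration of my own; the table entries are invented and the augmentation is assumed already done):

```python
# A toy "what is, is what is" table: each present B(t) is a set of
# numerical labels, augmented so that no present repeats and no present
# minus one element can be confused with any other present.
table = {
    0: {1, 2, 3},
    1: {1, 2, 4},
    2: {2, 5, 6},
}

def recover_t(elements):
    """Recover t by finding the unique present with exactly these labels."""
    for t, present in table.items():
        if present == set(elements):
            return t
    return None

def recover_missing(t, known):
    """Given all but one label of B(t), read off the missing one."""
    missing = table[t] - set(known)
    assert len(missing) == 1, "augmentation should make this unique"
    return missing.pop()

print(recover_t({2, 5, 6}))        # -> 2
print(recover_missing(1, {1, 2}))  # -> 4
```

Both recoveries are nothing but table lookups, which is all that is needed to prove the functions exist.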

From that rather extensive augmented ”what is”, is “what is” table, I can always recover any missing index. Since “a function” is a method of obtaining a result from specific information, this proves that “a function” exists. (In actual fact, since in the final analysis this amounts to a fitting problem on a finite set of points, there exist an infinite number of mathematical functions which will serve the purpose of recovery.) What I have just proved is that it is always possible to conceive of “invalid ontological elements” such that the function “f” exists where

\vec{(x,\tau)}_n= x_n\hat{x}+\tau_n\hat{\tau} = \vec{f}((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_{n-1})

Notice that, this time, I have shown f as a vector function (its result is a vector pointing to a point in the x, tau space which constitutes the missing point, (x,\tau)_n).

You should understand that, if two things are equal, their difference is zero. Certainly, if that is the case, then one can define the function “F” to be exactly the difference between the point representing the missing index and the result of the vector function which yields that point,

F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n)= \vec{(x,\tau)}_n - \vec{f}((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_{n-1})\equiv 0.

where the x, tau arguments are the relevant numerical references in the ”what is”, is “what is” table. (Sorry about being sloppy with my notation earlier regarding the vector picture.)

Notice that I have removed the “t” which was in the earlier representation. (It really shouldn't have been there.) If you examine the argument above carefully, it should be evident that there need not be any dependence on t: i.e., it is possible to add enough invalid ontological reference indices such that no repeat exists anywhere in the table.
AnssiH said:
Is this valid only when there is only 1 missing index, or is it valid for larger number of missing indices?
One could continue the process of adding “invalid ontological elements” in order to define a function which would yield two missing indices but I see no purpose to such an extension. My purpose was to prove that one could always achieve a circumstance (by adding invalid ontological elements) such that the rule which determined what reference numbers existed in the ”what is”, is “what is” table consisted of “those entries are the roots of the function F”: i.e., the rule can be written as

F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n)= 0,

a rather simple expression as rules go! Note that the rule is not a function of t; a seriously important fact. (I apologize again for my earlier oversight.) Philosophically speaking, this is nice as it means that the rule does not change from day to day; a rather significant fact.
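As a toy illustration of how that lookup turns into a rule of the form F = 0 (the table below is invented for the sketch; a real table would be astronomically larger):

```python
# f recovers the last (x, tau) label from the preceding ones by table
# lookup; F is simply "candidate minus what f returns", so the allowed
# entries are exactly the roots of F.
table = {((1, 1), (2, 1)): (3, 2)}

def f(known):
    return table[tuple(known)]

def F(labels):
    *known, last = labels
    fx, ftau = f(known)
    return (last[0] - fx, last[1] - ftau)

print(F([(1, 1), (2, 1), (3, 2)]))  # -> (0, 0): a root, an allowed entry
print(F([(1, 1), (2, 1), (3, 5)]))  # -> (0, 3): not a root, not allowed
```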
AnssiH said:
I took it on faith that the above expression "guarantees that there exists a mathematical function, the roots of which are exactly the entries...", but I don't fully grasp what that expression says.
It says that the only acceptable reference numbers for the ”what is”, is “what is” table are roots of some function “F”. Or rather, that there always exists a collection of “invalid ontological elements” such that the rule as to what reference numbers can be seen in that table is given by the solutions to some equation expressed in the form F=0.
AnssiH said:
The part that I thought I understood is that it would be possible to recover one missing index from a specific B, if we had a function that gave "0" with the input of the correct (full) set of indices of that B. So we could just test which index gave a 0. That was the idea with this?
In a sense you are right; but the issue is not really to test the function F, as we do not have it. Before you can actually have that function, you have to have the solution to the problem. That is, F cannot be defined until the epistemological construct which explains that ”what is”, is “what is” table is known (it is that explanation which specifies those numerical references). What is important here is that, if I am given a set of “valid ontological elements”, there always exists a set of “invalid ontological elements” which together with a rule F=0 will yield exactly those “valid ontological elements” (along with those presumed “invalid ontological elements”). That is, it is always possible to construct a flaw-free epistemological construct where the only rule is “F=0” and the entire problem is reduced to “what exists”. This is a much simpler problem than being confronted with two apparently different issues to solve: “What exists?” and “What are the rules?”.
AnssiH said:
I suppose the expression essentially means we take a specific B, and every X in it is compared with every other X and every tau with every other tau, so that we'll see if any of them are the same. Or in other words, we are simply labeling every entry as unique?
You appear to understand what I am saying; however, it is possible that you are stepping off trying to construct an epistemological solution which conforms to the circumstance I have laid out. That, you shouldn't be trying to do. Remember, what I have laid out must be capable of representing all possible epistemological constructs. That is a pretty extensive field and it would be a mistake to presume that simple answers exist. I have proved that the procedure I described could be accomplished in principle since the number of elements being referred to is finite; however, their number could easily exceed any mechanical equipment we might envisage to carry out such a procedure. I certainly have not proved any such thing could actually be done in one's lifetime, even with the simplest problem. All I have shown is that the process can be done “in principle”.
AnssiH said:
What I am missing is why we need a Dirac delta function to make every single entry unique.
First of all, the Dirac delta function does not make every single entry unique; all it does is yield an infinite result when any two are the same. It should be clear that, if there exists a finite set of “invalid ontological elements” which will make the rule “F=0” yield both the “valid ontological elements” and those we added (providing us with that flaw-free epistemological solution), we can certainly add a bunch more without bothering that solution. All we need do is recognize them as “presumed” and not necessarily part of that valid ”what is”, is “what is” table.
Doctordick said:
It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated.
That seems to me to be a pretty straightforward issue. The only real problem is that the number of references has now gone to infinity and we can no longer argue things from a “finite” perspective. That introduces some subtle problems which require additional mathematics to handle. Other than that, I think my statement is rather incontrovertible.
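A discrete stand-in for that constraint is easy to sketch in Python (a Kronecker delta replaces Dirac's, so “infinite” becomes “greater than zero”; the labels are invented for the sketch):

```python
def kron(a, b):
    """Kronecker delta: 1 when the arguments match, else 0."""
    return 1 if a == b else 0

def constraint(labels):
    """Sum over i != j of kron(x_i, x_j) * kron(tau_i, tau_j)."""
    total = 0
    for i, (xi, taui) in enumerate(labels):
        for j, (xj, tauj) in enumerate(labels):
            if i != j:
                total += kron(xi, xj) * kron(taui, tauj)
    return total

print(constraint([(1, 1), (2, 1), (1, 2)]))  # -> 0: all labels distinct
print(constraint([(1, 1), (2, 1), (1, 1)]))  # -> 2: a duplicated label
```

The sum vanishes exactly when every (x, tau) label is distinct, which is the content of the delta-function rule.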

Apparently I have exceeded the allowed size of a post and the system will not accept it. I will continue with a second post.

Sorry about that -- Dick
 
  • #478
Part II, answer to Anssi.

Back again! This is a continuation of the post above.
AnssiH said:
Yeah I think some things need clarification at least. I don't know what the \dagger means. I am not familiar with Schrödinger representation (as I am not familiar with mathematical representation of much of anything :)
Let me start with the relationship between Psi and our probability. The issue is the fact that probability is defined to be bounded by zero and one. As a function, that makes P a rather special function. Note that, in my presentation, I don't want to place any limitations on the possibilities at all. It follows that I need to work with a totally unconstrained function: i.e., the solution to our problem must be left to be ANYTHING. Now, “any mathematical function” is a pretty obvious entity: its arguments are a collection of numbers and its output is a collection of numbers. A “mathematical function” is a method of getting from the first to the second, “PERIOD”, no other constraints! If we are to include all possibilities, that is about all we can say about the solution to our problem, the possible epistemological construct.

What I am pointing out with my definition of P here is that absolutely any function can be converted into a form which can be seen as a probability. It can be converted into a positive definite number by squaring all those output values and adding them up. It can then be made to be bounded by zero and one by dividing it by a number equal to the sum of all possible outcomes. (There are some subtleties here related to problems with infinity which I will discuss if you wish; however, for the moment, let's just say that the required division is always possible if it is needed.) The standard mathematical notation for the act of squaring those output values and adding them up is to represent the output of the function as an n-dimensional vector. In that case, performing a dot product of that vector with itself constitutes exactly the process of squaring all the components (the output values) and adding them up: i.e., \vec{\Psi}\cdot\vec{\Psi}.

The “dagger” has to do with a thing called the “complex conjugate”. Apparently, from the posts I have seen and the comments I have gotten from modern physicists, no one uses Erwin Schrödinger's original notation any more. (Dirac showed that Heisenberg's “matrix mechanics” and Schrödinger's “wave mechanics” were mathematically equivalent, and later introduced the “bra-ket” notation which seems to be the standard now.) I prefer Schrödinger's original notation as it can be directly derived from my attack. (The issue of notation is little more than a mathematical formality though different notation does bring different issues to the forefront.) What you should take note of is the fact that modern quantum mechanics, as seen by the academy (the religious authority of modern physics), is not derived from fundamental concepts but is rather put forth in axiomatic form, and derivation of the relationships from more fundamental analysis is really of no interest to them.

In Schrödinger's equation, Psi is taken to be a vector with complex components (if “i” is the square root of minus one then an arbitrary complex number can be written as a+bi). If the components of the vector Psi are complex, then the simple squaring does not yield a positive definite number: (a+bi)(a+bi)= a(a+bi) +bi(a+bi) = aa+abi+bia+bibi = aa-bb+2abi which just isn't positive definite. Instead, it is necessary to define what used to be called the complex conjugate: (a+bi)^\dagger = (a-bi). Then (a+bi)^\dagger (a+bi) = (a-bi)(a+bi) = aa +bb. So all the dagger means is that the result is to be transformed to its complex conjugate; each and every result of applying the function Psi to its arguments (every component of that abstract vector) is changed to its complex conjugate. This is simply a method of guaranteeing that the probability calculation represented by

P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}^{\dagger}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\cdot\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)dV

is a positive definite quantity.
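A minimal numerical sketch of why the dagger is needed (the two complex components below are arbitrary):

```python
# Psi as a two-component complex vector: the plain square of the vector
# is not positive definite, but Psi^dagger . Psi always is.
psi = [complex(0.6, 0.3), complex(-0.2, 0.5)]

plain_square = sum(c * c for c in psi)            # a complex number
prob = sum(c.conjugate() * c for c in psi).real   # Psi^dagger . Psi

print(plain_square)  # complex -- useless as a probability
print(prob)          # ~0.74, the sum of aa + bb over the components
```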

I laid all that out because I want to work with the Psi function. If the probability function exists (and it certainly does if our epistemological construct will yield expectations for those collections of ontological elements B(t)) then so does Psi (worst case scenario, Psi is just the square root of P). What I want to do is examine the possibilities for Psi. With the “invalid ontological elements” I introduced to make that sum over Dirac's delta function become the F function I needed, I know that, whenever I have the correct set of numerical references to my ontological elements,

\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0,

If I don't, then that sum is infinite! Against this, I also know that, if I have an incorrect set,

\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0

as the probability of seeing that particular set of references must be zero and the probability is the sum of the positive definite squares of the components of Psi (that means that every one of those components must be zero and Psi must totally vanish). This means that no matter what arguments are inserted as numerical references to that collection of ontological elements, the product of those two above must be zero (if one isn't zero, the other is). It follows that

\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = 0,

without exception.
AnssiH said:
I may have forgotten something but, why does the number of components have to be even?
If we go to representing the components of the vector function Psi as complex numbers, it is completely equivalent to using two components for each normally real component so, in a sense, we are limiting our consideration to functions with an even number of components. This isn't really troublesome as, if the correct answer turns out to be a function with an odd number of components, it can just as well be seen as a function of an even number where one of the components is always zero. All this move really does is make the notation appear to be similar to Schrödinger's.
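That equivalence is a one-liner to check (the components below are arbitrary):

```python
# An n-component complex vector carries the same information as a
# 2n-component real vector, and Psi^dagger . Psi equals the ordinary
# dot product of the doubled real vector with itself.
psi = [complex(0.6, 0.3), complex(-0.2, 0.5)]

doubled = []
for c in psi:
    doubled.extend([c.real, c.imag])   # (a + bi) -> (a, b)

p_complex = sum(c.conjugate() * c for c in psi).real
p_real = sum(v * v for v in doubled)

print(p_complex, p_real)  # both ~0.74
```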

As far as the dV is concerned, when I introduced the idea of adding an infinite number of ontological elements, I brought the total of all possibilities to an infinite number of combinations. That pretty well assures us that the probability for any single collection will be zero. Essentially that tells us that we are dealing with probability density here and not directly with probability itself. Another way to look at it is to understand that the sums over all possibilities (in order to determine the factor we need to divide by) has now transformed into an integral over a continuous variable. The probability then depends upon how large a region of that continuous variable we are considering.

dV = dx_1d\tau_1dx_2d\tau_2\cdots dx_nd\tau_n \cdots

Sometimes the notation gets complex; we are talking about a lot of variables here; remember, this solution represented by Psi explains everything about the entire universe.
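To make the probability-density reading explicit, the two paragraphs above can be restated in one place (this is merely a restatement of what is described there, not a new result):

```latex
% Probability of the references falling within a region V of the
% continuous argument space is the integral of the squared
% magnitude of \Psi over that region:
P(V,t) = \int_V \vec{\Psi}^{\dagger}\cdot\vec{\Psi}\; dV,
\qquad dV = dx_1\, d\tau_1\, dx_2\, d\tau_2 \cdots dx_n\, d\tau_n \cdots
% "Summing over all possibilities" becomes integrating over the
% whole space, which fixes the overall scale of \Psi:
\int \vec{\Psi}^{\dagger}\cdot\vec{\Psi}\; dV = 1.
```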

Back to the issues of philosophy here. Several come to mind. First I would like to go back to your comment about recovering a specific index from that “what is, is what is” table given that you know all the indices except one. In the set up I have described, “the rule” allows one to recover that index if all the other indices are known. What does that amount to? Given a world view consistent with the flaw-free epistemological solution we are looking at, it says that, if we know the entire rest of the universe in detail for all times under consideration, the rule will tell us exactly what that reference number must be for the missing index as a function of time.

This is surprisingly similar to a common presumption of modern science. Take a careful look at exactly what modern science says about the outcome of an experiment. In essence they hold that, if we know the entire description of an experiment (all significant details: i.e., ignoring what is insignificant) and the rules governing the universe, we know what the result of the experiment will be. They make the assumption that this is a fact whereas I have constructed my representation (via additions of invalid ontological elements) so that the same issue is a fact and not an assumption. The same conclusion is reached but the defense is subtly different.

A second issue also arises here. In establishing my Dirac delta function rule, I pointed out that it was absolutely correct via the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. There is a very important additional consequence of that procedure. Suppose we don't know that a specific answer is wrong (one of those numbers might actually represent a valid ontological element). All that need happen is that we do not add an “invalid ontological element” to cover that case. The consequence of not adding that element is that it leaves openings in that universal cover. What that does is allow the reference to a valid ontological element to have more than one value. Essentially, it introduces uncertainty to the resultant world view.

Now this uncertainty is accepted as a definite component of modern physics, but my approach is concerned with “valid ontological elements”, something philosophers do not consider to be “variables” subject to uncertainty. Can you really say that their position is defensible? You should note that they give their arguments in an inexact language under the presumption that the words they use have a definite meaning. I personally think that is a rather poor assumption. Words change meanings all the time and, from a historical perspective, they are almost as dynamic as the molecules that make up our physical environment. Why else would ancient languages differ so much from our own?

What I am saying is that what I am doing has applications far beyond what is currently regarded as physics. It adds a whole multiplicity of dynamic relationships to the study of analytical science. Remember that, prior to the work of Franklin, Ampère, Oersted, Volta, Coulomb, Faraday and others, electricity and magnetism were simply not considered to be analytically accessible phenomena.

I know your mathematics training is limited but you should consider what Feynman once said, “mathematics is the distilled essence of logic”. The problem with conventional logic is that it can only span a few million steps at best and can only be extended to the range needed to understand the universe through abstract mechanisms with powers well beyond what we can hold in our heads; mathematics is absolutely essential to understanding the universe.

Have fun -- Dick
 
  • #479
Dr. Dick,

In response to a comment you made in post #478 above, I started a thread in the quantum theory section of the forum, and I see that you will have to provide clarification of your thoughts. I think this is a good opportunity for you to interact with professional physicists about your philosophy here presented--see here if you have an interest:

https://www.physicsforums.com/showthread.php?t=178555
 
  • #480
Rade said:
In response to a comment you made in post #478 above, I started a thread in the quantum theory section of the forum, and I see that you will have to provide clarification of your thoughts. I think this is a good opportunity for you to interact with professional physicists about your philosophy here presented--see here if you have an interest:
I have read the thread and their comments are pretty typical of physicists I have run across in the past. As far as interacting with professional physicists is concerned, I have done plenty of that in my lifetime. I have earned a Ph.D. in theoretical physics from a reputable university and had plenty of interactions with the academy during that period. At that time (the early sixties) the position of theoretical physicists was that the big problem was not understanding the universe (they already understood it all); the big problem was how to calculate solutions to their equations. As I have said somewhere else, Richard Feynman got a Nobel Prize for developing a notation for keeping track of terms in an expansion of an infinite series (which everyone believed to be correct). To quote Caltech themselves, http://pr.caltech.edu/events/caltech_nobel/ And I do not intend any insult to Richard in any way. In fact, I talked to him in '86 and he said he would like to follow my thoughts as soon as he finished with that NASA accident (he was the chairman of the investigating committee). Next thing I heard, he had died of cancer (I finally get an intelligent, educated person to talk to me and he ups and dies; just my luck).

At any rate, I was not interested in “crunching numbers” (the standard career of a theoretical physicist, at least back then); I was interested in the underlying basis of physics itself. So, I did not publish (I spent my time thinking instead). I had sufficient evidence of the academy's lack of interest in such things long before I got my Ph.D.
jostpuur said:
I'll put it this way: "Physicists are usually not interested in philosophy, they are interested in calculating." That is something that many will probably agree with, and if Doctordick is criticizing it, it is understandable, although I'm not convinced that he himself would be improving anything.
At least he finds my rebellion “understandable” though he clearly does not think my thoughts are worth thinking about.
country boy said:
But every physicist I know is interested in the possibility that QM and other aspects of modern physics might be derivable from more fundamental, as yet unrecognized, principles.
Yeah, sure they are interested; as long as it comes from a recognized authority and not a rebellious skeptic of their great accomplishments.
Hurkyl said:
... you run the risk of losing some of your audience if they have to do a lot of theoretical work before they can actually compute anything.
Yeah, there is a lot of truth to that all right. When it comes to serious thought, most people have an attention span of about two minutes. They want “simple minded” answers to their questions, not simple answers. One should recognize that Newton's theories are quite simple but they are not at all “simple minded”. There is a great difference between “simple” and “simple minded”.
Llewlyn said:
Please note that all physics is put in axiomatic form.
That is a succinct statement of the academy's position on the issue. As I have said many times, physicists say what I am doing is philosophy and they have no interest in it; philosophers say what I am doing is mathematics and they have no interest in it; and mathematicians say what I am doing is physics and they have no interest in it. All I am looking for is people who are interested in thinking; a very rare breed indeed.

You comment that I need to provide clarification of my thoughts. I think what you really mean is that I need a simple minded overview. Explaining the entire universe is not a simple minded thing. I have already provided much clarification to Anssi. Tell your friends to start with post #211 on this thread (my first response to Anssi) and then follow the conversation between Anssi and myself. I think they would find my thoughts quite clarified. But I doubt any of them would take the trouble.

Have fun -- Dick
 
  • #481
I didn't follow this incredibly long thread; I just jumped in now. Just reading Doctordick's last post, I can relate to what he says, but I still don't know what the discussion is about.

Before I even try to read all the posts, is the discussion here about the definition or interpretation of time, like the title suggests?

Any suggestions which post in this thread I should start reading to get an idea of Doctordick's ideas? I ask because, as often happens, threads start out as one thing and end up as something completely different.

/Fredrik
 
  • #482
  • #483
Whoa.. a lot of reading. Some comments along the way...

From http://home.jam.rr.com/dicksfiles/Explain/Explain.htm

Without going through all details I can directly relate to this

What I am saying is that understanding implies it is possible to predict expectations for information not known; the explanation constitutes a method which provides one with those rational expectations for unknown information consistent with what is known

This sounds very close to the general induction principles of optimal inference. If so, that is very much in line with my own thinking. When I want to understand reality, it basically means that I want to see how my view of things, and my generator of educated guesses, are induced from my current knowledge and experience, under the condition that I do not know everything, and I can't know everything. The reason I can't know everything at once is because my memory is too small, and the reason I can't compute everything instantly is because my computing power is too poor. Here comes a relation to time. This is my own thinking... and if Doctordick's ideas are anything close to this I think I'll find it interesting.

How does that relation sound to you DD?

I'll read on when I get more time

/Fredrik
 
Last edited by a moderator:
  • #484
I also associate here to Bayesian thinking, but instead of Bayesian probability, I'd like to call it Bayesian expectation, for the very reason that the true probabilities themselves can only be estimated.
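A minimal sketch of what I mean, assuming a conjugate Beta prior and made-up counts (my own illustration, nothing from this thread):

```python
# Toy sketch of a "Bayesian expectation": the true probability of an
# event is never known; from observed counts we only get an estimate
# plus a measure of how uncertain that estimate still is.
# (The Beta(1, 1) prior and the counts below are illustrative choices.)

def bayesian_expectation(successes, trials, prior_a=1.0, prior_b=1.0):
    """Posterior mean and variance of an unknown probability,
    using a conjugate Beta prior updated with the observed counts."""
    a = prior_a + successes
    b = prior_b + (trials - successes)
    mean = a / (a + b)
    variance = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, variance

m_few, v_few = bayesian_expectation(3, 4)        # 4 trials
m_many, v_many = bayesian_expectation(300, 400)  # 400 trials
print(v_few > v_many)  # → True: the small sample leaves more uncertainty
```

With few trials the posterior variance stays large; only more data lets the estimate sharpen, which is why I'd rather speak of an expectation of the probability than of the probability itself.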

/Fredrik
 
  • #485
OK, I am only on the first page yet, but a question to Doctordick: did you read the ideas of Ariel Caticha, based on optimal inference and entropy methods?

For example arXiv.org/abs/physics/0311093
more at http://www.albany.edu/physics/ariel_caticha.htm

A quote from his paper
The procedure we follow differs in one remarkable way from the manner that has in the past been followed in setting up physical theories. Normally one starts by establishing a mathematical formalism, setting up a set of equations, and then one tries to append an interpretation to it. This is a very difficult problem; historically it has affected not only statistics and statistical physics – what is the meaning of probabilities and of entropy – but also quantum theory – what is the meaning of wave functions and amplitudes. The issue of whether the proposed interpretation is unique, or even whether it is allowed, always remains a legitimate objection and a point of controversy.

Here we proceed in the opposite order, we first decide what we are talking about and what we want to accomplish, and only afterwards we design the appropriate mathematical formalism. The advantage is that the issue of meaning never arises.

/Fredrik
 
Last edited by a moderator:
  • #486
I found this page http://home.jam.rr.com/dicksfiles/reality/Contents.htm which I suspect is easier to read than this thread, as it looks more structured.

It seems the author tries to rethink from scratch, which is good. I take it the suggestions must be read in the context of his rethinking. I'll start and see if I understand you... some questions along the way on things that I "suspect" are key points to understanding the rest(?)...

The Foundations of Physical Reality said:
The issue of truth by definition rests on two very straight forward points:
(1.) we either agree on our definitions or communication is impossible and
(2.) no acceptable definition can contain internal contradictions.

What about the possibility that some definitions, along with other concepts are formed in the communication/interaction itself? And that mutual equilibration is evolving _due to_ communication?

For example, when you and I start to speak, by starting out with a small common relation we can build a larger common relation and set of "definitions"... but isn't that a process?

I'm not sure if I read you wrong here.
Comments?

/Fredrik
 
Last edited by a moderator:
  • #487
The Foundations of Physical Reality said:
Thus, the problem becomes one of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process.

I like your bold stance so far, but sometimes the tone is a bit aggressive towards the supposedly "simple minded", but maybe there is a reason for that :)

I received an image in my head of what you set out to do, to somehow try to find a foolproof starting point and work from there. You also note that

The Foundations of Physical Reality said:
As it is my intention to make no assumptions whatsoever, even the smallest assumption becomes a hole which could possibly sink the whole structure. As I do not claim perfection, errors certainly exist within this treatise. None the less, I claim the attack will be shown to be extremely powerful.

I think this is a key point, that I suspect I'll relate back to later on. In my thinking, stability and flexibility are what I consider to be factors of survival. A strategy that basically is "if I am right, I'll rule the world, and if I'm wrong I'll die" sounds like a high-risk strategy. It will be interesting to see how risk assessment is further handled.

In my thinking, the key goal is not some ultimate perfection, but optimal improvement/progression, which by construction is always changing and "in motion", and improvement of something also presumes its survival. I see it a bit like a game.

/Fredrik
 
  • #488
To return to the purpose of your tool...

The Foundations of Physical Reality said:
Thus, the problem becomes one of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process.

How do you picture an observer beeing exposed to this datastream? What happens when the observers memory is full, and runs out of memory for you constructions?

/Fredrik
 
  • #489
Please read a little of my conversation with Anssi!

Fra said:
How do you picture an observer being exposed to this data stream? What happens when the observers memory is full, and runs out of memory for your constructions?
(Excuse me for correcting your spelling; it's sort of a compulsion ingrained by my father years ago.) You are clearly misinterpreting what I am doing. I made no claim to understanding how human beings unconsciously solve the problem; all I said is that they obviously solve it on a regular basis, which implies it is a solvable problem. Thus the fact that I have solved the problem bears little impact on how the average person does so. In fact, there are a lot of points to persuade one to accept the fact that they certainly do not use my method. In particular, we have the fact that no one (to my knowledge) uses that equation I derived and, secondly, their solutions are often ripe with errors. But they certainly are “solutions”, and damned good ones at that (almost everyone agrees with “what is real”).

My only point in bringing up the fact that “every living human being” has essentially “solved the problem of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process” was to convince the reader that the problem was solvable. Most serious scientists would hold that the problem is insoluble on the face of it. Why do you think they refuse to even consider the issue?

I still suggest it would be to your benefit to glance over my conversation with Anssi. As far as the question of what happens when the observer's memory is full and runs out of memory for my construction, the issue is quite simple. First, I am not claiming he is using my construction and second, with regard to my construction, anything which is truly forgotten can not possibly influence one's world view. My construct is based entirely on that data which is available and depends not at all on anything which has been forgotten.

Philosophically speaking, a common human's construct is based on the assumption that their current world view is valid and that anything they have forgotten was consistent with that world view. That itself could be a great explanation for the errors in their world view. The central point here is that a flaw-free explanation of anything must satisfy my equation.

Have fun -- Dick
 
Last edited:
  • #490
( I don't mind if you want to spellcheck - go ahead )

I am well aware that I may misinterpret your intentions, but that's what the questions are for.

You somewhere (I forgot where) defined an "explanation" as a method for obtaining an expectation? This sounds interesting, but I am still not sure if you mean what I think you mean.

Question on the definition of expectation: do you, by expectation, mean something like a probability in the frequentist interpretation, defined on the currently known facts? I.e., history or past, or whatever is part of your known facts?

Or does expectation refer to the unknown? I.e., does what you know induce an expectation on the unknown, i.e., the future?

If you _define_ a probability pretty much like some relative frequency on a given, fixed set of facts, then the "expectation" applied to that set is of course exact by definition? Is this what you mean?

Or do you suggest, that the expectation provides us with educated guesses in cases where we lack information?

You said somewhere, I think, that you make no predictions? But isn't an expectation a kind of prediction? I mean the expectation is not exact, it doesn't tell us what will happen, but it gives us a basis for bet placing - thus there are good and bad expectations. Do you somehow claim that your expectation is the optimum one?

Let me ask this: What is the benefit someone would have adopting your models over someone that uses the standard model? Would they somehow be more "fit" (thinking of the analogy of natural selection here)?

/Fredrik
 
  • #491
My observer question wasn't intended to restrict itself to human observers. It could be anything, even a molecule. Sure, it's unclear what I mean by a molecule observing and responding, but I see it as a relabeling of the words in "a molecule interacting". There are reasons to think that a molecule can not encode arbitrary amounts of information without getting extremely energetic.

I am just trying to find a practical, realistic application of your thinking. I don't care if we call it physics or mathematics or biology; for me, I am interested in understanding reality. My understanding must have a place and function in the setting of actual reality.

/Fredrik
 
  • #492
Fra said:
( I don't mind if you want to spellcheck - go ahead )
Thank you for your kindness to my compulsions.
Fra said:
I am well aware that I may misinterpret your intentions, but that's what the questions are for.
I had no intention for you to take my comment as a rebuke; I was merely pointing out the source of your difficulty.
Fra said:
You somewhere (I forgot where) defined an "explanation" as a method for obtaining a expectation? This sounds interesting, but I am still not sure if you mean what I think you mean.
The basic reference can be found http://home.jam.rr.com/dicksfiles/Explain/Explain.htm
Fra said:
Question on the definition of expectation: do you, by expectation, mean something like a probability in the frequentist interpretation, defined on the currently known facts? I.e., history or past, or whatever is part of your known facts?
Essentially yes.
I will suggest that what an explanation does for information is that it provides expectations of subsets of that information. That is, it seems to me that if all the information is known, then any questions about the information can be answered (in fact, that could be regarded as the definition of "knowing"). On the other hand, if the information is understood (explainable), then questions about the information can be answered given only limited or incomplete knowledge of the underlying information: i.e., limited subsets of the information. What I am saying is that understanding implies it is possible to predict expectations for information not known; the explanation constitutes a method which provides one with those rational expectations for unknown information consistent with what is known.
What I am saying is that your explanation of something (no matter what that explanation is about) is the source of your expectations. If I understand your explanation, I will be able to estimate your expectations as a probability attached to the various possibilities. In particular, you need to recognize that the correctness of your expectations is not the issue here. The issue is defining exactly what “an explanation” is and, in my opinion, it is a mechanism for generating expectations. I am defining "an explanation", not "a good explanation". A good explanation would be one with few flaws. An explanation which yields expectations perfectly consistent with the known facts would be a "flaw-free" explanation (what we would all like to find).

In fact many scientific discussions revolve around the inaccuracy of one's expectations. If a scientist understands your explanation of something and is of the opinion that your explanation is wrong, his standard attack will be to point out an expectation implied by your explanation does not fit the facts (i.e., is not very probably correct).
Fra said:
Or does expectation refer to the unknown? I.e., does what you know induce an expectation on the unknown, i.e., the future?
Your expectations are whatever you expect. The easiest way to express your expectations in a precise mathematical way is to give the probabilities of various possibilities. Have you ever heard of the game “20 questions”? Think of your expectations as your answers to a game of “an infinite number of questions with yes/no answers”. A complete description of your expectations could consist of a probability distribution for your answers: i.e., a number bounded by zero and one for each and every question. If I understood your personal explanation of the pertinent information, I could use that explanation to create an estimate of those probabilities: i.e., I would know what to expect from you with regard to that subject (the pertinent information).
Fra said:
If you _define_ a probability pretty much like some relative frequency on a given, fixed set of facts, then the "expectation" applied to that set is of course exact by definition? Is this what you mean?

Or do you suggest, that the expectation provides us with educated guesses in cases where we lack information?
I would say that the idea includes both; the exact expectations are defined by probabilities zero and one, the educated guesses are represented by numbers elsewhere in the range.
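As a toy tabulation of what I mean (the questions and the numbers attached to them are purely illustrative, not part of the argument):

```python
# Expectations tabulated as probabilities attached to yes/no questions:
# 0 and 1 encode exact knowledge; values in between, educated guesses.
# (The questions and numbers are invented for illustration.)

expectations = {
    "Is the sample radioactive?": 1.0,            # exact: known fact
    "Will the detector click within 1 s?": 0.3,   # an educated guess
    "Is the lab on fire?": 0.0,                   # exact: known fact
}

def is_exact(p, eps=1e-12):
    """An expectation is 'exact' when its probability is 0 or 1."""
    return p <= eps or p >= 1.0 - eps

exact = [q for q, p in expectations.items() if is_exact(p)]
guesses = [q for q, p in expectations.items() if not is_exact(p)]
print(len(exact), len(guesses))  # → 2 1
```

A complete description of someone's expectations would be such a number for every question in the infinite game; knowing their explanation lets you estimate those numbers.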
Fra said:
You said somewhere I think that you make no predictions? But isn't an expectation a kind of prediction? I mean the expectation is not exact, it doesn't tell us what will happen, but it gives us a basis for bet placing - thus there are good and bad expectations. Do you somehow claim that your expectation is the optimum one?
Once again, you are clearly misinterpreting what I am doing. I am making no predictions of any kind; I am analyzing the problem of making predictions (estimating the probabilities your explanation should yield). Take a quick look at this response to Anssi.
Understanding the issues presented in that response will go a long way in explaining my approach.
Fra said:
Let me ask this: What is the benefit someone would have adopting your models over someone that uses the standard model? Would they somehow be more "fit" (thinking of the analogy of natural selection here)?
We are talking about explanations here (epistemological constructs designed to explain reality). I am looking for logical constraints on those constructs. I take your use of the term “standard model” to imply you are misunderstanding what I am doing. When it comes to setting constraints on explanations, the “standard model” is, “it has to make sense”, a very vague and imprecise statement. Every professor I have ever heard define “an explanation” seldom does more than give a few example explanations and then comment something like “I'm not going to waste my time explaining things to you if you can't understand what an explanation is”. It appears to be an unexamined concept: i.e., what I am talking about is something no one looks at carefully.

Over two years ago I made a post to the thread, "Can Everything be Reduced to Pure Physics" which I think is worth understanding.
Doctordick said:
To put it another way, knowing is having facts available to you (the facts come from the past, not the future) and understanding allows discrimination between good and bad answers (facts you might expect to become available to you in the future). Now the human race has become quite good at this discrimination since all we living things first crawled out of the sea. We are the undoubted leaders in the realm of "understanding" the world around us. And yet no one has come up with a good argument to dismiss the Solipsist position. The fact that we have come so far without being able to prove what is and what is not real should make it clear to you that understanding reality can not possibly require knowing what is real. :approve: This is why every serious scientist (I except myself of course) has vociferously argued against any rational consideration of the question. Their position is: if we don't know what's real, how can we possibly dream of understanding reality. They hold that we must assume we know what's real. You can see that position promulgated all over this forum! Why do you think they label me a crackpot? :smile:
Fra said:
I am just trying to find a practical realistic application of your thinking. I don't care if we call it physics or mathematics or biology, but for me I am interested in understanding reality.
Well, I was interested in answering the question “What can we know?” If you cannot answer that question, how can you have any direction to your attempts to understand reality? Again, in my opinion, the “standard approach” to understanding reality is a “guess and by golly” approach with little or no thought given to logical direction. I had proved the validity of my equation over ten years prior to unraveling the first solution to that equation. Prior to discovering a method of finding solutions, it just seemed reasonable to me that, if I could find a solution, that solution should have practical application. When I finally figured out how to solve it, I discovered practical realistic applications up the wazoo (so to speak). For the moment, why don't we not worry about that; we should first comprehend the defense of the definition and the deduction of the equation itself.

It might benefit you to look at this response to some of Anssi's other questions.

Have fun -- Dick
 
Last edited by a moderator:
  • #493
Perhaps you also misinterpret some of my questions ;) some were provocative, in order to probe your responses on key points. For obvious reasons I can never be sure I hold the same information as you, but I can say that at least some of the things you say make perfect sense to me and seem closely related to my thinking - the part where your explanation, or current facts as you put it, implies expectations on the unknown. This bears a striking resemblance to optimal inference methods, where one might try to devise a relative probability, which I personally call an expectation of the probability, because you know what you know but can only guess what you don't know; thus sometimes the definition of the proper probability space itself gets unclear. Though I have a feeling from first skimming our writings that we had similar thinking early on, but then later on... I am not sure.

Doctordick said:
For the moment, why don't we not worry about that; we should first comprehend the defense of the definition and the deduction of the equation itself.

OK, I'll look at that again later.

/Fredrik
 
  • #494
Ok I'll try to look at http://home.jam.rr.com/dicksfiles/Explain/Explain.htm ...in small pieces

Just to make sure I get it...

Let's for a second ignore the definition of probability itself...

will define the expectations to be the probability that a particular B(tk) will become a member of C: written as P(B(tk)).

So you basically take the expectation of B(tk) to be a probability conditional on C, right? So, using the notion of conditional probabilities a bit loosely, do you object if I write P(B_{t_k}|C), to be read as the conditional probability of B_{t_k} given C? Where this definition of probability includes your explanation and its use in inducing a probability?

Loosely speaking, this makes sense, but there are still issues here. The question is what we mean by probability - my personal main objection to standard QM is that not even the probability is known exactly; it is only an expectation of the probability, basically a probability of a probability.
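To pin down the frequentist reading I have in mind, here is a minimal sketch; the predicates and the list of facts are made up for illustration:

```python
# Conditional probability read as a relative frequency over a fixed,
# finite set of known "facts": P(B | C) = #(B and C) / #(C).
# (The facts below are made up purely for illustration.)

def conditional_probability(facts, B, C):
    """Relative frequency of predicate B among the facts satisfying C."""
    in_C = [f for f in facts if C(f)]
    if not in_C:
        raise ValueError("P(B|C) undefined: no facts satisfy C")
    return sum(1 for f in in_C if B(f)) / len(in_C)

# Made-up observations as (label, value) pairs.
facts = [("x", 1), ("x", 2), ("y", 2), ("x", 2), ("y", 3)]

p = conditional_probability(
    facts,
    B=lambda f: f[1] == 2,    # the value equals 2
    C=lambda f: f[0] == "x",  # condition: restrict to label "x"
)
print(p)  # → 0.6666666666666666
```

On a given, fixed set of facts this number is exact by definition; the trouble begins when it is supposed to stand in for a probability about facts we don't yet have.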

Reflections?

/Fredrik
 
Last edited by a moderator:
  • #495
I'm not sure I understand how you introduce tau. My association is an absolute frequency of x, or something else?

Edit: Frequency in B that is.

? no?

/Fredrik
 
  • #496
Sign of life

I thought I'd drop in a sign of life to the thread before leaving for a week again (albeit I'll have some access to the internet, but probably very little time).

I'm a little bit disappointed that I haven't had time to cook up a reply in a while, but then I can't be too disappointed since I've spent hours today and the other day going through the older posts really carefully, and I can say it has been beneficial; I have been able to answer some of my questions all by myself.

Later!
-Anssi
 
Last edited:
  • #497
Fra said:
Perhaps you also misinterpret some of my questions too ;) some were provocative in order to probe your responses on key points.
Perhaps I do and, if so, I would like to be corrected, as communication in common language is difficult at best; I much prefer mathematics, as meanings there are usually quite universal and generally precise.
Fra said:
For obvious reasons I can never be sure I hold the same information as you, but I can say that at least some of the things you say make perfect sense to me and seem closely related to my thinking - the part where your explanation, or current facts as you put it, implies expectations on the unknown. This bears a striking resemblance to optimal inference methods, where one might try to devise a relative probability, which I personally call an expectation of the probability, because you know what you know but can only guess what you don't know; thus sometimes the definition of the proper probability space itself gets unclear.
One problem we are apparently having here is that you are thinking in terms of epistemological constructs themselves whereas I am concerned with “representation” of epistemological constructs. I have found that the difference between these two issues is very difficult to communicate. That is one of the reasons I keep bringing up my conversation with Anssi; I am pretty well convinced that he has managed to get his mind past that barrier.

The concept “optimal inference method” is itself the result of an epistemological construct (it is a concept defined within your world view). In order for you to communicate to me what you mean by that phrase, you would have to do your best to define what you mean by the expression. That act itself would involve my coming to understand what you mean, and accomplishing that result (to the satisfaction of both of us) would require a great many assumptions on my part. Essentially, for me to understand what you are saying requires me to solve the very problem which I have posed to examine. Now, I am not saying that I don't understand what you are saying; what I am saying is that my understanding of anything must be held as suspect. My intention was to “make no assumptions” and, under that constraint, all I have to work with is my definition of “reality” (which I define to be the set of “valid ontological elements” on which my world view is built) and my definition of “an explanation” (which I define to be “a method of obtaining expectations from given known information”).

Certainly, the issue of “epistemological constructs” has already reared its ugly head, but I will suggest that that is only because you want those terms in my definitions defined. Ontology is commonly defined to be the study of “being” (which is most often taken to be “what exists”: i.e. reality). What I am saying is that I am going to use those symbols, “reality” and “valid ontology”, to reference what it is that I want to understand (as my meanings seem to be at least quite similar to the common intention of those words). This evades being an epistemological construct by the very fact that I have specified it to be undefined (it only becomes defined with regard to a specific epistemological construct). The “given known information” is to be taken to be that “valid ontology” which constitutes reality. Or rather, symbolic reference to those “valid ontological elements”.

That leaves the issue of “expectations”. In this case, I use the concept of probability as used by mathematicians (I have earlier said that I will use the constructs of mathematics as given: i.e., defined abstract systems and operations well understood by many people).
Doctordick said:
I will make much use of Mathematics without defense or argument. In essence, it is quite clear that mathematicians are very concerned with the exactness of their definitions and the self consistency of their mental structures. I suspect mathematics could probably be defined to be the study of self consistent systems. At any rate, their concerns are exactly those which drive my work; I am merely attacking a slightly different problem.
You were concerned with my definition of probabilities. As you said, one can only guess what one doesn't know; however, that is of no concern to my analysis in any way. All I am saying is that expectations can be seen in terms of the mathematical concept of probability. It makes utterly no difference how those expectations were arrived at; probability gives us a symbolic way of expressing them; it is a well understood method of communicating expectations.

That is to say, if you have explained something to me and I come back with a statement of what I would presume was the probability distribution of a set of consequences of your explanation; and you agreed with me that the distribution was consistent with your explanation, we would both conclude we were communicating: i.e., that I appeared to understand your explanation. This is, in essence, exactly what stands behind my definition of “an explanation”: i.e., it provides a mechanism for generating that probability distribution. It is essential that the means of developing that distribution be kept as an open unconstrained issue.

What is important here is recognizing that actually generating a probability distribution of any kind requires an explanation and the explanation usually requires an epistemological construct (a theory). What I want to do is proceed as far as possible without resorting to any epistemological construct of any kind.

That is why I introduced the idea of the ”what is”, is “what is” explanation of reality. It is the only explanation of reality of which I am aware which requires no epistemological construct of any kind. Wanting you to understand that issue was a strong reason I gave the earlier link to my note to Anssi.
Fra said:
I'm not sure I understand how you introduce tau. My association is an absolute frequency of x - or is it something else?
I do not understand your question. First of all “how” I introduce tau is a pretty insignificant issue, I just throw it in as an index referring to “invalid ontological element” (a convenient figment of my imagination). Why I introduce it is a much more pertinent question. You need to look at another communication I had with Anssi which I think would clear the issue up a bit. Consider the following excerpt:
Doctordick said:
Another good example would be that family tree of the primates I brought up. How would you show multiple entries for the same species? You already use horizontal displacement to indicate different species and vertical displacement to indicate time and you would have to include another axis if you wanted to show the time change in populations.
I hope you know that the little blue caret to the right of the person being quoted is a link to the quote? I say that because it would be worthwhile for you to read that whole post.

Tau is an index providing the power to indicate multiple occurrences with the same x, t indices. This we need in order to be able to represent an arbitrary explanation.
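As a rough sketch of what such an extra index buys you (my own illustration, not Doctordick's notation; the numbers are invented), compare keeping entries by (x, t) alone versus by (x, t, tau):

```python
# Without tau: a set keyed only by (x, t) silently collapses two distinct
# occurrences that happen to share the same x and t indices.
entries = set()
entries.add((3, 7))        # an element at x=3, t=7
entries.add((3, 7))        # a second, distinct occurrence is lost
assert len(entries) == 1

# With tau: the extra index keeps multiple occurrences at the same (x, t)
# distinguishable, so an arbitrary table of entries can be represented.
indexed = set()
indexed.add((3, 7, 0))     # tau = 0 tags the first occurrence
indexed.add((3, 7, 1))     # tau = 1 keeps the second occurrence distinct
assert len(indexed) == 2
```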
AnssiH said:
I have been able to answer some of my questions all by myself.
That strikes me as highly probable; I suspected getting you over the hump of seeing my perspective was the real issue. Actually, once you understand where I am coming from, what I am saying is quite simple. Perhaps you could help me communicate with Fredrik? I can certainly use the help.

Also, don't worry about not responding quickly; your life is a much more important problem than this stuff. This is for the fun of understanding. :smile:

Have fun -- Dick
 
  • #498
Doctordick said:
The concept “optimal inference method” is itself the result of an epistemological construct (it is a concept defined within your world view). In order for you to communicate to me what you mean by that phrase, you would have to do your best to define what you mean by the expression.

Of course, you are absolutely right. This is something I'm working on... but I think it would get messy for me to describe my theories here. At least in this thread I suggest we stick to your theory. My main curiosity is whether we share some thinking here or not. From my first reading I think we do, but we still differ.

Not to go into this now, but briefly: the basic idea of an "optimal inference method" is that once you have acknowledged that the problem is your incomplete knowledge, you can't ever KNOW the future. This reduces the problem to making a guess about the future. This is what physics does: we guess, and let experiment discriminate the good guesses from the bad guesses.

However, in the optimal inference methods you go one step further, and try to somehow define the "best possible guess", or best possible "probability distribution", given your prior information; moreover, one tries to find the optimum way to update the expectations in response to additional information (think Bayes' rule in Bayesian probability, but generalised). The generalisation can also produce expectations of "dynamics", and one can try to define time and space in terms of degrees of distinguishability between events. But I can't explain this now.
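As a minimal concrete sketch of the Bayes-rule updating mentioned above (my own illustration; the coin example and all numbers are invented for the purpose):

```python
# Updating expectations in response to additional information, via Bayes' rule.
# Two hypotheses about a coin: it is fair, or it is biased toward heads.
priors = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.8}  # P(heads | hypothesis)

def update(priors, likelihood):
    """Return posterior probabilities after observing one head."""
    joint = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

posteriors = update(priors, likelihood)
# After seeing heads, "biased" becomes the better guess:
# P(fair | heads) = 0.5*0.5 / (0.5*0.5 + 0.5*0.8) = 0.25/0.65 ≈ 0.3846
```

The point is only that the "best possible guess" is revised mechanically as data arrives, which is the shape of the updating scheme Fredrik alludes to.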

In my thinking there are some key components:
(1) Representation of expectations
(2) Communication with environment

The representation is changing in response to communications. I'm trying to find the best solution to this, using minimum assumptions.

The dynamics arises as there is communication between the known and the unknown.

But I'd rather not get into this now, and not in this thread. I just wanted to say that I've got some thinking of my own, and I did see similarities to your thinking at first glance. But I'm still working on the formalisations, so I don't yet have any paper or site to point you to. This is why it's too early for me to explain the details. But others are working on related things; Ariel Caticha is one.

For me to really even try to explain this, it will be a big paper. And I hope it will come, but I've got a lot of work yet.

Doctordick said:
Certainly, the issue of “epistemological constructs” has already reared its ugly head, but I will suggest that that is only because you want those terms in my definitions defined.

There is clearly a universal problem of choosing definitions. You may choose yours differently than mine, and there is no problem. Still, I guess the ultimate proof of success is in the survival and fitness of any ideas. This goes for mine as well as yours. There is IMO no need for us to agree on this.

This is why I don't see much point in spending all my time explaining my thinking to others. I spent more time arguing on the internet some years ago, but the feedback was poor. My strategy is to work out my ideas in silence, and when I've convinced myself I'll make sure to find an application for them. There are many things you can do if you've got a nice model: artificial intelligence software, information processing. It would be much easier to convince people by showing success.

Not to ignore your other comments(!) - I might get back later... I actually also appreciate a slower pace in the discussions here, since I've got a normal job and physics is a hobby for me... I constantly fight to get time :)

I appreciate your depth of thinking at any rate (even if we end up disagreeing).

/Fredrik
 
  • #499
Fra said:
Of course, you are absolutely right. This is something I'm working on... but I think it would get messy for me to describe my theories here.
Again, you have totally missed the point of my response. When I said, “one problem we are apparently having here is that you are thinking in terms of epistemological constructs themselves whereas I am concerned with “representation” of epistemological constructs”, I was referring to the fact that you are not even considering the fundamental problem under discussion. The fundamental problem is, how does one construct “a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process”. The issue is that you are beginning with the assumption that you have already solved that problem (which is totally equivalent to ignoring it). You start by assuming your world view is valid.
Fra said:
At least in this thread I suggest we stick to your theory.
Again I seem to have great difficulty communicating the fact that what I am presenting is not a theory (theories are epistemological constructs). I tried to make that clear in that private note I sent you but apparently you misunderstood what I was saying.
The first comment I would like to make is that what I present is not a theory (a fact which seems to be impossible to communicate). It is no more or less than a way of organizing what we know without knowing what it is that we know. Somewhat analogous to the Dewey decimal system of organizing a library; the point being that the Dewey decimal system does not depend on knowing what will come to be in that library: it is no more than a procedure for handling the information when it gets there.
The fact is that I have discovered an analytical solution to “the problem of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process”. I am trying to communicate that solution to you so that you can evaluate the logic of the solution for yourself.

A profound issue of significance here is that my solution to the problem must include your theory. In fact, no theory of anything is to be excluded by my attack. This is the reason for my comment that, “In order for you to communicate to me what you mean by that phrase, you would have to do your best to define what you mean by the expression.” What I meant was that, in order to represent your theory under my definitions, I would need all of the communications necessary to define absolutely all of the significant issues in that theory (I was not asking you to clarify these issues). I would need to be able to construct your communications as a specific ”what is”, is “what is” table of information which was to be “understood”. The only other option is to make assumptions and, if assumptions are to be made, it is quite possible that those assumptions would be wrong. My construct is an exact logical construct and has some very specific consequences.

I think I made the central issue clear to Anssi back in April of this year: post #398 in this thread.
Fra said:
There is clearly a universal problem of choosing definitions. You may choose yours differently than mine, and there is no problem.
The problem in “choosing definitions” is communicating what is meant: i.e., that process itself means we are immediately dealing with epistemological constructs (see my above post to Anssi); ergo,
Doctordick said:
My intention was to “make no assumptions” and, under that constraint, all I have to work with is my definition of “reality” (which I define to be the set of “valid ontological elements” on which my world view is built) and my definition of “an explanation” (which I define to be “a method of obtaining expectations from given known information”).

Certainly, the issue of “epistemological constructs” has already reared its ugly head, but I will suggest that that is only because you want those terms in my definitions defined. Ontology is commonly defined to be the study of “being” (which is most often taken to be “what exists”: i.e. reality). What I am saying is that I am going to use those symbols, “reality” and “valid ontology”, to reference what it is that I want to understand (as my meanings seem to be at least quite similar to the common intention of those words). This evades being an epistemological construct by the very fact that I have specified it to be undefined (it only becomes defined with regard to a specific epistemological construct).
Fra said:
I guess still, the ultimate proof of success is in the survival and fitness of any ideas. This goes for mine as well as yours. There is IMO no need for us to agree on this.
With regard to your ideas, I would agree with you. With regard to my presentation, I would not. I am presenting a logical deduction, not a theory. Either that deduction is a logically valid deduction or it is not. If we disagree on the validity of a logical step, one of us is wrong! There is no room for opinion there. I would love to discuss any error in my deductions which you might find. To date, every case I am aware of has been simple misinterpretation of what I am saying (the “theory” thing being a case in point).

It is my opinion that my real difficulty here is the fact that I am dealing with “denial” on the part of the intellectual community. Most everyone seems incapable of comprehending the fundamental problem of intelligence itself. It may be simply too abstract for them to deal with.

Have fun -- Dick
 
  • #500
Hey Doctordick, I completely missed your private message to me, sorry! (Noticed it now when you drew my attention to it.) I hardly expect any private messages on here, so my distribution of attention to the message box was close to null ;) I'm sorry for overlooking this...

Doctordick said:
I have now read the link you offered and actually find little in his thoughts which impacts on my analysis. The first comment I would like to make is that what I present is not a theory (a fact which seems to be impossible to communicate). It is no more or less than a way of organizing what we know without knowing what it is that we know. Somewhat analogous to the Dewey decimal system of organizing a library; the point being that the Dewey decimal system does not depend on knowing what will come to be in that library: it is no more than a procedure for handling the information when it gets there.

Ok, it's not a theory. Thus I assume it is meant to follow from pure reason/logic alone, right?

Doctordick said:
The fundamental problem is, how does one construct “a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process”.

Ok, I think I see. But I also suspect that you will tell me I got it all backwards again :)

Anyway:

Since you are talking about streams of data, I assume that your description is formed from the perspective of a subject, an observer, or whatever you may label it, without getting into the issue of what an observer really "IS". Somehow the observer is an implicit condition.

So, you somehow picture the situation where this observer is faced with a stream of data. Why or how this data comes about is not known; it's somehow just a matter of fact. And now you take as the problem to make a rational model of the data/facts as they arrive?

If that's close, my first question is: what do you mean by a rational model? What would, for example, an irrational model be like in your terminology?

(I deliberately try to keep the posts short for clarity, especially until we understand each other, to prevent draining attention across multiple focuses. Also, please don't let me disturb your parallel discussion with Anssi.)

/Fredrik
 