Holography, Bekenstein and information distance

In summary, the thread suggests that we need a formulation of the holographic principle that is more independent of geometric notions, and that this might be possible by reconstructing the geometric notions from a deeper principle. The comparison offered below is interesting and suggestive, but it is just a toy model; there is still much work to be done in this direction.
  • #1
Fra
Given the recent discussion about a possible formulation of the holographic principle, and also about what geometric notions such as distance and area may mean without a prior metric, I thought I'd throw this suggestive comparison out for discussion. I feel the discussion is still too tied to geometric notions; I think we need to be more radical.

Tom said he wanted to understand AdS/CFT beyond AdS, but how about trying to find a formulation that is more independent of geometric notions altogether, where we might even reconstruct the geometric notions from a deeper principle?

If we, for a second, as a simple example, consider the information state of an information-processing observer to consist of "prior probabilities" of certain events encoded in a finite, discrete form (meaning the values take on constrained rational values, not the full [0,1] continuum), and this observer wants to know the conditional probability P of "observing" a future sequence of M k-valued events, with counts m_1, ..., m_k for the k outcomes (so M = m_1 + ... + m_k), then the multinomial distribution gives

[tex]P = \left\{ M! \, \frac{\prod_{i=1}^{k} (m_i/M)^{m_i}}{\prod_{i=1}^{k} m_i!} \right\} e^{-M S_{KL}}[/tex]

where [tex]S_{KL} = \sum_{i=1}^{k} (m_i/M) \ln\frac{m_i/M}{p_i}[/tex] is the Kullback-Leibler divergence between the observed frequencies m_i/M and the prior probabilities p_i, and the bracketed prefactor will be denoted w below.
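To make the factorization concrete, here is a minimal numerical sanity check (a sketch in Python; the prior p, the counts m and the sequence length M are purely illustrative choices) that the multinomial probability really splits into the prefactor w and the factor e^{-M S_KL}:

[code]
import math

p = [0.5, 0.3, 0.2]   # prior probabilities held by the observer (illustrative)
m = [6, 3, 1]         # counts of each of the k = 3 outcomes in the future sequence
M = sum(m)            # total number of events

# multinomial probability computed directly from the prior
P_direct = math.factorial(M)
for mi, pi in zip(m, p):
    P_direct *= pi**mi / math.factorial(mi)

# Kullback-Leibler divergence between the frequencies m_i/M and the prior p_i
S_KL = sum((mi / M) * math.log((mi / M) / pi) for mi, pi in zip(m, p))

# prefactor w = M! * prod_i (m_i/M)^{m_i} / m_i!
w = math.factorial(M)
for mi in m:
    w *= (mi / M)**mi / math.factorial(mi)

# the factorized form reproduces the direct multinomial probability (up to rounding)
print(P_direct, w * math.exp(-M * S_KL))
[/code]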

Define S = -ln P as usual; then

[tex]S = M S_{KL} - \ln w[/tex]

Also define D as the maximum possible S_KL (normally this is infinity, but for a finite-complexity observer there is a maximum); then

[tex]S \leq M D - \ln w[/tex]

The ln w term becomes negligible as the complexity of the observer (and hence M) grows (a quick Stirling estimate supporting this is sketched after the next formula), so we can ignore it for this simple argument:

[tex]S \leq MD[/tex]
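To see why ln w can be dropped, here is a quick estimate (a sketch only, using Stirling's approximation ln n! ≈ n ln n - n + (1/2) ln(2πn) and nothing beyond the definitions above):

[tex]\ln w = \ln M! - \sum_{i=1}^{k} \ln m_i! + \sum_{i=1}^{k} m_i \ln\frac{m_i}{M} \approx \frac{1}{2}\ln(2\pi M) - \frac{1}{2}\sum_{i=1}^{k} \ln(2\pi m_i)[/tex]

For counts m_i of order M/k this grows only logarithmically in M, so it is indeed negligible next to the leading term M S_KL once M is large.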

It is simply interesting to compare this to the Bekenstein bound (ignoring the constants):

[tex]S \leq ER[/tex]
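For reference, with the constants restored the Bekenstein bound reads

[tex]S \leq \frac{2\pi k_B E R}{\hbar c}[/tex]

where E is the energy and R the radius of a sphere enclosing the system.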

The interesting thing here is that we get an interpretation of the variables.

S - is the "entropy" of the system beyond the observable horizon, but calculated relative to the finite computational system of the observer!

E - is the number of events, or the "size" of the future sequence: the amount of "data" rather than the amount of information.

R - is the maximum directed "distance" from the observer, measured loosely in terms of bits (how many "bits" does the observer need to flip), to the horizon of distinguishable possible states from the point of view of the inside observer.

This is, to me, a pure INSIDE view, a "cosmological" view, i.e. humans looking at the cosmological horizon. If we instead consider the flip side of this, humans looking into a microscopic black hole, the situation is reversed, as the observer is on the "wrong" side of the horizon. So it somehow seems that both the microscopic BH horizon and the cosmological horizon enter the same equations.

Also, this is just a toy model, and yes, S_KL is not a proper metric, but that is a more detailed discussion. I'm curious to hear what progress we can expect in "this direction". Note that the above is just meant to fuel associations; I'm not making a precise hypothesis here, just noting that there are many extremely suggestive things here that have nothing to do with ST, AdS or anything else, but which might eventually help us understand how and why space emerges.

Constructive ideas in this direction?

/Fredrik
 
  • #2
You don't mention Baez's recent article. Was that already discussed in some other thread, Fra?

http://arxiv.org/abs/1010.2067
Algorithmic Thermodynamics
John C. Baez, Mike Stay
20 pages, 1 figure
(Submitted on 11 Oct 2010)
"Algorithmic entropy can be seen as a special case of entropy as studied in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics to the subject of algorithmic information theory. In particular, suppose we fix a universal prefix-free Turing machine and let X be the set of programs that halt for this machine. Then we can regard X as a set of 'microstates', and treat any function on X as an 'observable'. For any collection of observables, we can study the Gibbs ensemble that maximizes entropy subject to constraints on expected values of these observables. We illustrate this by taking the log runtime, length, and output of a program as observables analogous to the energy E, volume V and number of molecules N in a container of gas. The conjugate variables of these observables allow us to define quantities which we call the 'algorithmic temperature' T, 'algorithmic pressure' P and algorithmic potential' mu, since they are analogous to the temperature, pressure and chemical potential...
..."
 
  • #3
No, it wasn't. I found it on arXiv, but I didn't post it on the archival thread. I didn't find any citations to QG papers...
 
  • #4
marcus said:
You don't mention Baez's recent article. Was that already discussed in some other thread, Fra?

http://arxiv.org/abs/1010.2067
Algorithmic Thermodynamics
John C. Baez, Mike Stay

Thanks Marcus, I'll check that paper! I wasn't aware of it. Baez has sometimes had expositions in the past; I'll check and see if this one connects to my ideas.

/Fredrik
 
  • #5
  • #6
MTd2 said:
... I found it on arXiv, but I didn't post it on the archival thread. I didn't find any citations to QG papers...

I admire your discipline and restraint. It was a borderline case---we risk making the archival thread, the nonstring QG bibliography, less useful if we clutter it with stuff that doesn't belong. So compliments on deciding not to include it. In this case I was less careful than you!
I jumped for it, without any clear idea how it could fit into a quantum gravity or geometry context. I suspected that for whatever reason we would want to have it to refer to in discussion.
 
  • #7
marcus said:
without any clear idea how it could fit into a quantum gravity or geometry context

Although it seems this may have no clean connection to LQG, the association I wanted to induce with this thread is that there are, IMHO, ways in which these information approaches do fit into the QG quest (not specifically LQG).

The suggestive connection to the Bekenstein bound is just one thing.

The notion of inertia as a resistance to change in information updates is another point, which furthermore associates information updates with inertia, not unlike Penrose's connection between collapse and gravity - though still a bit different.

There is also the idea that two communicating inference systems face a universal attraction that can be understood in terms of a mutually increasing agreement, which reduces the distance measured in terms of some kind of information divergence. Here the asymmetry of the information distance may even help explain why the less complex system changes more than the more complex one.
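As a small illustration of that asymmetry (a minimal sketch in Python; the two distributions p and q are made-up stand-ins for the two systems' states), the Kullback-Leibler divergence is directed and not symmetric:

[code]
import math

def kl(a, b):
    """Kullback-Leibler divergence D(a||b) between two discrete distributions."""
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

p = [0.7, 0.2, 0.1]   # a more "committed" information state (illustrative)
q = [0.4, 0.3, 0.3]   # a less committed information state (illustrative)

# the two directed "distances" differ, so the divergence is not a metric
print(kl(p, q), kl(q, p))
[/code]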

I just see a list of interesting things that I have a hard time seeing as coincidental, and that connect both to communication/measurement and to inertia and gravity.

There is certainly a long way from here to something more worked out. But it seems clear, and I think people would agree, that there is a need for more work on these more generic things... such as understanding "entropy", "action", "inertia" and "temperature", as well as geometric notions such as distance and area, in terms of computational and coding systems.

The latter is the major difference, as I see it, between "intrinsic measures" from geometry - which are more like a property of the geometry, to be found out by inside observers - and "intrinsic computations", which are measures that need to be constructed and encoded by an inside observer, which is of course a subsystem; hence the natural "holographic connection". Here is also a discriminator with respect to structural realists, who usually don't care WHERE their information is encoded. When I reject even structural realism of physical law, all that means is that I require physical law to be encoded in a subsystem of the universe.

/Fredrik
 
  • #8
marcus said:
You don't mention Baez's recent article. Was that already discussed in some other thread, Fra?

I skimmed the paper, and while it maybe doesn't solve that many questions, I like it. It's the kind of work I think we need more of, and it does represent the direction I have in mind, so thanks for the tip!

In particular, the links between energy cutoffs and computation time are very much in the right direction. I never used the term algorithmic entropy, but indeed the entropy I defined can be seen as that:

The observer is the "computer" that processes information, and at each time the state of the system rates the "entropy" of its alternative information states by considering "potential future sequences" (similar to the binary-string idea of computing).

One extension of this is that I think the computational system of the observer also encodes physical law; in particular, the "processing" is entropic in nature, so that it is pretty much driven by entropy. It's just a matter of perspective whether it's just noise or not.

The question is also what selective mechanisms we can identify on the processing unit itself (the observer, and thus the laws of physics), so as to understand how the processing unit itself can evolve and even grow more complex.

IMO the connection to gravity is quite clear; it's just that there is still a long way to go to make it specific.

So the "cutoff" of energy means that unless the processing unit has completed processing the past, before new future arrives, well there is overflow that can't be handled. One can also say that it means that the limited processing power if we put it like that, limits how complex actions and output we can have.

Here is also a path to unification, if we consider what "interaction" laws between interacting computers were even possible back when there were only very simple systems with almost zero computational power. Classifying these interactions would ideally be analogous to classifying interactions starting from the TOE point.

/Fredrik
 
  • #10
I think more work is needed to make sense of the "log computing time" ~ energy analogy. In the simple idea I gave there is no notion of time.

The algorithmic information view is interesting, but I suspect a lot more work is needed. Also, the basic idea that the Kolmogorov complexity of a string is unique up to a constant depending on the programming language is based on the assumption that there always exists an interpreter between the languages. In computer science and ACTUAL computing this is reasonable, as somehow everything has to be translated into the machine code of the CPU, so "structural realism" in this sense is easy to accept.
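For reference, the invariance result alluded to here says that for two universal prefix-free machines U and V there is a constant c_UV, essentially the length of an interpreter for V written for U, such that

[tex]K_U(x) \leq K_V(x) + c_{UV}[/tex]

for all strings x.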

But in the analogy I imagine, this seems to be more difficult.

The "interpreters" can IMO loosely speaking be associated to the observer-observer transformations. The problem is that even these relations are observer dependent, since it takes another observer to infer observer invariance and the transformations that defines it, and this observer still has finite computational resources as well - so at some point it's not even possible to establish in finite time an interpreter?

/Fredrik
 

1. What is holography and how does it work?

Holography originally refers to an optical technique for recording three-dimensional images as interference patterns. In a physics discussion like this one, however, "holography" usually refers to the holographic principle: the idea that the physics of a region of space can be described by degrees of freedom associated with its boundary, much as a two-dimensional hologram encodes a three-dimensional image. The principle grew out of black hole thermodynamics and is realized concretely in the AdS/CFT correspondence.

2. What is the Bekenstein bound and why is it important in holography?

The Bekenstein bound is a theoretical limit on the amount of entropy, and hence information, that can be contained in a finite region of space with a given energy. It is important here because it limits the information content of a region by quantities associated with its boundary, which connects it to the holographic principle: the idea that the information in a three-dimensional region can be encoded on a two-dimensional surface.

3. How does information distance relate to holography?

Information distance is a measure of the difference between two information states, for example two probability distributions; the Kullback-Leibler divergence used in this thread is one such (asymmetric) measure. Here it is used to quantify how "far" an observer's state is from the horizon of states it can distinguish, which is what makes the comparison with the Bekenstein bound suggestive.

4. Can holography be used for data storage?

Yes, holography has been explored as a potential method for data storage due to its ability to store large amounts of information in a compact space. However, current technology and methods are still being developed to make this a practical and cost-effective solution for data storage.

5. What are the practical applications of holography and information distance?

Holography has a variety of practical applications, including in security measures, 3D imaging, and data storage. Information distance, on the other hand, is used in fields such as data compression, machine learning, and cryptography. In combination, these concepts can be used to improve the efficiency and accuracy of data storage and transmission, as well as enhance visualization and analysis of complex data sets.
