Quote by Alamino
Filling all the space in a random walk is akin to saying that the probability of return to a point is 1, which happens (in flat space) for dimension [itex]d \leq 2[/itex]. The path per se is always one-dimensional, but its capacity to fill the space decreases as the dimension increases.
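Alamino's claim is Pólya's recurrence theorem: a simple random walk on the integer lattice returns to its starting point with probability 1 only for [itex]d \leq 2[/itex]. A rough Monte Carlo sketch can illustrate it (the function names and parameters below are my own illustration; note that a finite step cap only lower-bounds the true return probability, so the low-dimension estimates fall slightly short of 1):

```python
import random

def returns_to_origin(dim, max_steps, rng):
    """Run one simple random walk on Z^dim; True if it revisits the origin."""
    pos = [0] * dim
    for _ in range(max_steps):
        axis = rng.randrange(dim)            # pick a coordinate axis at random
        pos[axis] += rng.choice((-1, 1))     # step +1 or -1 along that axis
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim, walks=1000, max_steps=1000, seed=0):
    """Monte Carlo estimate of the chance of returning within max_steps."""
    rng = random.Random(seed)
    hits = sum(returns_to_origin(dim, max_steps, rng) for _ in range(walks))
    return hits / walks
```

For d = 1 the estimate comes out close to 1, while for d = 3 it sits far below 1 (the exact return probability in 3D is roughly 0.34), which is the "capacity of filling all the space decreases" point in numbers.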

I think you are saying here that there are two ways to impose a cutoff. The CDT approach is to push the QM kinkiness to the edges of the simplices, which hides it at large scales and lets it be revealed at small scales.
What you describe sounds more like a coarse-graining, a cutoff imposed from above. So you take a point (with zero dimensions) and wait for it to get back to "exactly" the same place. In reality, an exact return would be infinitely unlikely. But you are deciding that close is near enough after a certain stage. So this is putting the location in a coarse-grain box and saying that if the point re-enters the box, at any one of the box's infinity of locations, then you have your return and the clock can be stopped.
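That box-return procedure can be sketched as follows (my own illustrative code, not from any model discussed in the thread): a walker counts as "returned" once it re-enters an eps-sized box around its start after leaving it, so the grain size eps, not the zero-dimensional point, ends up controlling how often a return gets recorded:

```python
import random

def box_returns(dim, eps, max_steps, rng):
    """One Gaussian-step walk from the origin; 'return' means re-entering
    the eps-box around the start after having left it at least once."""
    pos = [0.0] * dim
    has_left = False
    for _ in range(max_steps):
        for i in range(dim):
            pos[i] += rng.gauss(0.0, 1.0)    # unit-variance step per coordinate
        inside = all(abs(c) <= eps for c in pos)
        if not inside:
            has_left = True
        elif has_left:
            return True                      # coarse-grained "return" recorded
    return False

def return_fraction(dim, eps, walks=500, max_steps=500, seed=1):
    """Fraction of walks that score a coarse-grained return within max_steps."""
    rng = random.Random(seed)
    return sum(box_returns(dim, eps, max_steps, rng) for _ in range(walks)) / walks
```

In three dimensions a true point return is far from certain, but widening the box (raising eps) pushes the measured return fraction up, which is the sense in which the result reflects the choice of grain rather than the dynamics of the point.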
Again, a reduction in dimension would be imposed by the model rather than generated by it, as you are effectively saying a 3D solid (a box of space) is a single zero-D point for the sake of your measurement needs.
What I would find more convincing would be models in which the Planckian realm was treated as a hyperbolic roil (ye olde space foam), with a Feynman-style topological averaging to flatness emerging with the context of scale.
In effect, an isolated Planck scrap of spacetime would fluctuate with any curvature. But surrounded by other scraps, it knows how to line up. Context has a smoothing effect, as in any self-organisation story such as a spin glass.
So in this view, the hyperbolic fluctuations on the QM scale are a bit of a fiction. They don't actually occur, because spacetime has sufficient size (a relativistic ambience) to iron out such fluctuations. It would only be an isolated Planck-sized scrap disconnected from an actual Universe that could behave in a hyperbolic fashion.
This is why attempts to merge QM and relativity generally seem to get things backwards. The QM wildness is a behaviour that emerges as there is a loss of relativistic context. So a quantum gravity theory would be a model of gravity (a contextual feature) in a realm too small to support a stabilising context.
In this view, it would be a good thing that the two can't be completely merged, only asymptotically reconciled. If QM and relativity are boundaries or limits that lie in opposite directions, then the nonsense of UV infinities is what we should expect if we try to imagine a realm so lacking in scale that it has no idea which way to orientate itself and so apparently (according to the calculations) is curving in all directions at once.
Other ontologies would suggest that really an isolated Planckian scrap is just as much curving in no particular direction at all. Its behaviour would be vague and meaningless rather than powerful and directed.