I am not a specialist.
First, I suggest you test a conventional PID on a system with a delay, to learn what might happen.
Intuitively, this may lead to instabilities.
The corrections keep increasing to no effect during the delay time.
Of course, the instability can always be reduced by reducing the feedback gain.
But reducing the gain will also reduce the efficiency and, in the end, leave a "lag error".
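To see this concretely, here is a minimal sketch of that experiment (assuming, for illustration, a first-order plant with a pure transport delay and a PI controller; the gains, time constant, and delay are arbitrary choices, not taken from any particular system):

```python
from collections import deque

def simulate_pi(kp, ki, delay_steps, n_steps=600, dt=0.05):
    """PI control of a first-order lag plant (time constant 1 s)
    preceded by a pure transport delay; setpoint = 1.
    Returns the largest |output| seen during the run."""
    tau = 1.0
    y = 0.0            # plant output
    integral = 0.0
    # control actions still "in transit" through the delay
    pipeline = deque([0.0] * delay_steps)
    peak = 0.0
    for _ in range(n_steps):
        error = 1.0 - y
        integral += error * dt
        u = kp * error + ki * integral      # PI correction
        pipeline.append(u)
        u_applied = pipeline.popleft()      # action leaving the delay
        y += dt * (u_applied - y) / tau     # Euler step of the lag
        peak = max(peak, abs(y))
    return peak
```

With no delay, these gains settle quietly near the setpoint; with a 1 s delay (20 steps), the very same gains make the output overshoot and oscillate with growing amplitude, because the corrections kept piling up while nothing came back.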
Avoiding this instability while keeping efficiency requires a better way to calculate the correction.
The correction must take previous corrections into account, and especially their expected effect on the system output.
Intuitively, this suggests the need for a kind of "memory" associated with the PID.
I think it is, more precisely, a model of the "plant" that is needed to calculate the correction.
A correction would be needed when the real system deviates from the model system.
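If I am not mistaken, this idea of feeding back the difference between the real system and an internal model is essentially the Smith predictor. A minimal discrete-time sketch (assuming, for illustration, the same first-order plant as before and a perfectly known delay; a real plant model is never perfect):

```python
from collections import deque

def simulate_smith(kp, ki, delay_steps, n_steps=600, dt=0.05):
    """PI control of a first-order lag (tau = 1 s) with a transport
    delay, using a Smith-predictor structure: an internal model of
    the plant lets the controller act on a delay-free prediction."""
    tau = 1.0
    y = 0.0            # real plant output
    ym = 0.0           # model output, without the delay
    ym_delayed = 0.0   # model output, with the delay
    integral = 0.0
    pipe = deque([0.0] * delay_steps)     # real transport delay
    pipe_m = deque([0.0] * delay_steps)   # modelled transport delay
    for _ in range(n_steps):
        # delay-free prediction, corrected by the model mismatch
        feedback = ym + (y - ym_delayed)
        error = 1.0 - feedback
        integral += error * dt
        u = kp * error + ki * integral
        # real plant: transport delay, then first-order lag
        pipe.append(u)
        u_real = pipe.popleft()
        y += dt * (u_real - y) / tau
        # model: same lag without delay, plus a delayed copy
        ym += dt * (u - ym) / tau
        pipe_m.append(ym)
        ym_delayed = pipe_m.popleft()
    return y
```

With the same aggressive gains that blew up the plain PID, this loop settles to the setpoint: the controller effectively regulates the delay-free model, and the mismatch term only kicks in when the real system differs from the model.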
In computer-based feedback systems, algorithms might easily be adapted for delays.
In the high-frequency range, for analog systems, delay lines are available and can also act as a kind of memory.
Such a delay line could be used to "model the system" too and make a delay-PID feedback possible.
However, when no delay lines are available, how can we proceed?
Without more information from the Aachen paper, it is hard for me to guess more about their system.
However, some suggestions might be found there:
http://msc.berkeley.edu/PID/modernPID3-delay.pdf
It might, in the end, be a matter of approximating the delay line in some way.
(There is no way to approximate an "advance line", of course!)
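One standard way to approximate a delay with something realizable is, I believe, the Padé approximation: replace e^{-sT} by a rational transfer function such as (1 - sT/2)/(1 + sT/2) at first order. A quick numerical check of how good that substitute is (T and the test frequencies here are arbitrary):

```python
import cmath

def delay_response(omega, T):
    """Exact frequency response of a pure delay: e^{-j*omega*T}."""
    return cmath.exp(-1j * omega * T)

def pade1_response(omega, T):
    """First-order Pade approximation of the delay,
    e^{-sT} ~ (1 - sT/2) / (1 + sT/2), evaluated at s = j*omega."""
    s = 1j * omega
    return (1 - s * T / 2) / (1 + s * T / 2)
```

Both responses have unit magnitude (the approximation is all-pass, like a true delay); their phases agree as long as omega*T stays small, and drift apart at higher frequencies, so the approximation is only trustworthy well below 1/T.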
See also some theses on this topic, such as:
http://scholarworks.sjsu.edu/cgi/viewcontent.cgi?article=5034&context=etd_theses