One question I have about this topic is whether there is actually any link between closed timelike curves (CTCs) and paradoxes, or whether the same paradoxes can arise in a universe without them.
Historically, a lot of the motivation for wanting to prove things like the chronology protection conjecture was the feeling that CTCs would lead to paradoxes such as the grandfather paradox, where you go back in time and kill your grandfather before he ever meets your grandmother. There was a group at Caltech about 20 years ago investigating whether this type of paradox really is a paradox:
http://authors.library.caltech.edu/3737/
http://authors.library.caltech.edu/6469/
I don't know whether this research program continued or petered out, but its thrust was to show that CTCs don't necessarily lead to paradoxes. They worked with simple models like billiard balls going through wormholes.
There are also links between CTCs and the theory of computation. For instance, classic problems in computer science, like factoring large numbers, become easier if you have CTCs (I'll sketch the trick below):
http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1991/TempComp.html

In terms of computation, we could hope that the laws of physics would allow perfect prediction of the future based on knowledge of initial conditions, in the sense intended by Laplace: "Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective positions of the things which compose it...nothing would be uncertain, and the future as the past would be laid out before its eyes." GR messes up Laplace's dream by allowing naked singularities such as the big bang, which are inherently unpredictable, and CTCs, which make it impossible to define the notion of initial conditions.
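To make the factoring claim above concrete, here is a toy sketch of the general self-consistency trick (my own construction in Python, not the one from the article linked above). The idea is to design the circuit inside the time loop so that the only histories consistent with the laws of physics are ones in which the register already holds an answer:

    def loop_circuit(x, n):
        # What the computer inside the time loop does to the register x:
        # leave x alone if it answers our question, otherwise perturb it.
        if 1 < x < n and n % x == 0:
            return x            # x is a nontrivial factor: self-consistent
        return (x + 1) % n      # not a factor: this history contradicts itself

    def self_consistent_histories(n):
        # Nature's job: find every register value that survives the loop
        # unchanged. (Brute force here, so no speedup -- in the CTC story
        # the universe hands you a fixed point for free.)
        return [x for x in range(n) if loop_circuit(x, n) == x]

    print(self_consistent_histories(91))  # [7, 13] -- the factors of 91
    print(self_consistent_histories(13))  # [] -- prime, no consistent history

The brute-force search in the second function is only there to show what consistency means; the whole point of the CTC story is that nature is obliged to serve up a fixed point without searching.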
But say we live in a universe where there are no CTCs and no naked singularities other than the big bang, and suppose we have comprehensive initial data and can use a powerful computer to make predictions in the sense intended by Laplace. Then in theory it ought to be possible to predict that tomorrow I will eat an egg salad sandwich for lunch at the cafeteria and die of food poisoning. I get the prediction of this event out of the computer, so of course I call up the cafeteria and warn them not to serve any egg salad tomorrow, and I certainly don't eat any of it myself. This seems to me to be exactly equivalent to the time-travel paradox that arises if I die of food poisoning, my wife hops in the time machine, goes back, and warns me. (Because she warns me, I don't eat it. But then because I don't eat it, she never gets the information that it was crawling with E. coli, so she never goes back in time to warn me.)
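Phrased in the same fixed-point language as the sketch above (again, my own framing), the paradox is that the prediction, or equivalently the warning from the time machine, feeds back into the very event it describes, and the feedback map has no fixed point at all:

    def feedback(predicted_poisoning):
        # What actually happens tomorrow, given what the computer predicts
        # (equivalently, given whether my wife's warning arrives).
        if predicted_poisoning:
            return False   # warned, so I avoid the egg salad and live
        return True        # not warned, so I eat it and get food poisoning

    # A consistent history is a fixed point: the prediction must equal
    # the outcome it causes. Here there is none.
    print([p for p in (True, False) if feedback(p) == p])  # []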
There is an interesting paper on this kind of thing by Wolpert, Physica D 237, 1257-1281 (2008):
http://arxiv.org/abs/0708.1362
He claims to put certain limits on Laplace-style inference machines that are "independent of the precise physical laws governing our universe." One of his results, which he jokingly calls the "monotheism theorem," states that every universe can have at most one inference machine. (If there were more than one, then each could predict the other's behavior, and he shows that that's impossible by a Cantor diagonal argument; I give a cartoon of the argument at the end of this answer.) There is a short and nontechnical discussion of Wolpert's work here: P.-M. Binder, Theories of Almost Everything, Nature 455, 884-885 (2008),
http://www.astro.uhh.hawaii.edu/PhilippeBinderResearchPage.htm

One thing that doesn't quite make sense to me about Wolpert's paper is that he seems to take time as a primitive concept, to assume that the real number line is a model of it, and to assume that simultaneity is well defined. All of this seems relativistically invalid to me, which makes me doubt his claim that his results are "independent of the precise physical laws."
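And the promised cartoon of the diagonal argument (my own toy formalization, far cruder than Wolpert's actual setup): suppose inference machines A and B coexist, and each can infallibly predict the other's output bit. Wire A to output the opposite of its prediction of B, and wire B to echo its prediction of A. Infallibility turns the wiring into two consistency conditions that nothing can satisfy:

    # Candidate output bits for two coexisting infallible predictors.
    consistent = [(a, b) for a in (0, 1) for b in (0, 1)
                  if a == 1 - b    # A contradicts its (correct) prediction of B
                  and b == a]      # B echoes its (correct) prediction of A
    print(consistent)  # [] -- the two "gods" cannot coexist

Note that this has the same empty-fixed-point structure as the egg salad scenario above.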