If we are talking pure math using classical analysis, then the first derivative of your input would be a step that is undefined at ##t=t_0##, and the second derivative would be zero everywhere except at ##t=t_0##, where it does not exist.
Of course a strictly one-sided limit will never output "does not exist". But I have never seen an author define LTI systems as mappings from ##\mathbb{R}## to ##\mathbb{R} \cup \{\text{does not exist}\}## before. Have you? A system has a domain and a range; if we are using classical analysis, then the logical domain of the differentiator would be a space of differentiable functions. If we allow generalized functions, as EEs very often do (usually without explicitly mentioning it), then there is no problem: the first derivative of your input is a well-defined step and the second derivative is ##\delta(t-t_0)##.
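As a quick sanity check of the generalized-function view, a CAS makes the same move. Here is a short sympy sketch (the symbol names are my own choices, not from anything above): differentiating a Heaviside step produces a Dirac delta directly.

```python
import sympy as sp

t, t0 = sp.symbols("t t0", real=True)

# First derivative of a unit step at t0: classically undefined at t = t0,
# but as a generalized function it is exactly the Dirac delta.
step = sp.Heaviside(t - t0)
print(sp.diff(step, t))  # DiracDelta(t - t0)
```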
This discussion has made me go back and look at the more precise definitions of causality that the more mathematical treatments of linear system theory often use, since the 'output doesn't depend on future inputs' notion provided in introductory treatments leaves a lot of room for interpretation. Let ##x_0(t)## and ##x_1(t)## be two inputs with corresponding outputs ##y_0(t)## and ##y_1(t)##. The definition I usually see is that a system is causal if ##x_0(t)=x_1(t)## for ##a\leq t < b## implies ##y_0(t)=y_1(t)## for ##a\leq t < b##. Under this definition the differentiator is causal, even if we use classical analysis and feed it your input. No need to try to use a one-sided limit to appeal to an imprecise definition of causality.
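To make the definition concrete, here is a hedged discrete-time sketch (my own construction, not from any of the texts discussed): a backward-difference differentiator applied to two inputs that agree up to some index ##b##. Its outputs agree over that same range, exactly as the definition requires.

```python
import numpy as np

def backward_diff(x, dt):
    """Causal discrete differentiator: y[n] = (x[n] - x[n-1]) / dt."""
    y = np.empty_like(x)
    y[0] = 0.0                  # arbitrary initial condition
    y[1:] = np.diff(x) / dt
    return y

dt = 0.01
t = np.arange(0.0, 2.0, dt)
b = 100                          # inputs agree for indices n < b

x0 = np.sin(t)
x1 = np.sin(t)
x1[b:] += 1.0                    # the inputs differ only for n >= b

y0 = backward_diff(x0, dt)
y1 = backward_diff(x1, dt)

# Outputs agree wherever the inputs have agreed so far:
print(np.allclose(y0[:b], y1[:b]))  # True
```

Because each output sample only looks backward, disagreement in the inputs at ##n \geq b## cannot leak into the outputs for ##n < b##.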
Edit: this is wrong. When ##t_0=a## then the differentiator would be acausal for your function under this definition, assuming we use classical analysis and it makes any sense to force systems to do operations that don't exist. The actual definitions I saw only had the ##t<b## part, and I added the ##a\leq## to make it the same form as the Delchamps definition below. Oops! I know of one book though, State-Space and Input-Output Linear Systems by Delchamps, that uses a slightly different definition. He uses more complicated notation to allow for time-varying systems and such, but in the prior notation his definition is basically that a system is causal if ##x_0(t)=x_1(t)## for ##a\leq t < b## implies ##y_0(t)=y_1(t)## for ##a\leq t \leq b##. In his book the definition of any system must also include a specification of the vector space of input functions. So a differentiator defined with differentiable input functions would therefore be causal, and a differentiator defined with inputs that are not differentiable everywhere (and I'm not certain this would even be allowed) would be acausal.
So if you want to feed a differentiator non-differentiable functions and demand a 'does not exist' output, then the differentiator could be acausal depending on your definition of the system, the definition of causality, and whether or not you are happy to use generalized functions. Your function creates problems only at one single point, but mathematicians can construct functions that are continuous everywhere and differentiable nowhere (the Weierstrass function is the classic example). If we also allow those inputs then the differentiator can have inputs that create outputs that simply don't exist anywhere. Does this make it an illegitimate system to begin with? If so, then a nice proper system defined with a standard Riemann integral, ##y(t)=\int_0^t x(p)\, dp##, also has problems, since those same mathematicians can construct bounded functions that are not Riemann integrable, such as the Dirichlet function (the indicator of the rationals)...
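To see why a bounded function can fail to be Riemann integrable, take the indicator of the rationals (the Dirichlet function). Here is a hedged sympy sketch of my own: tagging each subinterval at a rational point gives Riemann sums of 1, while tagging at irrational points gives 0, so the sums never converge to a single value no matter how fine the partition.

```python
import sympy as sp

def dirichlet(x):
    """Indicator of the rationals: 1 if x is rational, else 0."""
    return 1 if sp.sympify(x).is_rational else 0

n = 200                        # number of subintervals of [0, 1]
dx = sp.Rational(1, n)

# Tag each subinterval at a rational point (its left endpoint) ...
rational_sum = sum(dirichlet(sp.Rational(k, n)) * dx for k in range(n))

# ... versus at an irrational point (left endpoint shifted by sqrt(2)/(4n),
# which stays inside the subinterval since sqrt(2)/4 < 1).
irrational_sum = sum(dirichlet(sp.Rational(k, n) + sp.sqrt(2) / (4 * n)) * dx
                     for k in range(n))

print(rational_sum, irrational_sum)  # 1 0
```

Exact rationals (rather than floats) matter here: every float is rational, so a floating-point version of this experiment would always report 1.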
jason