Situation: I generated a series of 1000 observations of i.i.d. random error and fed it into several unit root tests, but the tests give different results. These are the test statistics I obtained:

For R:
- adf.test @ tseries: -10.2214 (lag = 9)
- ur.df @ urca: -21.8978
- ur.sp @ urca: -27.68
- pp.test @ tseries: -972.3343 (truncation lag = 7)
- ur.pp @ urca: -973.2409
- ur.kpss @ urca: 0.1867
- kpss.test @ tseries: 0.1867 (truncation lag = 7)

For MATLAB:
- adf test: -0.43979

Questions:
1. Why are the test statistics different? Even tests with the same name, e.g. the Phillips-Perron test (pp.test and ur.pp), give different statistics.
2. Isn't the Phillips-Perron test based on the Dickey-Fuller distribution table? How can the statistic be so negative (-9xx)?
3. What is the truncation lag? Is it the same as the number of lag terms?
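
For reference, here is a minimal R sketch of how such a comparison might be reproduced. The seed, the use of rnorm() for the i.i.d. errors, and the specific lag/type arguments are my own assumptions, not necessarily the settings that produced the numbers above.

```r
# Sketch: generate 1000 i.i.d. errors and run the unit root / stationarity
# tests mentioned above. Seed and rnorm() are assumptions for illustration.
library(tseries)
library(urca)

set.seed(123)        # assumed seed, for reproducibility only
x <- rnorm(1000)     # 1000 i.i.d. standard normal errors

# Augmented Dickey-Fuller tests
adf.test(x)                                  # tseries: DF t-statistic
summary(ur.df(x, type = "none", lags = 9))   # urca: lag choice is assumed

# Phillips-Perron tests
pp.test(x)                                                # tseries
summary(ur.pp(x, type = "Z-alpha", model = "constant"))   # urca: type/model assumed

# Schmidt-Phillips and KPSS tests
summary(ur.sp(x))    # urca
kpss.test(x)         # tseries
summary(ur.kpss(x))  # urca
```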