
Empirical tests on DNA sequence

  1. Dec 28, 2006 #1
    I am doing some biology research and need to check a DNA sequence for complexity and repetitiveness, using empirical tests such as the frequency test, the serial test, etc.
    For that I need to represent or convert the sequence into a scalar (numeric) form.

    My question is basically: how do I apply those tests to a DNA sequence?
     
  3. Dec 28, 2006 #2

    EnumaElish (Science Advisor, Homework Helper)

    See http://www.bcu.gub.uy/autoriza/peiees/jor/2006/iees03j3030806.pdf [Broken].

    A test like the runs test is applicable to binary sequences. But DNA sequences are multi-valued (e.g. GATTACA). As a first approach, you could separate the 4 codes into 2 binary sets, e.g. "G or A" versus "T or C", and apply the runs test. Then regroup into "G or T" versus "A or C" and re-run. Finally, run "G or C" versus "T or A." If all of these groupings are consistent with randomness, then you could "conclude randomness." This is a relatively demanding test, in that you cannot conclude randomness unless every one of the groupings is consistent with randomness.

    Fortunately there is a more useful test. I can impose an order on the letters, say A > C > G > T (reverse alphabetical). Then I can convert a sequence like GATTACA into the comparison string G<A, A>T, T=T, T<A, A>C, C<A and re-write it as 011010 by writing "0" for "<" and "1" otherwise (ties such as T=T need a convention; here they count as "1"). Since this is a binary sequence, I can apply the runs test. If there is any other ordering that you can think of as a biologist (which I am not), say G > A > T > C (perhaps on some luminal frequency scale), then by all means use that rule to convert the letters into 0's and 1's.
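    Here is a minimal Python sketch of both ideas (it assumes the usual normal approximation to the runs test; GATTACA is just a toy string -- a real test needs a much longer sequence):

```python
import math

def runs_test(bits):
    """Wald-Wolfowitz runs test (normal approximation) on a list of 0s and 1s.
    Returns (number of runs, z statistic, two-sided p-value)."""
    n1, n2 = bits.count(0), bits.count(1)
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mean = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
    z = (runs - mean) / math.sqrt(var)
    return runs, z, math.erfc(abs(z) / math.sqrt(2.0))

seq = "GATTACA"

# The three binary groupings described above.
for label, ones in [("G/A vs T/C", "GA"), ("G/T vs A/C", "GT"), ("G/C vs T/A", "GC")]:
    bits = [1 if ch in ones else 0 for ch in seq]
    print(label, runs_test(bits))

# The up/down coding under the ordering A > C > G > T:
# "0" when the next letter is higher, "1" otherwise (ties count as "1").
rank = {"A": 3, "C": 2, "G": 1, "T": 0}
updown = [0 if rank[b] > rank[a] else 1 for a, b in zip(seq, seq[1:])]
print("up/down:", updown, runs_test(updown))  # GATTACA -> [0, 1, 1, 0, 1, 0]
```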
     
    Last edited by a moderator: May 2, 2017
  4. Jan 2, 2007 #3
    Thanks EnumaElish.

    About the frequency test:
    Is this the right approach for this test?

    Count the frequency of every letter (A, C, G, T) in the sequence.
    Then apply a chi-square test, where the number of categories is 4 and p = 1/4, with the number of observations that fall into each category, and n as the sequence length.

    ??
     
  5. Jan 2, 2007 #4

    EnumaElish (Science Advisor, Homework Helper)

    You are right; a chi-square test can be used, with 3 degrees of freedom. The null hypothesis is a uniform distribution with expected relative frequency p(i) = 1/4 for i = A, C, G, T (expected count n/4 in each category).
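    For illustration, a minimal Python sketch of exactly this test, assuming SciPy is available (the sequence shown is a made-up placeholder):

```python
from collections import Counter
from scipy.stats import chisquare

seq = "GATTACAGATTACAGGCCTTACGT"  # placeholder; substitute the real sequence
counts = Counter(seq)
observed = [counts[base] for base in "ACGT"]

# Null hypothesis: uniform, i.e. expected count n/4 in each of the 4
# categories; the test statistic has 4 - 1 = 3 degrees of freedom.
stat, p = chisquare(observed)  # scipy defaults to equal expected frequencies
print("chi-square:", stat, "p-value:", p)
```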
     
  6. Jan 2, 2007 #5
    EnumaElish,

    What other tests (there are many of them) can I apply to DNA? Tests that don't need a binary sequence as input.

    And if all the others, or the best of the tests, do indeed work on bits, can you suggest more ways to represent the sequence as bits?
     
  7. Jan 2, 2007 #6
    I've found this site: http://www.bioinfo.de/isb/2006/06/0024/ [Broken]
    They are talking about a program that generates random DNA sequences,
    and they mention some tests that they have tried.

    This is a quote from the site:

     
    Last edited by a moderator: May 2, 2017
  8. Jan 2, 2007 #7

    EnumaElish (Science Advisor, Homework Helper)

    These all seem to be valid procedures for determining randomness. And yes, my understanding is they are converting everything into 8-bit ASCII, which makes it eminently suitable for the "runs test" that I described previously. Generally, see:

    http://en.wikipedia.org/wiki/Algorithmic_information_theory
    http://en.wikipedia.org/wiki/Algorithmically_random_sequence
    http://en.wikipedia.org/wiki/Normal_number

    Two caveats:
    1. The tests you've referenced may not be statistically independent of one another.
    2. The term "test" as used in the RANDNA description pages may or may not be synonymous with "test" in the statistical sense. E.g., take the "pi" test (which uses the property that almost all real numbers are normal numbers [infinite random sequences]): it is not clear which statistical test they are using to compare the computed pi against the true value of pi.
     
    Last edited: Jan 2, 2007
  9. Jan 2, 2007 #8
    So if converting each character to 8-bit ASCII is fine, is converting the 4 DNA nucleotides as
    A=00, C=01, G=10, T=11 legitimate as well?
    I'm asking whether a 2-bit representation can affect tests like the runs test, the serial test...
     
  10. Jan 2, 2007 #9

    EnumaElish (Science Advisor, Homework Helper)

    I don't see why 8 bits are necessary. Two bits will work as well. You can even use one bit to test for "up" vs. "down."
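    For illustration, a tiny sketch of the 2-bit coding in Python (the resulting 0/1 list can be fed straight into a runs test):

```python
seq = "GATTACA"

# 2-bit encoding A=00, C=01, G=10, T=11, as proposed above.
code2 = {"A": "00", "C": "01", "G": "10", "T": "11"}
bits = [int(b) for ch in seq for b in code2[ch]]
print(bits)  # GATTACA -> 10 00 11 11 00 01 00
```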
     
  11. Jan 3, 2007 #10
    I am afraid that the wrong bit representation will result in a lossy code and affect the randomness.

    i.e. sequence 1: AC = "0001" and sequence 2: AT = "0011" should have the same randomness, but applying a test to them will return different results.
    Am I right?
     
  12. Jan 3, 2007 #11

    EnumaElish (Science Advisor, Homework Helper)

    My guess is they are using 8 bits to take advantage of existing (pre-programmed) "data mining" (or data analysis) software procedures. Otherwise, why stop at 8? Why not use a 16-, 32-, 64-, or 256-bit representation? In my humble opinion, the choice of bits is itself an arbitrary rule, so it shouldn't matter. But, to be "safe," you may code your data in several alternative ways (e.g. 2-bit vs. 8-bit), then apply the same test. If the DNA segment you are using is sufficiently long, my guess is that the coding will not make a qualitative difference. Actually, somewhere there may be a definition of "necessary and sufficient bit representation": representing 4 categories in 8 bits is a waste of space, whereas 2 bits are necessary and sufficient.
     
  13. Jan 3, 2007 #12
    My main concern at this point is creating a lossless bit representation.
    Anyway, I'll try the tests using 2 bits.

    Meanwhile, are there more interesting (and good) tests that you can recommend?
     
  14. Jan 3, 2007 #13

    EnumaElish (Science Advisor, Homework Helper)

    Actually a runs test will produce the same result for both (2 runs in each -- you should check this). But AA = "0000" (1 run) and AG = "0010" (3 runs), so I see your point. In 8-bit ASCII, A = "01000001", C = "01000011" and T = "01010100", so AC = "0100000101000011" and AT = "0100000101010100"; the runs test will return 8 runs for the first and 11 for the second.
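    A quick Python check of these counts, using the true 8-bit ASCII codes (A = 01000001, C = 01000011, G = 01000111, T = 01010100):

```python
def count_runs(bits):
    """Number of maximal blocks of identical characters in a 0/1 string."""
    return 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)

code2 = {"A": "00", "C": "01", "G": "10", "T": "11"}
ascii8 = {ch: format(ord(ch), "08b") for ch in "ACGT"}  # true ASCII codes

for pair in ("AC", "AT", "AA", "AG"):
    b2 = "".join(code2[ch] for ch in pair)
    b8 = "".join(ascii8[ch] for ch in pair)
    print(pair, "2-bit:", b2, "->", count_runs(b2), "runs;",
          "8-bit:", b8, "->", count_runs(b8), "runs")
```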
     
    Last edited: Jan 3, 2007
  15. Jan 3, 2007 #14

    EnumaElish (Science Advisor, Homework Helper)

    You could calculate the regression equation x_t = a + b x_{t-1} + u_t, where a and b are the regression coefficients and u is a stochastic disturbance with expected value 0 (E[u] = 0), and the subscript t denotes the t'th place (bit) in the sequence (e.g. x = 0000101010111...). Your dependent variable x_t is binary, so technically you should use a limited dependent variable model such as probit or logit, although the linear model is always a good starting point. If you cannot reject the null hypothesis b = 0, then you can conclude that x_t is linearly uncorrelated with x_{t-1} -- which is similar to ENT's correlation test.

    Generalizing, you can run the regression x_t = a + b_1 x_{t-1} + ... + b_k x_{t-k} + u_t (ideally using one of the limited-dependent-variable techniques), then execute the joint test b_1 = ... = b_k = 0. This is a generalization of ENT's correlation test, in effect.

    You can also use, for example, a multinomial logit regression package to calculate the statistical relation between Z_t coded as, say, {A,C,G,T} = {1,2,3,4} and Z_{t-j} coded similarly (for j = 1, ..., k); then you don't have to convert letters to bits.
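    Here is a rough Python sketch of both versions (linear and logit), assuming the statsmodels package; the random 0/1 vector merely stands in for a bit-coded DNA sequence:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=500)  # stand-in for the bit-coded sequence

k = 5  # number of lags
y = x[k:]                         # x_t
X = sm.add_constant(np.column_stack([x[k - j:-j] for j in range(1, k + 1)]))

# Linear starting point: x_t = a + b_1 x_{t-1} + ... + b_k x_{t-k} + u_t,
# then the joint test b_1 = ... = b_k = 0 (the zero column skips the intercept).
ols = sm.OLS(y, X).fit()
print(ols.f_test(np.hstack([np.zeros((k, 1)), np.eye(k)])))

# "Technically correct" limited-dependent-variable version:
logit = sm.Logit(y, X).fit()
print(logit.summary())
```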
     
    Last edited: Jan 4, 2007
  16. Jan 3, 2007 #15

    EnumaElish (Science Advisor, Homework Helper)

    Small technical edit to previous post:
    The index for Z (data coded as letters) is different from the index for x (data coded as bits).

    A further idea is to make use of the well-known normality tests via a probability theorem which says that if R is a random variable uniformly distributed on [0, 1] and F is a probability distribution function, then the random variable Y = F^{-1}(R) is distributed (has distribution) F. In effect, you can use a DNA sequence as a uniform pseudorandom number generator by applying to it the inverse of the normal distribution, then test whether the resulting data (the Y's) are indeed normal. Alternative approaches to generating Y are described here.

    Again, since your uniform distribution is discrete, you may have to use a piecewise randomization algorithm that would enable you to expand from the 4 DNA elements to the [0, 1] continuum. For example, you could assign A = 1/8, C = 3/8, G = 5/8, T = 7/8. When you observe an A, you use software (e.g. Excel) to generate a random number (R) between 0 and 2/8. For a C, generate a random variable between 2/8 and 4/8; for G, between 4/8 and 6/8; for T, between 6/8 and 1. Then apply the inverse-F transformation, then test the new random variable Y for normality.

    Even better (but more time consuming) is to design a fractal structure (interval tree) whereby you map single letters into the 4 equal subintervals of [0, 1], then pairs of letters into the 16 equal sub-subintervals, and so on, in a way that is graphically reminiscent of constructing a Cantor set (except in your case you will not be deleting any of the subintervals, but further dividing each subinterval into smaller and smaller segments). By dividing the [0, 1] interval into successively smaller subintervals, you can "fill in" the [0, 1] continuum to an arbitrary density by using progressively expanding blocks of letters in a DNA sequence, then run the inverse-F normality test.
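    A minimal Python sketch of this piecewise randomization, assuming SciPy (the random letters stand in for real DNA, and the Shapiro-Wilk test stands in for whichever normality test you prefer):

```python
import numpy as np
from scipy.stats import norm, shapiro

rng = np.random.default_rng(1)
seq = "".join(rng.choice(list("ACGT"), size=300))  # stand-in DNA sequence

# Each letter owns a quarter of [0, 1]; draw a uniform point inside its
# quarter (A -> [0, 1/4), C -> [1/4, 1/2), G -> [1/2, 3/4), T -> [3/4, 1)).
idx = {"A": 0, "C": 1, "G": 2, "T": 3}
u = np.array([(idx[ch] + rng.random()) / 4.0 for ch in seq])

# Inverse-F transform with F = standard normal: if u is uniform on [0, 1],
# then y = F^{-1}(u) should be normally distributed.
y = norm.ppf(u)

stat, p = shapiro(y)  # any standard normality test could be used here
print("Shapiro-Wilk:", stat, "p-value:", p)
```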
     
    Last edited: Jan 4, 2007
  17. Jan 3, 2007 #16

    EnumaElish (Science Advisor, Homework Helper)

    Another idea is to use the "equality of distributions" (e.o.d.) tests (the runs test [a.k.a. the Wald-Wolfowitz test], the median test, the rank-sum test [a.k.a. the Mann-Whitney U test]) (Mood, Graybill, and Boes, Introduction to the Theory of Statistics, 3rd ed.) by testing the hypothesis that the distribution of a DNA sequence is equal to a uniform distribution produced by a random number generator. First, you need to generate a random sequence of 0's and 1's (or alternatively of A's, C's, G's and T's) using a standard random number generator. This will be your benchmark sequence. Then you can use any of the e.o.d. tests to test the hypothesis that the distribution of an actual DNA sequence is not different from the benchmark distribution. The advantage of this approach is that as long as you can find a way to generate random letters A, C, G, T, you do not need to convert your DNA into bits (although you may have to rank them, e.g. A=1, C=2, G=3, T=4).
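    A minimal sketch of this benchmark comparison in Python, assuming SciPy; note that with only 4 distinct values there are many ties, for which SciPy's Mann-Whitney implementation applies a correction:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

rank = {"A": 1, "C": 2, "G": 3, "T": 4}
dna = "GATTACA" * 40                   # stand-in for an actual DNA sequence
sample = [rank[ch] for ch in dna]

# Benchmark: uniform random "letters" from a standard generator.
benchmark = rng.integers(1, 5, size=len(sample))

stat, p = mannwhitneyu(sample, benchmark)  # H0: same distribution
print("U:", stat, "p-value:", p)
```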
     
    Last edited: Jan 4, 2007
  18. Jan 5, 2007 #17

    EnumaElish (Science Advisor, Homework Helper)

    Last edited by a moderator: May 2, 2017
  19. Jan 6, 2007 #18
    Thank you.
     
  20. Jan 7, 2007 #19

    EnumaElish (Science Advisor, Homework Helper)

    Any DNA sequence is composed of four characters, so it can be thought of as a sequence of base-4 numbers. Let A=0, C=1, G=2, T=3. Then GATTAC = 203301. In base 10, this is equal to 2289: http://members.aol.com/SciRealm/SciCalculator.html [Broken] But you could also express this as {GAT, TAC} = {203, 301} = {35, 49} or as {GA, TT, AC} = {20, 33, 01} = {8, 15, 1}.
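    In Python, this conversion takes a few lines (reproducing the numbers above):

```python
digit = {"A": 0, "C": 1, "G": 2, "T": 3}

def block_to_int(block):
    """Read a DNA block as a base-4 number, e.g. "GATTAC" -> 2289."""
    value = 0
    for ch in block:
        value = value * 4 + digit[ch]
    return value

print(block_to_int("GATTAC"))                        # 2289
print([block_to_int(b) for b in ("GAT", "TAC")])     # [35, 49]
print([block_to_int(b) for b in ("GA", "TT", "AC")]) # [8, 15, 1]
```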

    I am now thinking that the way I have advised to construct the "inverse-F" normality test may introduce randomness to a potentially nonrandom sequence, which will bias the test result toward "false randomness."
     
    Last edited by a moderator: May 2, 2017
  21. Jan 7, 2007 #20
    Fine, but this kind of representation (base-4 or base-10) can help me with only specific tests.

    Still, I don't know what I should choose for my problem from your great suggestions and the other tests I've read about.
     
  22. Jan 7, 2007 #21
    p.s.
    The Diehard suite that you mentioned accepts binary sequences as input as well.
     
  23. Jan 7, 2007 #22

    EnumaElish (Science Advisor, Homework Helper)

    Since you prefer a non-binary representation, you should definitely apply the chi-square test.

    Then go back to post #16 above and see which of the tests I mentioned there require binary coding, and which do not. You can use those that do not require a binary coding, for example the Wilcoxon rank-sum test.

    You could use a regression model to estimate the j'th order autocorrelations and test that each is zero (individually and jointly) using the "base 4" coding. To repeat, you can calculate the statistical relation between Z_s coded as, say, {A,C,G,T} = {0,1,2,3} and Z_{s-j} coded similarly (for j = 1, ..., m): Z_s = a + b_1 Z_{s-1} + ... + b_m Z_{s-m} + u_s, then test b_1 = ... = b_m = 0. You should start with linear regression (which works best with a continuous dependent [left-hand-side] variable); there is ready-made statistical software specifically written for detecting j'th order autocorrelation, which is what you will be estimating. Later, you can use a multinomial logit package for "technically correct" estimation (in MNL, the dependent variable is multi-valued and discrete).
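    A sketch of the multinomial version in Python, assuming the statsmodels package (its MNLogit class is one such implementation; the random sequence is a placeholder):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
z = rng.integers(0, 4, size=400)  # DNA coded as {A,C,G,T} = {0,1,2,3}

m = 3  # number of lags
y = z[m:]
X = sm.add_constant(np.column_stack([z[m - j:-j] for j in range(1, m + 1)]))

# Multinomial logit of Z_s on its first m lags; the hypothesis of "no
# autocorrelation" corresponds to all lag coefficients being zero.
mnl = sm.MNLogit(y, X).fit()
print(mnl.summary())
```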
     
    Last edited: Jan 8, 2007
  24. Jan 8, 2007 #23

    EnumaElish (Science Advisor, Homework Helper)

    Also, I think you can use the "inverse-F" normality tests, or a uniformity test.

    The order in which you group the letters to form base-4 numbers is not important. In the previous example, {v1, v2, v3} = {GA, TT, AC} = {20_4, 33_4, 01_4} and {w1, w2} = {GAT, TAC} = {203_4, 301_4}. In base 4, w1 = v1 * 10 + Integer(v2/10) and w2 = Decimal(v2/10) * 10 * 100 + v3 (all arithmetic in base 4, where "10" is four). So each w is a deterministic transformation of the v's. If the v's are random, so are the w's, and vice versa. Which implies that if the v's are nonrandom, then the w's are nonrandom, too.

    Let's say you decided on a length-6 string to represent a base-4 number, e.g. GATTAC. The largest base-4 number with 6 digits is 333333, which is 4095 in base 10 (4095 = 4^6 - 1, which isn't a coincidence). That means you can divide the [0, 1] interval into 4096 = 2^12 subintervals, each 1/4096 = 0.000244140625 long. For the string "GATTAC" = 203301_4 = 2289, you'd mark 2289/4096 = 0.558837890625 (check this), near the midpoint of the [0, 1] interval. To use the normality tests, you should apply the inverse normal transformation and solve P(normal random variable < y) = 0.558837890625 for y = 0.148024. That is, StandardNormal(y) = (1 + Erf(y/sqrt(2)))/2 = 0.558837890625 has root y = 0.148024.

    If the DNA is 240 letters long, then you will have 240/6 = 40 base-4 numbers Z_1, ..., Z_40. Then:
    1. convert each Z_i from base 4 into base 10,
    2. convert these into uniform random variables U = (U_1, ..., U_40) = (Z_1/4096, ..., Z_40/4096),
    3. transform U into normal random variables Y = (Y_1, ..., Y_40) by solving StandardNormal(Y) = U, i.e. Y = StandardNormal^{-1}(U),
    4. apply these normality tests to Y.

    You can also directly test for uniformity of U using this test.
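    A Python sketch of the whole pipeline, assuming SciPy. One small departure: the uniforms are computed as (Z + 0.5)/4096 rather than Z/4096, so that a block of all A's does not map to u = 0, where the inverse normal is minus infinity; the Kolmogorov-Smirnov test stands in for whichever normality test you prefer:

```python
import numpy as np
from scipy.stats import norm, kstest

digit = {"A": 0, "C": 1, "G": 2, "T": 3}

def normality_pipeline(seq, block_len=6):
    """Base-4 blocks -> uniforms on (0, 1) -> inverse normal -> K-S test."""
    values = []
    for i in range(len(seq) // block_len):
        block = seq[i * block_len:(i + 1) * block_len]
        v = 0
        for ch in block:
            v = v * 4 + digit[ch]
        values.append(v)
    u = (np.array(values) + 0.5) / 4.0 ** block_len  # steps 1 and 2
    y = norm.ppf(u)                                  # step 3
    return kstest(y, "norm")                         # step 4

rng = np.random.default_rng(3)
seq = "".join(rng.choice(list("ACGT"), size=240))    # 240-letter stand-in
print(normality_pipeline(seq))
```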
     
    Last edited: Jan 8, 2007
  25. Jan 9, 2007 #24
    Well, today I talked with an expert in this field (a professor of bioinformatics).
    First he told me that it's not a good idea to convert DNA letters to a binary representation, because tests like the runs test won't work...

    He recommended using the following tests: frequency, frequency within blocks, runs test, coupon test, some template-based tests, and autocorrelation.

    I would appreciate it if you could explain how to apply the last method and how to normalize it.

    p.s. Keep in mind that our "test subject" is relatively short, about 300-400 letters.
     
  26. Jan 9, 2007 #25

    EnumaElish (Science Advisor, Homework Helper)

    Most "statistical" software (including Excel) have a correlation function that returns the Pearson partial correlation coefficient between two vectors of data. Suppose you organize your DNA sequence as a column of numbers on an Excel spreadsheet:

    2
    0
    3
    3
    0
    1
    etc.

    Let's say you entered the entire DNA sequence in column A, rows 1 through 240 (range A1:A240 in Excel notation).

    Then you add another column. On row r of the second column (column B), copy (point to) the value on row r-1 of the first column (column A). Thus the first row on column B will be blank (not zero). The second value on column B will be equal to the first value on column A. The third value on column B will be equal to the second value on column A. And so on. The easiest way to do this is to go to the second row on Column B (cell B2) and write the expression:

    =A1

    then hit the "Enter" key. Then copy cell B2 and paste it over the cell range B3 through B240.

    Then use the CORREL function like this: Go to a new blank cell and write the following expression:

    =CORREL(A2:A240,B2:B240)

    and hit "enter". This will produce the correlation coefficient with the DNA sequence and its once-lagged value, which is technically known as the first-order autocorrelation coefficient.

    A shortcut is to have a single column (column A), then use:

    =CORREL(A2:A240,A1:A239)

    You can use this setup to calculate any arbitrary order of autocorrelation. For the 10th-order AC, the shortcut is

    =CORREL(A11:A240,A1:A230)

    There are two downsides to the Excel approach:
    1. The CORREL function doesn't test for statistical significance,
    2. You can only calculate pairwise correlations (one lag at a time), rather than an autocorrelation structure (a number of lags jointly, a.k.a. "distributed lags").

    Excel's Analysis ToolPak Add-In will let you calculate pairwise correlations among an arbitrary number of variables (columns or ranges), but each of the two downsides would still apply. (Note: to use the Analysis ToolPak Add-In, you have to enter each lag as a separate column -- you cannot use the shortcut described above.)

    You can use a "serious" statistical software like SAS to compute partial correlation coefficients and test the statistical significance of each (individually). But downside #2 would still apply. The most efficient way to eliminate both downsides is to use multiple regression analysis, which you can do in Excel using the Analysis ToolPak Add-In.

    See #22 above for an explanation of how to estimate the first through the m'th order autocorrelations jointly using the multiple regression technique. Again, the main advantage of this approach is that it lets you calculate and test a number of autocorrelations (lags) jointly, rather than individually.

    To estimate the first-order AC, all you need is to estimate the regression equation Z_s = a + b_1 Z_{s-1} + u_s. To estimate the first- and second-order ACs jointly, you need to estimate Z_s = a + b_1 Z_{s-1} + b_2 Z_{s-2} + u_s. Etc. The data structure should be "column A, column B, etc.," where column A holds the original sequence and each subsequent column holds the once-lagged value of the previous column. Thus, column K will hold the 10-times-lagged sequence.

    To test each lag individually, look at the t-statistic of the b parameter corresponding to the lag (e.g. the t-stat of b_j for the j'th lag). To test all lags jointly, look at the "regression F" statistic in the ANOVA table of the Excel output. If you have a single independent (right-hand-side) variable (as the x in y = a + b x), then the regression F = (t-stat of b)^2.

    See the attached Excel printout (printed to PDF). A random "DNA" sequence was regressed on distributed lags 1 through 10; none of the coefficients is statistically significant (no |t-stat| above roughly 2, no P-value below 0.05), nor are they jointly significant ("Significance F" is nowhere near 0.05). (I am using a 5% probability threshold for statistical significance.) Page 1 is the regression output; pages 2-5 are the data (column A is the dependent variable or the "Y variable"; columns B-K are the independent variables or the "X variables". Rows 1-11 are not included in the regression.)
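    Outside Excel, the same lag correlations take a few lines; here is a small Python sketch mirroring the =CORREL shortcut above:

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.integers(0, 4, size=240)  # the "column A" sequence, coded 0..3

def lag_corr(x, j):
    """j'th-order autocorrelation, like =CORREL(A{1+j}:A240, A1:A{240-j})."""
    return np.corrcoef(x[j:], x[:-j])[0, 1]

for j in (1, 2, 10):
    print(f"lag {j}: r = {lag_corr(z, j):+.4f}")
```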
     


    Last edited: Jan 10, 2007