Recent content by pawelch

  1. [KL-divergence] comparison of pdf's

    I always have trouble putting this into the right words. What I meant was: since we know A_{est} and A_{err}, we want to find out whether an observed sequence S_u has been generated using A_{est} or A_{err}. We can put it this way: P(S_u,S_{observable}|A_{err},\pi) is a marginal pmf of an...
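Deciding whether S_u was generated by A_{est} or A_{err} can be sketched as a log-likelihood-ratio test over the chain's transitions. A minimal Python sketch, where the 2-state matrices and the sequence are invented placeholders rather than values from the thread:

```python
import numpy as np

def sequence_log_likelihood(seq, A, pi):
    """Log-probability of a state sequence under transition matrix A
    with initial distribution pi (states are observed directly)."""
    logp = np.log(pi[seq[0]])
    for s, t in zip(seq[:-1], seq[1:]):
        logp += np.log(A[s, t])
    return logp

# Hypothetical 2-state example: decide whether S_u came from A_est or A_err.
A_est = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
A_err = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
pi = np.array([2/3, 1/3])  # stationary vector of A_est (illustrative)

S_u = [0, 0, 0, 0, 1, 1, 1, 0, 0, 0]
llr = (sequence_log_likelihood(S_u, A_est, pi)
       - sequence_log_likelihood(S_u, A_err, pi))
print("favors A_est" if llr > 0 else "favors A_err")  # → favors A_est
```

A positive log-likelihood ratio favors A_{est}; longer observed sequences make the decision more reliable.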
  2. [KL-divergence] comparison of pdf's

    Following what you suggested, I tried to translate my problem into an HMM. Suppose that I run agents of type \{S\}_1 (these are the clever ones) on A many times (say 1,000,000), in order to estimate a transition matrix A_{est} from their behavior. Obviously, they do not follow A...
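Estimating A_{est} from many observed runs amounts to counting transitions and normalizing each row. A sketch under the assumption that the agents' states are directly observable (the fully visible case, not yet an HMM); the true matrix and run counts are made up:

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """Maximum-likelihood estimate: A_est[i, j] = count(i -> j) / count(i -> any)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for s, t in zip(seq[:-1], seq[1:]):
            counts[s, t] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no outgoing observations stay all-zero instead of dividing by 0.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Simulate agents that follow a hypothetical true A, then recover A_est.
rng = np.random.default_rng(0)
A_true = np.array([[0.7, 0.3],
                   [0.4, 0.6]])
runs = []
for _ in range(1000):
    s, seq = 0, [0]
    for _ in range(50):
        s = rng.choice(2, p=A_true[s])
        seq.append(s)
    runs.append(seq)

A_est = estimate_transition_matrix(runs, 2)
```

With enough runs, A_est converges to the matrix the agents actually follow, which is exactly what makes the later A_{est}-vs-A_{err} comparison possible.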
  3. [KL-divergence] comparison of pdf's

    I have been told that an HMM might be of use here. However, I find it hard to come up with the emission probabilities b_j(k) for my model. In any case, thank you for your commitment; your remarks were really helpful throughout! And I'll just hang around with the thought that higher education is for the chosen ones :)
  4. [KL-divergence] comparison of pdf's

    You are right here again; I put it in the wrong words. In general, \{S\}_2 are agents who made mistakes and errors, and I would like to see when they make errors (I do not care about the state order, only about the deviation from \pi, because as long as they make errors, I need to fix them). Normally...
  5. [KL-divergence] comparison of pdf's

    Hmm, could you please tell me why that is so? You might think of them as time series that generate states according to A. The only difference between them is the number of occurrences of low-frequency or high-frequency events from \Omega. Because both sequences can generate arbitrary numbers, I...
  6. [KL-divergence] comparison of pdf's

    OK, let me put it this way. Suppose there is a transition matrix A that defines the probability of transitions between states. That matrix has special properties, and in my model, after some simplifications, it is actually an ergodic Markov chain with \pi as its limiting probability vector. Thus in...
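For an ergodic chain, the limiting vector \pi is the left eigenvector of A for eigenvalue 1, normalized to sum to 1. A small sketch with an invented 3-state matrix:

```python
import numpy as np

def limiting_distribution(A):
    """Stationary vector pi with pi A = pi for an ergodic chain,
    taken as the left eigenvector of A for eigenvalue 1."""
    vals, vecs = np.linalg.eig(A.T)          # left eigenvectors of A
    k = np.argmin(np.abs(vals - 1.0))        # pick the eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    return pi / pi.sum()                     # normalize to a probability vector

# Hypothetical 3-state ergodic chain (rows sum to 1).
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
pi = limiting_distribution(A)
print(pi, pi @ A)  # pi A == pi, so pi is the limiting vector
```

For an ergodic chain this \pi is unique and strictly positive, which matters later: it is why treating \pi as the "true" distribution Q in the KL divergence never produces a zero in the denominator.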
  7. [KL-divergence] comparison of pdf's

    Hmm... maybe you are right here; I am not saying you are not, because I am certainly no expert. According to Wikipedia... Does it mean that the expected {\pi} I have calculated from past data is actually Q, and, as you are saying, {\pi}\log\frac{0}{\pi}? If that is the case, then you are...
  8. [KL-divergence] comparison of pdf's

    Maybe I have not defined it well; maybe you are right. However, if you have a look at the definition I posted, there is a \pi that corresponds to some expected frequency, and its components are positive. Also, \pi is our true distribution, so in this sense it is reasonable that we observe...
  9. [KL-divergence] comparison of pdf's

    Hi pmsrw3, thank you for your quick answer. That is what I thought as well. However, we think of P as the "true" distribution, which in my case would be \pi. According to the people from http://www-nlp.stanford.edu/fsnlp/mathfound/fsnlp-slides-kl.pdf, when there is a 0 occurrence, then it is...
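The zero-occurrence problem discussed here (an observed q_i = 0 making a term of D(P||Q) infinite) is commonly handled by additive smoothing of the observed counts. A sketch with made-up vectors; the smoothing constant `eps` is an illustrative choice, not a value from the thread:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """D(P || Q) = sum_i p_i log(p_i / q_i), with additive smoothing on q
    so that a zero observed count does not make the divergence infinite."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float) + eps     # smooth away exact zeros
    p, q = p / p.sum(), q / q.sum()          # renormalize both to sum to 1
    mask = p > 0                             # terms with p_i = 0 contribute 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

pi = np.array([0.5, 0.3, 0.2])        # expected ("true") frequencies P
observed = np.array([48, 33, 0])      # observed counts with a 0 occurrence
print(kl_divergence(pi, observed))    # finite, thanks to the smoothing
```

Without the smoothing, the third term would be 0.2 log(0.2/0); with it, rare events simply contribute a large but finite penalty.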
  10. [KL-divergence] comparison of pdf's

    Hi all, I am trying to devise a mathematical model for a project I am working on. The description is as follows: we have a sample space \Omega=\{w_1,w_2,\cdots, w_N\}, which is very large. Suppose further that we have some assumption about the frequency of occurrence of each w_i, stored in...
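The setup described here can be sketched as a sample space \Omega with an assumed frequency vector, against which empirical frequencies from observed data are compared. The elements and numbers below are illustrative placeholders, not the project's actual data:

```python
import numpy as np

# Sample space Omega and an assumed frequency of occurrence for each w_i.
omega = ["w1", "w2", "w3", "w4"]
pi = np.array([0.4, 0.3, 0.2, 0.1])

# Draw a sample consistent with the assumption and tabulate empirical
# frequencies — the vector that would later be compared against pi.
rng = np.random.default_rng(1)
sample = rng.choice(len(omega), size=10_000, p=pi)
counts = np.bincount(sample, minlength=len(omega))
empirical = counts / counts.sum()
print(dict(zip(omega, empirical.round(3))))
```

With a large sample drawn under the assumption, the empirical vector stays close to \pi; systematic deviations are what the divergence comparison in this thread is meant to detect.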
  11. Game theory, mixed strategies

    Homework Statement: I am trying to study a mixed-strategy phenomenon from the book Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Interaction (Second Edition) by Herbert Gintis. There is an example (the Prisoner's Dilemma) which looks as follows...
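For a 2x2 bimatrix game, an interior mixed-strategy equilibrium follows from the indifference conditions: each player mixes so that the other is indifferent between their two pure strategies. Note the Prisoner's Dilemma itself has no mixed equilibrium (defection strictly dominates), so the sketch below uses matching pennies instead; the solver is a generic illustration, not the book's worked example:

```python
import numpy as np

def mixed_equilibrium_2x2(A, B):
    """Interior mixed equilibrium of a 2x2 bimatrix game (A: row player's
    payoffs, B: column player's payoffs). Assumes such an equilibrium exists,
    i.e. neither player has a strictly dominant pure strategy."""
    # Column player's mix q makes the row player indifferent:
    #   q A[0,0] + (1-q) A[0,1] = q A[1,0] + (1-q) A[1,1]
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Row player's mix p makes the column player indifferent:
    #   p B[0,0] + (1-p) B[1,0] = p B[0,1] + (1-p) B[1,1]
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    return p, q

# Matching pennies (zero-sum): the unique equilibrium mixes 50/50.
A = np.array([[1, -1],
              [-1, 1]])
p, q = mixed_equilibrium_2x2(A, -A)
print(p, q)  # 0.5 0.5
```

The same indifference reasoning is what Gintis's problem-centered examples walk through; for games with dominant strategies (like the Prisoner's Dilemma), the formula's denominator signals the failure by producing a value outside [0, 1] or dividing by zero.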