Recent content by divB

  1. Exponential integral over removable singularity gives wrong result

    Hi Incnis, Yes, j is the imaginary unit. And yes, it involves some form of the sinc function. To be precise, I want to integrate parts of the sinc function (this is where my formula came from in the first place). I still have no idea why or where my mismatch comes from :( :( divB
  2. Weighted Least Squares for coefficients

    And how could I make it care?
  3. Exponential integral over removable singularity gives wrong result

    Hi, I am struggling for some time to solve the following integral: $$ \int_{-n}^{N-n} \left( \frac{e^{-j\pi(\alpha-1)\tau}}{\tau} - \frac{e^{-j\pi(\alpha+1)\tau}}{\tau} \right) d\tau $$ N is a positive integer, n is an integer, \alpha can be a negative or positive rational number. I want to...
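    The integrand can be combined: e^{-j\pi(\alpha-1)\tau} - e^{-j\pi(\alpha+1)\tau} = e^{-j\pi\alpha\tau}(e^{j\pi\tau} - e^{-j\pi\tau}) = 2j\sin(\pi\tau)\,e^{-j\pi\alpha\tau}, so the integrand is 2\pi j\, e^{-j\pi\alpha\tau}\,\mathrm{sinc}(\tau) with \mathrm{sinc}(\tau) = \sin(\pi\tau)/(\pi\tau), and the singularity at \tau = 0 is removable. A minimal numerical sketch in Python/NumPy (the values of N, n, \alpha are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical example values; the post leaves N, n and alpha general.
N, n, alpha = 4, 1, 0.3

# Combined integrand: (e^{-j*pi*(a-1)t} - e^{-j*pi*(a+1)t})/t
#                   = 2*pi*j * e^{-j*pi*a*t} * sinc(t),
# where np.sinc(t) = sin(pi*t)/(pi*t) is finite at t = 0.
def integrand(tau):
    return 2j * np.pi * np.exp(-1j * np.pi * alpha * tau) * np.sinc(tau)

# Simple trapezoidal quadrature over [-n, N-n]; the regularized form
# needs no special handling at tau = 0.
tau = np.linspace(-n, N - n, 200001)
vals = integrand(tau)
dt = tau[1] - tau[0]
result = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
print(result)
```

Comparing this against the original two-term form away from \tau = 0 is one way to localize the mismatch.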
  4. Weighted Least Squares for coefficients

    Am I? I am not sure ... at least I do not see how. Can you provide me with an idea how this relates to the described setup and how to obtain the correlation matrix then? The one thing I know is that for my application, some coefficients are more important for me - I do not know how they are...
  5. Weighted Least Squares for coefficients

    Ok, it is a Volterra series. So it is linear in its coefficients but a non-linear system. But I think it does not matter. Anyway, since for a practical system the coefficients decay very fast, lower-order terms dominate the total error. But if you are interested in certain non-linear behavior...
  6. Weighted Least Squares for coefficients

    But there are cases where it makes sense. Not everything is a linear system. In my case I clearly see that perturbing coefficients with the same noise gives different results, depending on which I perturb. So some are more important than others. It's difficult to explain but I tried to explain...
  7. Weighted Least Squares for coefficients

    Hi, I have an ordinary least squares setup y = Ac where A is an NxM (N>>M) matrix, c the unknown coefficients and y the measurements. Now WEIGHTED least squares allows weighting the MEASUREMENTS if, for example, some measurements are more important or have lower variance. However...
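For reference, a minimal sketch of the standard setup in Python/NumPy (sizes, weights and noise level are made up for illustration). The diagonal weight matrix W in ordinary WLS acts on the measurements/residuals, c = (A^T W A)^{-1} A^T W y; weighting the coefficients themselves is a different problem, closer to regularization:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 5                       # tall system, N >> M
A = rng.standard_normal((N, M))
c_true = rng.standard_normal(M)
y = A @ c_true + 0.1 * rng.standard_normal(N)

# Ordinary least squares: minimizes ||y - A c||^2
c_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Weighted least squares: W weights the MEASUREMENTS (residuals),
# c = argmin (y - A c)^T W (y - A c)  =>  solve (A^T W A) c = A^T W y
w = rng.uniform(0.5, 2.0, N)        # hypothetical per-measurement weights
AtW = A.T * w                       # same as A.T @ np.diag(w), but cheaper
c_wls = np.linalg.solve(AtW @ A, AtW @ y)
print(c_ols, c_wls)
```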
  8. Positive, negative, complex determinants

    You misinterpreted my sentence, see my first reply. I found the issue already: Indeed, it must always be positive. The matrix I am using is A^#*A, which is positive definite by definition. However, the MATLAB function det() uses a very sub-optimal algorithm based on LU decomposition. I get a...
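A sketch of that point, in Python/NumPy rather than MATLAB and reading A^# as the conjugate transpose (an assumption; the matrix here is random for illustration): det(A^H A) = |det(A)|^2 ≥ 0, so any negative or complex determinant is numerical error, and a Cholesky-based log-determinant sidesteps the LU sign/phase issue:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))

# Assumed reading: A^# is the conjugate transpose, so G = A^H A is
# Hermitian positive definite and det(G) = |det(A)|^2 >= 0.
G = A.conj().T @ A

# An LU-based det(G) can come back with a tiny spurious imaginary part.
# Cholesky yields a guaranteed-real log-determinant instead:
L = np.linalg.cholesky(G)
logdet = 2.0 * np.log(np.diag(L).real).sum()   # log det(G)
print(logdet)
```

Working with log det also avoids overflow/underflow for larger matrices.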
  9. Positive, negative, complex determinants

    This is probably a bad reference because the first sentence is already "A determinant is a real number associated with every square matrix" - obviously wrong. Anyway, that sentence was not at all important and you misinterpreted it: I meant that I am familiar with the fact, i.e., so far I had only...
  10. Positive, negative, complex determinants

    Hi, I have a rather trivial question but Google did not really help me. So far I had always assumed that the determinant of a square matrix is positive. But it is not: when I randomly execute det(randn(12)) in MATLAB, I get a negative determinant every couple of trials...
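That observation is expected: swapping two rows flips the determinant's sign but leaves the distribution of a Gaussian random matrix unchanged, so a negative determinant occurs with probability exactly 1/2. A quick check, in Python/NumPy instead of MATLAB:

```python
import numpy as np

rng = np.random.default_rng(2)

# Count negative determinants among random 12x12 Gaussian matrices.
# By the row-swap symmetry argument, P(det < 0) = 1/2 exactly.
dets = [np.linalg.det(rng.standard_normal((12, 12))) for _ in range(1000)]
neg = sum(d < 0 for d in dets)
print(neg, "of", len(dets), "determinants were negative")
```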
  11. Derivative of p-fold convolution

    Ok I think I got it. Is this correct? (I was not able to do simple differentiation) \frac{\partial}{\partial Y(\omega)} Y(\omega)*Y(\omega) = \frac{\partial}{\partial Y(\omega)} \int_{-\infty}^{\infty} Y(\tau) Y(\omega-\tau) d\tau = \\ \int_{-\infty}^{\infty} Y(\tau) \frac{\partial...
  12. Derivative of p-fold convolution

    Hi, What is the derivative of a p-fold convolution? \frac{\partial}{\partial Y(\omega) } \underbrace{Y(\omega) * \dots * Y(\omega)}_{p-\text{times}} EDIT: I have two contradicting approaches - I guess both are wrong ;-) As a simple case, take the 2-fold convolution. FIRST approach...
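One way to make the question well-posed is to read it as a functional derivative with respect to Y(\sigma) rather than an ordinary derivative; a sketch for the 2-fold case, using \delta for the Dirac delta:

```latex
\frac{\delta}{\delta Y(\sigma)} \, (Y * Y)(\omega)
  = \frac{\delta}{\delta Y(\sigma)} \int_{-\infty}^{\infty} Y(\tau)\, Y(\omega-\tau)\, d\tau
  = \int_{-\infty}^{\infty} \left[ \delta(\tau-\sigma)\, Y(\omega-\tau)
      + Y(\tau)\, \delta(\omega-\tau-\sigma) \right] d\tau
  = 2\, Y(\omega-\sigma)
```

By the same product-rule argument, the p-fold case gives \frac{\delta}{\delta Y(\sigma)} Y^{*p}(\omega) = p\, Y^{*(p-1)}(\omega-\sigma), in direct analogy with \frac{d}{dx} x^p = p\, x^{p-1}.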
  13. (empirical) relation between MSE and condition number

    Hi, It is a well-known fact that in an inverse linear problem a low condition number means low noise amplification and therefore a smaller error. So I wanted to test this: I draw random (skinny) matrices A, calculate y=A*c where c is a known coefficient vector, add some noise and...
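A minimal version of that experiment in Python/NumPy (matrix sizes, noise level and trial count are arbitrary choices for illustration): draw random skinny matrices, solve the noisy least-squares problem, and correlate the condition number with the estimation error:

```python
import numpy as np

rng = np.random.default_rng(3)
M, trials, sigma = 5, 500, 0.01
c = rng.standard_normal(M)

conds, errs = [], []
for _ in range(trials):
    A = rng.standard_normal((20, M))            # random skinny matrix
    y = A @ c + sigma * rng.standard_normal(20) # noisy measurements
    c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    conds.append(np.linalg.cond(A))
    errs.append(np.linalg.norm(c_hat - c))

# Correlation between log condition number and log estimation error;
# a positive value reflects the expected noise-amplification effect.
r = np.corrcoef(np.log(conds), np.log(errs))[0, 1]
print(r)
```

The correlation is noisy per trial because the error also depends on the particular noise realization, which may explain a weak empirical relation.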
  14. Row selection of matrix and the condition number

    Thank you. And does this also hold for overdetermined systems? Is it better to have the columns or the rows as orthogonal as possible in an overdetermined system? Or, asked differently: suppose I want to give a good example of a stable 10x5 equation system. How would I choose the matrix? Of...
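For the concrete 10x5 case, one sketch (Python/NumPy, sizes taken from the question) is to use the Q factor of a thin QR factorization: the columns come out orthonormal, every singular value is 1, and cond(A) = 1, the best possible value for an overdetermined system:

```python
import numpy as np

rng = np.random.default_rng(4)

# A well-conditioned 10x5 system: orthonormal COLUMNS via the thin QR
# factorization of a random matrix. All singular values equal 1.
Q, _ = np.linalg.qr(rng.standard_normal((10, 5)))
print(np.linalg.cond(Q))   # 1.0 up to roundoff
```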
  15. Row selection of matrix and the condition number

    May I bump this thread? In particular, is this a correct statement? First of all, shouldn't this mean columns? What is the interpretation of rows and columns in this sense? Second, it sounds a bit counterintuitive: shouldn't only the first n measurements (n being the #columns=unknowns)...