Pairwise correlation of signals

Summary
The discussion focuses on analyzing the relationship between a positive output signal and multiple input signals through pairwise correlation. Initially, Pearson's correlation was used, but its limitations in capturing non-linear relationships led to the exploration of distance correlation, which proved more effective. However, there are concerns about combining distance correlation with Pearson's coefficient to determine the sign of the correlation, as it may yield misleading results due to random fluctuations. Ultimately, the consensus leans towards relying solely on distance correlation for its robustness in this context. This approach aims to accurately identify how variations in input signals influence the output signal.
serbring
Hi all,

On a vehicle I recorded an output signal that is always positive; its variability is driven by 30 input signals, though not all of them act at the same instant. Just by checking the pairwise correlation between signals over a time period, I can detect which input signals drive the variability of the output signal. At first I tried Pearson's correlation, but the relationship may be non-linear, and the two signals do not always change monotonically, so that coefficient is not very helpful. Then I tried distance correlation, and it works very well, but it misses the sign of the correlation, which is really important to me. So what about using the distance correlation value together with the sign of Pearson's coefficient to determine the sign of the correlation (i.e. negative or positive)? That way I might be able to tell whether an increase or a decrease of an input signal leads to an increase of the output signal. Any comment is appreciated.
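To make the contrast concrete, here is a minimal sketch (assuming numpy is available; the `distance_correlation` helper is my own illustrative implementation of the Székely–Rizzo definition, not a function from the thread). For a symmetric, non-monotonic dependence such as y = x², Pearson's coefficient is essentially zero while the distance correlation is clearly non-zero:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation (Szekely-Rizzo) between two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Pairwise absolute-distance matrices
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each matrix: subtract row and column means, add grand mean
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()          # squared distance covariance
    dvar_x = (A * A).mean()         # squared distance variances
    dvar_y = (B * B).mean()
    denom = np.sqrt(dvar_x * dvar_y)
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

x = np.linspace(-1, 1, 201)
y = x ** 2                          # non-monotonic dependence on x
pearson = np.corrcoef(x, y)[0, 1]   # ~0 by symmetry
dcor = distance_correlation(x, y)   # clearly positive
```

Note that the distance correlation is non-negative by construction, which is exactly why it carries no sign information on its own.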

Thanks
 
I don't think that gives a useful result. Consider cases where the distance correlation is large but the classical correlation is close to zero: the sign gets determined by random fluctuation, and you end up with either a large positive or a large negative value just by chance.

You can consider both values separately, if that helps, but mixing them that way leads to confusing results.
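The sign-instability point can be illustrated with a quick simulation (a sketch assuming numpy; the setup with y = x² plus noise is my own example, not from the thread). The dependence is strong but has no inherent direction, so the sign of the sample Pearson coefficient flips essentially at random across resamples:

```python
import numpy as np

rng = np.random.default_rng(0)
signs = []
for _ in range(200):
    x = rng.uniform(-1, 1, 100)
    y = x ** 2 + rng.normal(0, 0.05, 100)  # strong but direction-free dependence
    signs.append(np.sign(np.corrcoef(x, y)[0, 1]))
signs = np.array(signs)
frac_positive = (signs > 0).mean()  # hovers near 0.5: the sign is pure noise
```

Attaching such a coin-flip sign to a large distance correlation would produce a large "positive" or "negative" value purely by chance, which is the confusing result warned about above.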
 
Hi,

thanks for your reply, you're right. I'll just use the distance correlation approach; I have found it to be much more robust than Pearson's coefficient in my case.
 
