Performing Statistics on Neuro-electrophysiology Data

AI Thread Summary
The discussion centers on the statistical analysis of long-term potentiation (LTP) data from hippocampal brain slices, specifically regarding the appropriateness of using a T-test to compare two groups at specific time points. Participants highlight that the data may not follow a normal distribution, suggesting that methods like evaluating the rate parameter of exponential distributions could be more suitable. Concerns are raised about the potential misleading nature of the T-test if the underlying processes differ in parameters such as delay times. The importance of clearly defining the analysis goals is emphasized to select the appropriate statistical methods. Overall, the conversation underscores the need for careful consideration of statistical approaches in neuro-electrophysiology research.
minimal
Hi all, I am working with long-term potentiation (LTP) induction in hippocampal brain slices.

I am wondering about how exactly to statistically analyze the data I am presented with.

It looks like http://hmg.oxfordjournals.org/content/19/4/634/F5.small.gif

Where each line of points represents one sample subject (control, and the sample with the target property we're investigating). My question is this: my boss thinks it's acceptable to just pick a set of time points, let's say the 3-3.5 hr window, and compare the two groups over that window with a t-test. I wasn't sure about that, but it seems to be the norm. I was thinking some sort of regression or something, but again I'm not sure. Can anyone shed some light on what sort of tests we might be able to use for this?

To elaborate on the test: we stimulate slices with current and measure their response, and the stronger the returning signal, the more LTP has been achieved. So, for instance, if we're investigating whether a particular gene knockout limits LTP, the knockout might be the lower set of points.
 
minimal said:
... my boss thinks it's acceptable to just pick a set of time points, let's say the 3-3.5 hr, and compare that with the two groups in a T-Test. ... can anyone shed some light on what sort of tests we might be able to use for this?

These are exponential distributions. The usual Z test and the t tests assume normal distributions.

Here you want to evaluate the difference in the rate parameter ##\lambda## of the two distributions. The mean is calculated in the usual way:

$$\bar x = \frac{1}{n}\sum_{i=1}^{n} x_i$$

The estimate of the rate parameter is ##\hat\lambda = 1/\bar x##. You can then calculate the 95% confidence intervals around each estimate of ##\lambda## (using ##Z = 1.96##) and see whether they overlap. If they do not, the difference meets the usual standard of significance.

$$\text{lower CL} = \hat\lambda\left(1 - \frac{1.96}{\sqrt n}\right), \qquad \text{upper CL} = \hat\lambda\left(1 + \frac{1.96}{\sqrt n}\right)$$

where ##n## is the number of data points defining the plot in each of the two sets.
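For what it's worth, this recipe is a few lines of code. Below is a minimal sketch in Python; the "control"/"knockout" data are simulated with made-up rates purely for illustration (real fEPSP data would replace them, and whether an exponential model fits is a separate question):

```python
import math
import random

def exp_rate_ci(samples, z=1.96):
    """Rate estimate and normal-approximation CI for exponential data:
    lambda_hat = 1 / sample mean, CI = lambda_hat * (1 +/- z / sqrt(n))."""
    n = len(samples)
    lam = n / sum(samples)              # 1 / sample mean
    half = z / math.sqrt(n)
    return lam, lam * (1 - half), lam * (1 + half)

# Simulated groups (hypothetical true rates 2.0 and 5.0), n = 100 each.
random.seed(0)
control = [random.expovariate(2.0) for _ in range(100)]
knockout = [random.expovariate(5.0) for _ in range(100)]

lam_c, lo_c, hi_c = exp_rate_ci(control)
lam_k, lo_k, hi_k = exp_rate_ci(knockout)
print(f"control:  {lam_c:.2f}  95% CI [{lo_c:.2f}, {hi_c:.2f}]")
print(f"knockout: {lam_k:.2f}  95% CI [{lo_k:.2f}, {hi_k:.2f}]")
# Non-overlapping intervals would meet the usual standard of significance.
```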
 
minimal said:
Where each line of points represents one sample subject (control, and sample with the target property we're investigating).

I can't tell from that small picture what you mean by "a line of points". Can you explain the format of your data? What are the units of EPSP on the y-axis? voltage? Are we seeing data for two subjects? Does each subject have voltage vs time data?
 
Stephen Tashi said:
I can't tell from that small picture what you mean by "a line of points". Can you explain the format of your data? What are the units of EPSP on the y-axis? voltage? Are we seeing data for two subjects? Does each subject have voltage vs time data?

Sorry, by line of points (when you repeat it, it sounds so professional) - I meant the line that you would use to connect all the points of the one subject (or in this case, brain slice). So yes, we are seeing data for two subjects.
EPSP = excitatory post-synaptic potential. This can be measured as current or voltage.

The way LTP measurements work is that you have a stimulating electrode, a reference electrode (for baseline), and a recording electrode. Say you stimulate the axons of a certain field of neurons (for instance the Schaffer collaterals in the hippocampus). The neurons all project the same way, so it's like hairs all standing up: you're not hitting the roots of a bundle, you're hitting the hairs toward the end of a bundle. Then you measure the response at the recording electrode, which sits in a cell body layer, say CA1 of the hippocampus (this would be like the hair tips you initially electrocuted hitting the roots of someone else's head). The reference electrode measures the collective action potentials produced by your stimulating electrode. The higher the response, the more LTP has taken place.

AFAIK it does not matter whether you measure current or voltage, because they can be converted to each other, so you just label the y-axis something like fEPSP (field excitatory post-synaptic potential - remember, we did it with a bundle of hairs instead of just one), and the fEPSP is going to be the same whether it's current or voltage.
Is that a good explanation? Thanks for your responses!
 
That's an effort, but it does not state the format of the data clearly. I can imagine different formats of voltage measurements for a single subject that involve time.

Does the graph show data for 2 "subjects", or for 100 subjects?

Is the graph for one subject a graph of voltage vs. time? Or is it a graph of the jump between two voltage readings taken almost simultaneously at a point in time? Or is it a graph of voltage versus that voltage's duration in time?
 
As I said in the last response :) You are seeing data from two subjects (1 subject black dots, 1 subject white dots). The graph is the EPSP response vs time. That EPSP response is in voltage. It basically means the strength of the neural field response.
 
minimal said:
As I said in the last response :) You are seeing data from two subjects. The graph is the EPSP response vs time.

OK, then this is my opinion: The task is to compare two random processes (not to compare two batches of independent samples drawn from two distributions). Unless you make specific assumptions about the processes, you can't rigorously justify any particular method. However, your boss's use of the t-test looks reasonable under certain assumptions. Assume the random process has one parameter that governs the "decay" of the mean value of the voltage, and that this parameter varies slowly over time. Assume the jumpy-ness of the voltage around the mean value can be modeled by independent draws from some other random variable with mean zero and a variance that changes slowly with time. Then if you pick a small time interval, the sample of voltages from it is roughly a sample of independent draws from the variable that controls the jumpy-ness. Thus you are comparing the effect of the parameter that controls the mean value in the time interval.
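Under those assumptions, the windowed comparison is easy to sketch. The fEPSP values below are simulated (stable means, independent Gaussian noise inside a hypothetical 3-3.5 hr window), and for simplicity the sketch uses a Welch-type statistic with a normal approximation rather than an exact t distribution, which is a reasonable shortcut at these sample sizes:

```python
import random
from statistics import mean, variance, NormalDist

def window_compare(a, b):
    """Two-sample comparison of measurements inside one time window:
    Welch-type z statistic with a normal approximation to the null."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return z, p

# Hypothetical fEPSP values in the 3-3.5 hr window: 30 sweeps per group,
# means 1.50 vs 1.35, common noise SD 0.10 (all numbers made up).
random.seed(1)
control = [1.50 + random.gauss(0, 0.10) for _ in range(30)]
knockout = [1.35 + random.gauss(0, 0.10) for _ in range(30)]

z, p = window_compare(control, knockout)
print(f"z = {z:.2f}, p = {p:.4g}")
```

Note that this only tests "the two groups differ in this window"; it says nothing about which underlying process parameter caused the difference, which is the caveat discussed next.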

What phenomena might make the boss's method misleading? Suppose the random process has other parameters. For example, suppose it has a parameter that controls the "delay time" from when the initial stimulus (or voltage or whatever) is applied to the subject to the onset of the decay. If you are trying to compare the parameters that control the average values of the processes, and the "delay times" are different, then the processes are, in a manner of speaking, out of sync. So the "out of sync" effect combines with the "different average value at time t after onset" effect. If you don't care which parameters make the processes different, the boss's method is OK. If you are trying to justify a claim only about the "different average value at time t after onset", there is a problem.
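The "out of sync" confound can be shown with a toy calculation. The sketch below averages two idealized decay curves with the same decay rate but different onset delays over a fixed window (all parameter values hypothetical); the window means differ even though the decay parameter is identical:

```python
import math

def mean_in_window(delay, rate, t0, t1, step=0.01):
    """Crude numeric average of the shifted decay exp(-rate*(t - delay))
    over the window [t0, t1]; flat before onset. Purely illustrative."""
    n = max(1, round((t1 - t0) / step))
    ts = [t0 + i * step for i in range(n)]
    return sum(math.exp(-rate * max(t - delay, 0.0)) for t in ts) / n

# Same decay rate (1.0), different onset delays, window 3.0-3.5 hr.
m_sync = mean_in_window(delay=0.0, rate=1.0, t0=3.0, t1=3.5)
m_late = mean_in_window(delay=0.5, rate=1.0, t0=3.0, t1=3.5)
print(m_sync, m_late)   # the delayed process looks "stronger" in the window
```

So a windowed test that rejects here would be detecting the delay difference, not a difference in the decay parameter.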

What statistics to use always depends on what you are trying to accomplish. You have to have a precise definition of your goal in order to mathematically select and justify a particular method.
 