Error on biological assay results

In summary, this conversation concerns the error on certain biological assay results, specifically the calculation of the standard error on the parameters of a non-linear fit of sigmoidal dose-response data. The poster also discusses the problem of results being reported without any indication of standard error, which can lead to incorrect interpretations and decisions. Two questions are posed: one about calculating the standard error on the fitted parameters for each tested molecule, and the other about calculating the standard deviation of the assay itself from repeated measurements. The poster apologizes for the length of the question, and a reply mentions a laboratory experiment that produced similar sigmoidal plots.
  • #1
lavoisier
Hi everyone, I have a question about the error on certain biological results.
Apologies for the long explanation, but it's not a trivial thing (at least, it isn't for me).

This is what happens:
1. biologists take N solutions of a molecule at different concentrations (say from 10^-10 to 10^-5 molar) and add them to 2N equal 'wells' (there is a duplicate for each concentration)
2. the biological 'response' R is measured in each well, normalised (e.g. %R = (R-Rmin)/(Rmax-Rmin) ) and %R is plotted versus the log10(concentration).
3. the data are fitted non-linearly using a variation of the Hill equation ( http://en.wikipedia.org/wiki/Hill_equation_(biochemistry) ); it's a 'sigmoidal' curve that seems quite closely related to the logistic curve. I think the least-squares fit is used but I'm not 100% sure.
4. the parameters that are returned are the so-called 'potency', which mathematically corresponds to the abscissa of the inflection point (K_A in the Wiki article), the Hill slope (n) and the maximal %R (because Rmax is often defined based on a standard, independently from the individual molecule tested, so the asymptotic %R at the right plateau of the curve can often be different from 100%, greater or smaller).

Trouble is, these data are very often given without any indication of the standard error on them.
This may sound like a minor issue, but from my point of view (the chemist who made the molecules and wants to know how good they are) it's not.
For instance, one day I can get a very good potency for a certain molecule and make decisions on what to do next; then, when the molecule is retested for confirmation after a few weeks, I find that the potency has become much worse, and my plans were all wrong.
If I had been given an idea of the error, I could have decided what confidence to put on each result, and crucially, how 'different' two potencies actually were.

So, suppose you have the data.
Let x_i = log10(conc_i): you have 2N pairs [x1, %R1_1], [x1, %R1_2], [x2, %R2_1], etc.
You fit them non-linearly to an equation like: %R = %Rmax / (1 + 10^(K-x)).
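
For concreteness, a minimal sketch of this kind of fit and of the standard errors I'm after (e.g. in Python with NumPy/SciPy; the numbers are invented, and the real software may do something different):

[code]
import numpy as np
from scipy.optimize import curve_fit

# duplicate %R readings at each log10(concentration); invented numbers
x = np.repeat(np.arange(-10.0, -4.0), 2)   # log10(conc): -10, -10, -9, -9, ..., -5, -5
y = np.array([2, 5, 8, 12, 30, 25, 55, 60, 80, 78, 92, 95], dtype=float)

def model(x, K, Rmax):
    """Two-parameter curve: %R = Rmax / (1 + 10**(K - x))."""
    return Rmax / (1.0 + 10.0**(K - x))

popt, pcov = curve_fit(model, x, y, p0=[-7.0, 100.0])
K_fit, Rmax_fit = popt
se_K, se_Rmax = np.sqrt(np.diag(pcov))     # standard errors from the parameter covariance
print(f"K = {K_fit:.2f} +/- {se_K:.2f}, Rmax = {Rmax_fit:.1f} +/- {se_Rmax:.1f}")
[/code]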

First question. In one run of the assay M molecules are tested, generating 2MN data points. Is it possible to calculate the standard error on K and %Rmax for each tested molecule? And if so, should this be done based on the data for each individual molecule, or after pooling together the duplicate variability of all M molecules?
Second (and last) question. The assay is often repeated multiple times on certain molecules. So in the end there will be many sets of [K, %Rmax] data for each molecule. Can the 'standard deviation' of the assay itself be calculated from these data, as if they were direct, repeated measurements?

Again, sorry for the long post. Not a question that's easy to ask briefly.

Thank you!
L
 
  • #2
lavoisier said:
Hi everyone, I have a question about the error on certain biological results. [...]

When I taught we used to do a laboratory experiment looking at the helix-coil transition of a synthetic peptide. It gave the same kind of sigmoidal plot. To do this justice, you should be doing some kind of realistic fitting. You will have uncertainties on Rmin, Rmax as well as the other fitting parameters. Look in Press, Teukolsky, et al. "Numerical Recipes in *" for examples of ways of doing non-linear least squares.

* There are many versions of the book, depending upon the computer language that you would like: FORTRAN, Pascal, C, ...
 
  • #3
The detailed description is helpful. Let's first try to fit it to a general mathematical description of the problem.

There are measurements of concentrations. I'll imagine the solutions are in different bottles. Is the biological response of samples from bottle S the sole quantity used to estimate the concentration of the solution in bottle S ? Or is there some independent measurement of the concentration of solution in bottle S?

The most reliable procedure for determining the distribution of various errors from a non-linear curve fitting technique is to use simulation. If you know computer programming, a Monte-Carlo simulation is a good idea.
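
For instance, a bare-bones parametric Monte-Carlo in Python might look like this (with assumed "true" parameters and an assumed noise level on the responses; purely illustrative):

[code]
import numpy as np
from scipy.optimize import curve_fit

def model(x, K, Rmax):
    return Rmax / (1.0 + 10.0**(K - x))

rng = np.random.default_rng(0)
x = np.repeat(np.arange(-10.0, -4.0), 2)     # duplicate wells at each log10(conc)
K_true, Rmax_true, sigma = -7.0, 100.0, 8.0  # assumed "true" parameters and response noise

K_fits = []
for _ in range(1000):                        # simulate many repeats of the whole assay
    y = model(x, K_true, Rmax_true) + rng.normal(0.0, sigma, size=x.size)
    popt, _ = curve_fit(model, x, y, p0=[K_true, Rmax_true])
    K_fits.append(popt[0])

print("Monte-Carlo standard deviation of the fitted K:", np.std(K_fits))
[/code]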

As far as I can determine, software packages that use deterministic methods use linear approximations. For example, let the fitted curve be B(t) = f(t, P(X), Q(X)), where B(t) is the curve that is fitted to the data. P and Q are parameters that determine which one of a family of curves f(t,P,Q) we use for the fit (like the P and Q in B(t) = Pt + Q for fitting a linear function). P and Q are variables in the sense that they are determined from the sample data X = {(x1,y1),(x2,y2),...}. A simple curve fitting technique has a way of expressing P and Q as functions of the data, so I denote the "parameters" P and Q as functions P(X), Q(X). (For example, when doing a least squares linear fit to data, there are functions that "solve" for P and Q as a function of the data values.)

For example, to estimate the variance of Q, we use a linear approximation to Q(X). If Q(X) is a reasonably simple formula, we do this by taking the differential of Q(X) with respect to all the variables (xi,yi). We regard Q and the (xi,yi) as random variables. If we assume the (xi,yi) are independent random variables with known or estimated variances, then the variance of Q is calculated as the sum of the variances of the terms in the linear approximation.
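
In symbols, with independent data points and the first-order (linear) approximation, this is the usual error-propagation formula
[itex]\operatorname{Var}(Q) \approx \sum_i \left(\frac{\partial Q}{\partial x_i}\right)^2 \operatorname{Var}(x_i) + \sum_i \left(\frac{\partial Q}{\partial y_i}\right)^2 \operatorname{Var}(y_i)[/itex],
with the partial derivatives evaluated at the observed data.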

We'll have to think about how complicated fitting the Hill equation is. If it's very hard to express the parameters as functions of the data then I'd say we need a computer simulation or a software package whose outputs we trust.
 
  • #4
@Quantum Defect
Thank you for your suggestion. Unfortunately my programming experience is limited to Maxima; I don't know the languages you mentioned. I was advised to try R, but I haven't had the time so far. Maxima has a non-linear fitting routine; I will see if it provides a way to calculate the error on the fitting parameters. However, mine was more a methodological doubt, i.e. whether the data from multiple molecules, rather than from only one at a time, should be used to determine the error on the parameters, and whether the error should be found from each single run of the assay or from the repeats of the assay (or both?).
@Stephen Tashi
You're right, there is uncertainty on the concentration too, but we don't measure it. The various solutions are prepared by sequential dilution of a 'master' solution, and the dilution is done with very high precision, so the only error on the concentration of the N solutions used in the assay will depend on the error on the concentration of the master solution. The latter is usually supposed to be 10^-2 molar. No routine measurement is done to determine what the actual concentration is, so basically we have to 'live with it' and assume that it is 10^-2 molar. The curves will be shifted to the left or right (and consequently K will be lower or higher than the 'real' value) when the concentration is higher or lower than the assumed value.
In general, though, the error on the concentration is found to be much lower than the one from the biological response. In fact some time ago a large number of random samples of master solutions (of different molecules) were collected and the concentrations were measured by NMR. A ~gaussian distribution of concentrations was found, I can't remember the mean and SD, but basically most concentrations were within a very narrow range around the desired one.
So by repeating the assay with different lots of master solution, one is supposed to approach the real K.
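(Quantitatively: since x_i = log10(conc_i) and every assay concentration is a fixed dilution of the master solution, an error factor f in the master concentration shifts every x_i by log10(f), and therefore shifts the fitted K by exactly log10(f), leaving the shape of the curve unchanged.)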
 
  • #5
The fitted equation predicts the "biological response" R. Your concern is the "potency of a molecule". Are these quantities related in some definite way? (Are they the same thing?) Is it a specific mathematical relation or just a qualitative trend?
 
  • #6
Sorry, I now realize I didn't explain this very well, despite my long post.

Indeed, by fitting [x=log10(concentration), %R] data (for a single molecule) to an equation like %R = %Rmax / (1 + 10^(n(K-x))), we can predict what biological response %R a molecule will give, depending on its concentration in the assay well.
The equation is related to the theory of reversible, non-covalent binding of small molecules to large targets, mostly proteins, and is supposed to apply to all molecules. What distinguishes one molecule from another are the parameters n, %Rmax and K, which I guess are a bit like P and Q in your post.

K is related to what we call 'potency'. The smaller K, the lower the concentration that we need to reach to have 50% of the maximal biological effect. As it's not easy to reach high concentrations of a molecule in the body of an animal, and also in that case toxicity is more likely to occur, we want a small K. A molecule with smaller K than another is said to be more potent. It depends on the target, but 'good' values of K are usually between -10 and -8, typically a bit higher for cellular assays.
In fact I used K for simplicity of notation, but in medicinal chemistry it's more common to talk about pEC50 (=-K), i.e. -log10 of the concentration for which the observed effect is half of its maximal value %Rmax. You can see that indeed when K=x, %R=%Rmax/2.
%Rmax is also important, because it's usually better to have a large maximal value of biological response.
I'm not sure about n, I know biologists don't like either too small n's or too large ones; it seems that something around 1 is fine.

So, after we test M molecules, we will usually prefer the ones with low K (i.e. high pEC50) and high %Rmax.
We will look at the structures of the molecules that have the best combination of potency and max response, and make decisions on what molecules to make next.

And here's the reason for my question.
Suppose I have two molecules X and Y, the first with pEC50=5.5 and the second with pEC50=5, say with the same n and %Rmax.
How do I know whether these two molecules *really* have a different potency, or if 5 and 5.5 are in fact not significantly different?
From the little statistics I know, I would normally ask: what is the standard error on these two pEC50's? And then I would apply a z test (or something to that effect) to measure the significance of the difference.
But as I said, our pEC50 data are delivered to us by the biologists without any error: we are only given 5 and 5.5 (and we can access the original data points); so I was wondering if and how I could calculate the standard error myself and use it to decide whether to trust a given potency difference or not.
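
For example (with made-up standard errors, since we aren't given any), the kind of check I have in mind would be something like:

[code]
import math

# hypothetical pEC50 estimates and standard errors for molecules X and Y
pEC50_X, se_X = 5.5, 0.15
pEC50_Y, se_Y = 5.0, 0.20

z = (pEC50_X - pEC50_Y) / math.sqrt(se_X**2 + se_Y**2)  # approximate z statistic
p = math.erfc(abs(z) / math.sqrt(2.0))                  # two-sided p-value under normality
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
[/code]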
 
  • #7
Often when consulting about mathematics, the "client" has data and is focused on what the data says about predicting the quantity A and the errors in such predictions. He is concerned about quantity A because it affects his work with a quantity E or a decision E.

Since E is the "bottom line", it's wise to consider if the data can be used to quantify the errors in quantity E. Sometimes this is not possible because there is no mathematical model for how A affects E and developing one would take more effort than the project justifies.

Let's see if I understand your scenario. Solutions are tested (not by your own lab) producing data. The data are used to fit an equation that predicts a "biological response" R_A of some particular molecule "A" as a function R_A(c) of concentration c (from a particular batch or type of solution?). The fitted curve can be summarized by a single parameter K_A. You want to know about the distribution of errors in the curve R_A(c). Do you care about the error in R_A(c) at each possible value of c? Or is one particular range of values of c the most important?

Is knowing about the errors in R_A(c) the "bottom line" of the problem? Or is the bottom line to make a decision E about which of several different molecules should be further investigated? If we have curves for two different molecules R_A(c), R_B(c), is there a mathematical model for how you would decide which molecule was best to investigate, assuming R_A(c) and R_B(c) were precise?

The errors in R_A(c) can be investigated in various ways depending on what data exists. Of course the ideal situation is to have "true" values of R_A(c) vs predicted values. We could also investigate the errors in R_A(c) and K_A as a function of an assumed distribution of errors in the concentration measurement. However, you indicate the errors in the concentration measurement are small. So unless the value of K_A is very sensitive to the concentration measurements, error in those measurements alone won't adequately explain errors in K_A.

If you only have data on your lab's measurements of various molecules X, Y, Z, ... at particular concentrations c_x, c_y, c_z, ... versus the predictions from R_X(c_x), R_Y(c_y), R_Z(c_z) using the reported K_X, K_Y, K_Z, ... values, then we can work with that to characterize the error in prediction for a "typical" molecule. Even without characterizing the errors between the "true value of K" and "the value of K as reported by others", you can still investigate the differences between "the value of K as reported by others" and "the value of K consistent with your lab's measurements".
 
  • #8
Apologies for disappearing. Too busy with other stuff.
I embedded my answers in the quote below.
Thanks
L

Stephen Tashi said:
Often when consulting about mathematics, the "client" has data and is focused on what the data says about predicting the quantity A and the errors in such predictions. He is concerned about quantity A because it affects his work with a quantity E or a decision E.

Since E is the "bottom line", it's wise to consider if the data can be used to quantify the errors in quantity E. Sometimes this is not possible because there is no mathematical model for how A affects E and developing one would take more effort than the project justifies.
This is a very valid point. Indeed, it would be ideal but probably too difficult to develop a theory on how the error in the determination of potencies affects our decisions. But maybe it's simpler than that for our practical purposes. See below.

Let's see if I understand your scenario. Solutions are tested (not by your own lab) producing data. The data are used to fit an equation that predicts a "biological response" R_A of some particular molecule "A" as a function R_A(c) of concentration c (from a particular batch or type of solution?). The fitted curve can be summarized by a single parameter K_A. You want to know about the distribution of errors in the curve R_A(c). Do you care about the error in R_A(c) at each possible value of c? Or is one particular range of values of c the most important?

Is knowing about the errors in R_A(c) the "bottom line" of the problem? Or is the bottom line to make a decision E about which of several different molecules should be further investigated? If we have curves for two different molecules R_A(c), R_B(c), is there a mathematical model for how you would decide which molecule was best to investigate, assuming R_A(c) and R_B(c) were precise?
The bottom line is E, but this is closely related to finding the error on the parameters of R_A(c).
The fitted curve R_A(c) is completely defined by 4 parameters: K_A (related to the 'potency' of molecule A, as explained above), n_A (the Hill slope), Rmax_A (the maximal response that can be obtained by molecule A) and Rmin_A (the minimal response, usually not very important). These 4 parameters can be reduced to 3 by calculating a % response, which eliminates Rmin_A.

Chemists are normally only concerned with 1 or 2 of these parameters: certainly K_A, and sometimes Rmax_A (it depends on the type of assay), and not with the errors on the response itself, because those are believed to be a natural consequence of the assay setup. When translating the in vitro results from these assays to an actual living being to whom the drug must be administered, what counts is indeed the potency and max response.
The mathematical model used to go from the knowledge of these parameters to the decision E is just a simple ranking - remember it's chemists who do this :O).
If K_A < K_B (and in some cases we also look if Rmax_A > Rmax_B), we 'like' molecule A more than B, and will probably decide to make other molecules C, D etc that 'resemble' A rather than B. It's a simplification to some extent, but basically that's what we do.
And here's the problem that started all this: the variability in the biological response R often strongly affects the determination of K, to the point that A and B can swap places in the ranking from one run of the assay to the next!
So if one knew the error on K_A and K_B, one could decide whether the potencies of A and B are to be considered statistically different or not.
If they are, then the probability that at the next run of the assay the ranking of A and B will change is low, and one can confidently make synthetic plans based on A being on top. If they aren't, then perhaps one will be more careful in preferring A over B, retest both molecules and/or make analogues of both.
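
If repeat runs are available, one crude way to get such an error (this is really my second question from the first post) would be to pool the repeat pEC50 determinations across molecules, e.g. (invented numbers):

[code]
import numpy as np

# invented repeat pEC50 determinations for a few molecules
repeats = {
    "A": [5.5, 5.2, 5.6],
    "B": [5.0, 5.3],
    "C": [6.1, 6.0, 6.3, 6.2],
}

# pooled within-molecule variance: sum_i (n_i - 1) * s_i^2 / sum_i (n_i - 1)
num = sum((len(v) - 1) * np.var(v, ddof=1) for v in repeats.values())
den = sum(len(v) - 1 for v in repeats.values())
assay_sd = np.sqrt(num / den)

print(f"pooled assay SD of a single pEC50 determination: {assay_sd:.2f}")
print(f"standard error of the mean of 2 repeats: {assay_sd / np.sqrt(2):.2f}")
[/code]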

The errors in R_A(c) can be investigated in various ways depending on what data exists. Of course the ideal situation is to have "true" values of R_A(c) vs predicted values. We could also investigate the errors in R_A(c) and K_A as a function of an assumed distribution of errors in the concentration measurement. However, you indicate the errors in the concentration measurement are small. So unless the value of K_A is very sensitive to the concentration measurements, error in those measurements alone won't adequately explain errors in K_A.
In fact you're right, errors on the concentration do affect K_A very strongly, mostly because K_A is indeed a point on the x axis, which is the axis of concentrations. As I explained above, though, the concentrations of the so-called 'screening solutions' aren't so much off the mark, they only vary minimally around the desired value, and surely the dilution isn't the problem. Besides, on retest it's most often the same solution that is used, so if a completely different result is obtained, chances are it's the biological response that has changed, not the chemical itself.
To prove what I'm saying, I pasted below two examples of dose-response curves.
The X axis is the log10 of the molar concentration (and that's usually the range that is used). The Y axis is the biological response, normalised using some standards.
You can see that for each concentration, the *same diluted solution* is added in duplicate, to two separate 'wells', that's why for each X there are two Y dots.
Do you see how far the two points in each duplicate can be? And this is even a nice case, we often see much worse!
The pink dots are those that make so little sense that the software discards them (fair enough, they are blatant outliers, probably due to precipitation or some other non-random effect). But still, the black dots can go up and down a lot.
So the software fits the Hill equation to these points, and gives to the biologist a choice (the buttons on the left) for the EC50 to assign to the molecule (EC50 is a function of K, as I explained above). The biologist tries each possibility, sees the corresponding curves (in red), and based on experience decides which curve best fits the dots.

Now, as you can see the software looks sophisticated enough, it even gives R squared, a standard error, etc.
For some reason this vital statistical information is not passed on to the chemist: from the curves below we would only get EC50=1157 for the top molecule and EC50=545 for the bottom one.
We don't have the time to go and reanalyse each curve ourselves, and besides we don't have access to the numerical data (the ones below are pictures even in the file that I can access at work).

How could we use the information from these fitting exercises to make more sense out of all this?

[Attached images: Dose_response_1.PNG and Dose_response_2.PNG]
 
  • #9
lavoisier said:
How could we use the information from these fitting exercises to make more sense out of all this?

To tackle that - first, I need to confirm the information that is revealed to you. Do you get anything besides the EC50 and Rmax?

You said the graphs are exercises, but are they from substances that are in some sense representative of substances you deal with?

I have some questions about ligand binding.

The Wikipedia article on the Hill equation says:

In biochemistry, the binding of a ligand to a macromolecule is often enhanced if there are already other ligands present on the same macromolecule (this is known as cooperative binding). The Hill coefficient provides a way to quantify this effect.

The equation does not explicitly contain the variable "time". Time is implicit in the variables [itex] \theta [/itex] and [itex] [L] [/itex]. As I visualize the process taking place in time, I put macromolecules and ligands into a solution. The binding starts. As it progresses the rate of binding due to "already ligands present on the same macromolecule" increases. (That's how I interpret "is often enhanced".) The concentration of unbound ligands in the solution would decrease.

If I understand this correctly then how does the lab associate a single value of [itex] \theta [/itex] for a single value of "concentration"? In the well with the solution, both values would be changing in time - correct?

It describes the fraction of the macromolecule saturated by ligand as a function of the ligand concentration; it is used in determining the degree of cooperativeness of the ligand binding to the enzyme or receptor.

Should I take "the macromolecule" to be an individual molecule? Or does "the fraction" refer to number of bound ligands divided by number of molecules - i.e. does Hill's equation describe a statistical property of a population of molecules where some members of the population might get "completely saturated" while others have varying amounts of saturation?

The Wikipedia article on Hill's equation says it predicts:

[itex] \theta [/itex]: fraction of the ligand-binding sites on the receptor protein which are occupied by the ligand

To create an effective medicine, what is the goal as far as binding goes?

For example, the following are related but different goals

1) Have as many macromolecules as possible with at least N ligands bound to them

2) Have as many ligands as possible bound to some macromolecule (not caring how many are bound to each molecule).

Some questions related to "the bottom line" of the problem.

In one post, you said:
As it's not easy to reach high concentrations of a molecule in the body of an animal, and also in that case toxicity is more likely to occur, we want a small K.

Suppose I think in terms of two ingredients, ligands and macromolecules, each in some "neutral" solvent. Which of these is in the body of the animal naturally and which is in the medicine? - or is that too oversimplified? What I want to get at is whether a certain range of concentrations on those curves is the one relevant to your work.

Typical curve fitting methods are somewhat democratic. They try equally hard to make the curve fit at all values of the independent variable. Of course if the fit is done democratically to the logarithm of x, that doesn't treat the underlying values of x democratically.

For example, suppose your problems involve only the binding that occurs at low concentrations. And suppose you only need a "fraction of bound ligand sites" of 0.25 to have an effective drug. Then the bottom line questions don't involve the entire curve of Hill's equation. The bottom line for uncertainty isn't the uncertainty in the reported parameters, it is the uncertainty in certain areas of the curve that is caused by the uncertainty in the parameters.
 
  • #10
Hi Stephen, you ask difficult questions :O).
I'll start with the easy ones.
The graphs above are actual molecules that I'm working with, I used the word 'exercises' but I meant experiments, sorry for the confusion.
What I get in the form of numbers that I can export is: EC50, Hill slope, bottom, top (which in the examples is fixed, but in other cases can vary). We could say Rmax is top - bottom. The rest (e.g. the R squared, Std. error on EC50,..) is only shown to me as pictures, in the curves you see above. If you think any of these additional parameters can be useful to solve the problem we're talking about, I can ask IT to get those data too into the system.

Now for the difficult stuff.
Let's please leave out cooperative binding, it doesn't really apply to the type of targets we work with. I know the Hill slope has to do with it, but chances are in this case it's due to other effects that are too difficult to model.
Most of the time our target is one macromolecule (very often a protein), which has one binding pocket where one ligand molecule can bind. So there is only one possible target-ligand complex, you don't have more than one ligand molecule bound to each molecule of target.
Of course we have many units of target and many units of ligand in the system, we can't handle individual molecules. So yes, I suppose you can say it's a statistical property of a system of many units that we measure.
The way chemists express how many molecules there are is of course concentration. If I say that a solution of ligand is 1 molar, it means that in 1 litre of solution there is 1 mole (i.e. an Avogadro number of molecules) of ligand. The Avogadro number is something in the region of 10^23-10^24 if I remember correctly.
The way chemists describe the binding behaviour of a ligand-target system is a reversible reaction of the type:
T + L → TL
TL → T+L
where T is the target and L is the ligand (my molecule). TL is the complex between the two. [TL must be able to dissociate back to T and L, otherwise you have covalent binding and the top reaction is irreversible. That's another story].

Indeed, as you point out there is an element of time that is not explicitly expressed here. I'll explain.
These two reactions occur at the same time but at different rates.
Say that the top one (association) occurs with a rate constant k1 and the bottom one (dissociation) occurs with a rate constant k2.
Then (in the simplest case) the system is described by a differential equation ([A] means concentration of A):
[itex]\frac{d[L]}{dt}=-k_1 [T][L]+k_2 [TL][/itex]
plus two mass balances:
[itex][L]_{tot}=[L]+[TL][/itex]
[itex][T]_{tot}=[T]+[TL][/itex]
by which you can eliminate [T] and [TL] from the equation.
The initial conditions at time t=0 (when the biologist mixes the solution of L into the system that contains T) are:
[itex][L]=[L]_{tot}, [T]=[T]_{tot}, [TL]=0[/itex]

You are surely better than me at understanding what this implies mathematically.
What I know is that at time t=0 these reactions start to happen, and given enough time, an 'equilibrium' is reached, which means that the number of molecules of TL formed per unit time is the same as the number of molecules TL that dissociate to give back T and L.
A stationary state is reached, by which [L] stops changing. We can write:
[itex]\frac{d[L]}{dt}=0[/itex]
[itex]k_1 [T]_{eq}[L]_{eq}=k_2 [TL]_{eq}[/itex]
And by rearranging:
[itex]\frac{k_2}{k_1}=\frac{ [T]_{eq}[L]_{eq} }{[TL]_{eq}}=K_D[/itex]

So at 'equilibrium', the relationship between the concentrations of T, L and TL is determined by an 'equilibrium constant' K_D, which is essentially another way of expressing EC50 (although depending on the assay it may differ from it by a constant).
In the experiments we talked about, we change [L]_tot, whereas [T]_tot is constant, so what changes at equilibrium is [TL]_eq.
The response given by the biological system is proportional to [TL]_eq, and EC50 can be determined by drawing the curves shown above and determining which value of [L]_tot causes half of the maximal possible response.
There has been discussion in the past whether we leave enough time for binding to reach equilibrium, and indeed in some cases slow association or dissociation play important roles. There's an important branch of medicinal chemistry that studies this (binding kinetics etc).
In practice, we usually leave such a long time that it's unlikely we are very far from equilibrium.
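
Just as a numerical sanity check on that algebra: integrating the rate equation with made-up rate constants (Python/SciPy) does settle to concentrations satisfying [T]_eq [L]_eq / [TL]_eq = k2/k1:

[code]
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0e6, 1.0e-2          # made-up association / dissociation rate constants
L_tot, T_tot = 1.0e-7, 1.0e-9   # total ligand and target concentrations (molar)

def dLdt(t, y):
    L = y[0]
    TL = L_tot - L              # mass balance for the ligand
    T = T_tot - TL              # mass balance for the target
    return [-k1 * T * L + k2 * TL]

sol = solve_ivp(dLdt, (0.0, 1.0e3), [L_tot], rtol=1e-9, atol=1e-15)
L_eq = sol.y[0, -1]
TL_eq = L_tot - L_eq
T_eq = T_tot - TL_eq

print("K_D = k2/k1:           ", k2 / k1)
print("[T][L]/[TL] at the end:", T_eq * L_eq / TL_eq)
[/code]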

Finally, concerning your questions on targets and efficacy.
Yes, the target is in the animal and the ligand is my molecule, that I hope can be a medicine.
As for the range of concentrations, people usually want to act on the target quite strongly, which means the range of ligand concentrations we want to reach is typically well above EC50, if there are no toxic effects. For instance one can ask to be above EC90 (concentration that gives 90% of the maximal response, it's a multiple of EC50 by a constant that depends on the Hill slope) for the whole duration of the treatment. Of course a molecule with low EC50 will facilitate this task.
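(For reference, the constant comes straight from the Hill equation: the concentration giving a fraction f of the maximal response is [itex]EC_f = EC_{50}\left(\frac{f}{1-f}\right)^{1/n}[/itex], so EC90 = 9^(1/n) × EC50, i.e. 9 × EC50 when the Hill slope is 1.)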
 
  • #11
Answering the question in your original post about how to estimate the standard deviations for Rmax and K requires knowing how the program fits the Hill curve to data. I'd like to make sure that those two parameters are the only ones being used to fit the curve.

I found this free software that gives a "prediction band" (mentioned on page 71):
"Computational tools for fitting the Hill equation to dose-response curves" by Gadagkar and Call in Journal of Pharmacological and Toxicological Methods. http://www.sciencedirect.com/science/article/pii/S1056871914002470

This article speaks of a 4 parameter fit. I'll have to read the fine print to see how to get actual computer codes.
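
I don't know exactly how their tool computes the band, but a rough first-order ("delta method") confidence band can be sketched from the parameter covariance returned by a least-squares fit, e.g. in Python (invented data; four-parameter logistic):

[code]
import numpy as np
from scipy.optimize import curve_fit

def hill4(x, bottom, top, K, n):
    """Four-parameter logistic: bottom + (top - bottom) / (1 + 10**(n*(K - x)))."""
    return bottom + (top - bottom) / (1.0 + 10.0**(n * (K - x)))

# invented duplicate data
x = np.repeat(np.arange(-10.0, -4.0), 2)
y = np.array([3, 1, 6, 9, 22, 28, 48, 55, 79, 75, 93, 96], dtype=float)

popt, pcov = curve_fit(hill4, x, y, p0=[0.0, 100.0, -7.0, 1.0])

# first-order standard error of the fitted curve on a grid of log10(conc) values
grid = np.linspace(-10.0, -5.0, 101)
eps = 1e-6
J = np.empty((grid.size, popt.size))      # numerical Jacobian d(curve)/d(parameter)
for j in range(popt.size):
    dp = np.zeros(popt.size)
    dp[j] = eps
    J[:, j] = (hill4(grid, *(popt + dp)) - hill4(grid, *(popt - dp))) / (2 * eps)

se_curve = np.sqrt(np.einsum('ij,jk,ik->i', J, pcov, J))
band_lo = hill4(grid, *popt) - 1.96 * se_curve
band_hi = hill4(grid, *popt) + 1.96 * se_curve
print("approx. 95% half-width at the curve midpoint:", 1.96 * se_curve[grid.size // 2])
[/code]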
 
  • #12
This is brilliant! Thank you very much Stephen!
And the article is very recent, suggesting there is an ongoing discussion on the subject in pharmacology.
So perhaps it wasn't just me being pedantic about statistics...
[FYI, I think our biologists use Prism].
 

