Hi, suppose that you have some function F(x;a), where x is the variable against which you plot the function and a is a parameter entering it.
If I want to find the error coming from some uncertainty in a, computationally I would plot the function for two different values of a, say a and 2a. That means plotting the two functions below:
F(x;a)
F(x;2a)
Then I believe the error can be computed as their difference:
F(x;2a)-F(x;a)
as well as their relative change (fluctuation):
\frac{F(x;2a)-F(x;a)}{F(x;a)}
Which of these two is better for plotting? Is there some physical meaning behind either of them, i.e. do they show the reader something different?
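The two quantities in the question can be computed side by side for a concrete (hypothetical) choice of F; here F(x;a) = exp(-a x) is just an illustrative example, not part of the question. The sketch below shows that the absolute difference F(x;2a) - F(x;a) and the relative one can behave very differently over the same x range:

```python
import numpy as np

# Hypothetical example function F(x; a) = exp(-a*x); any F(x; a) would do.
def F(x, a):
    return np.exp(-a * x)

x = np.linspace(0.0, 5.0, 101)
a = 1.0

abs_diff = F(x, 2 * a) - F(x, a)   # the difference F(x;2a) - F(x;a)
rel_diff = abs_diff / F(x, a)      # the fluctuation (F(x;2a) - F(x;a)) / F(x;a)

# For this F the relative difference equals exp(-a*x) - 1: it keeps growing
# in magnitude at large x even though the absolute difference shrinks there,
# because dividing by F(x;a) removes the overall scale of the function.
```

Plotting `abs_diff` answers "by how much does F shift in the units of F itself", while `rel_diff` answers "by what fraction does F change", which is often the more meaningful quantity when F spans several orders of magnitude.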