Optimization problem: Error mitigation while using trigonometry

  • #1
Juanda
TL;DR Summary
Measuring the height using trigonometry is possible and simple. Finding the optimal distance to do it is causing me more trouble.
I like optimization problems a lot because they apply math to making the right decisions. However, I often come up with problems that are too hard for me to solve. Hopefully, this one will be simple but interesting enough for someone here to help me crack it.

Imagine there is a building or similar structure that you want to measure. It's inaccessible, or too tall to measure directly, so we'll use trigonometry: counting the number of steps away from it and measuring the angle with a protractor is simpler.
[Attached figure: sketch of the building of height h, the distance d from it, and the measured angle α]

To find the height it's possible to use the following formula.
$$h=d\tan(\alpha)$$.
However, this is not the ideal world. We want to achieve an accurate measurement and there is some uncertainty attached to all the values we take from the real world.
For the angle, we will assume a constant uncertainty.
$$\alpha = \alpha_{avg} \pm \alpha_{unc}$$ where $$\alpha_{unc}=0.001 \rightarrow \alpha=\alpha_{avg} \pm 0.001$$ (with ##\alpha## in radians).
On the other hand, for the distance, the uncertainty will be a function of the distance itself since measuring longer distances is harder.
$$d=d_{avg} \pm d_{unc}$$ where $$d_{unc}=d_{avg}*0.1 \rightarrow d=d_{avg} \pm d_{avg}*0.1 =d_{avg} (1 \pm 0.1)$$.
In conclusion, the measured height will be
$$h=d_{avg} (1 \pm 0.1)\tan(\alpha_{avg} \pm 0.001)$$.
I'm trying to understand what will be the ideal distance to walk from the building to take the measurements. That is, obtaining the result ##h## with the smallest uncertainty.
I could take a couple of steps and measure a big angle or take thousands of steps and measure a small angle. In an ideal world, both resulting values of ##h## would be exactly the same. Here, where uncertainty is introduced, it's a different story.
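For a concrete feel, here is a minimal brute-force sketch (the 20 m building and the 20 m distance are just assumed example numbers): evaluate ##h=d\tan(\alpha)## at the extreme combinations of both uncertainties and look at the spread.
Brute_force_spread_check:
import numpy as np

# Assumed example: a 20 m building measured from 20 m away
d_avg = 20.0
alpha_avg = np.arctan(20.0 / d_avg)  # the angle an ideal measurement would give

# h = d*tan(alpha) at the four extreme combinations of the two uncertainties
h_values = [d * np.tan(a)
            for d in (d_avg * 0.9, d_avg * 1.1)
            for a in (alpha_avg - 0.001, alpha_avg + 0.001)]

print(f"h could be anywhere from {min(h_values):.2f} m to {max(h_values):.2f} m")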

Observing the behavior of ##\tan(\alpha)##, it seems convenient to walk far away from the building so the measured angle is small. This is relevant because the slope of the function is smaller for small angles, so errors in the angle measurement won't translate into big errors in the result.
[Attached plot: tan(α) on the interval 0 to π/2]

However, walking far away from the building increases the uncertainty in the measurement of ##d##. There must be a sweet spot where the total uncertainty of the resulting ##h## is minimized. I just have not been able to find it yet.

This is as far as I have gotten. Do you know how to continue?

By the way, I'm OK accepting more realistic definitions for ##\alpha_{unc}##. I just thought the problem was complex enough and the assumptions I made felt valid if the extremes are ignored.

Thanks in advance.
 
  • #2
Hi,

Do you know how uncertainties propagate?

##\ ##
 
  • #3
Hello

I didn't. But whenever I come up with one of these problems it keeps bouncing in my head so I spent most of the evening thinking about it. Eventually, I gave DeepSeek a shot and I was pleasantly surprised. It can be a very good professor and code assistant if you have prior knowledge on the matter.

AIs are known to sometimes make up their outputs (hallucinations) so I'll post the details of what I learned here. Maybe you can confirm whether it's correct or wrong.

I'll rename some of the variables to make it clearer. The measured horizontal distance will be $$d \pm \Delta d,$$ where ##d## is the average of the measurements and ##\Delta d## will be the uncertainty. Similarly, the measured angle will be $$\alpha \pm \Delta\alpha.$$ The computed height will be $$h \pm \Delta h.$$

Due to the trigonometric nature of the problem, we know $$h = d\tan\alpha.$$ Now comes the new stuff for me (maybe I studied it but didn't remember it). The uncertainty in the result, ##\Delta h##, comes from the values from which ##h## is computed and their respective uncertainties. To be precise, $$\Delta h = \left| \frac{\partial h}{\partial d} \right| \Delta d + \left| \frac{\partial h}{\partial \alpha} \right| \Delta \alpha.$$ I don't have enough background to confirm the validity of that expression, but it seems right. It implies that, if the function has a steep slope, the resulting uncertainty will be bigger because a small variation in the input values will have a big impact on the output.
Those partial derivatives can be computed because the function ##h## is known.
$$\frac{\partial h}{\partial d}=\tan\alpha.$$
$$\frac{\partial h}{\partial \alpha} = d\sec^2\alpha.$$
The distance ##d## will always be positive and the angle ##\alpha## will be constrained between ##0## and ##\pi/2## so we can ignore the absolute values because the functions are positive in the studied interval.
I want to find which ##d## generates the smallest uncertainty ##\Delta h##, so it's necessary to write it as a function of ##d## by getting rid of ##\alpha## and then differentiate. We know that $$\tan\alpha = \frac{h}{d}$$ and $$\sec^2\alpha = 1+\left(\frac{h}{d}\right)^2.$$ As a result, we get the expression $$\Delta h = \frac{h}{d}\Delta d + d \left(1+\left(\frac{h}{d}\right)^2\right) \Delta \alpha.$$ This, by the way, requires knowing ##h##, which is not possible since that's the very objective of the problem. Remember, we wanted to find which ##d## will generate the most precise calculation of ##h##, or in other words, the ##d## that minimizes ##\Delta h##. However, we can input an estimation of ##h## and continue the calculations from there.
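A quick symbolic check of those partial derivatives (just a sketch; it assumes sympy is available):
Partial_derivative_check:
import sympy as sp

d, alpha = sp.symbols('d alpha', positive=True)
h = d * sp.tan(alpha)

print(sp.diff(h, d))       # tan(alpha)
print(sp.diff(h, alpha))   # d*(tan(alpha)**2 + 1), i.e. d*sec(alpha)**2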
Finally, it's possible to define ##\Delta d## and ##\Delta \alpha## as desired and solve $$\frac{\mathrm{d}(\Delta h)}{\mathrm{d}d}=0$$ to find the minimum of the function ##\Delta h(d)##. I calculated it numerically, though, using code also created with the help of DeepSeek. These are the results for an estimated height ##h=20## for a couple of different cases where the definitions of ##\Delta d## and ##\Delta \alpha## are shown in the legend of the plot.
[Attached plot: Δh as a function of d for the two cases, with the optimal distance and minimum uncertainty marked]


This is the code I used in case someone wants to play around with it.
Uncertainty_propagation_trig_calc:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize_scalar
import inspect

# Fixed height of the building
h = 20  # Example value

# Helper function to create uncertainty functions with dynamic labels
def create_unc_function(func, func_name):
    # Extract function source code
    source = inspect.getsource(func).strip()
    # Extract the right-hand side of 'return'
    expression = source.split("return")[1].strip()
    # Remove any comments from the expression
    expression = expression.split("#")[0].strip()
    # Construct label dynamically
    label = rf'$\Delta d = {expression}$' if 'd_unc' in func_name else rf'$\Delta \alpha = {expression}$'
    return func, label

# Define uncertainty functions with dynamic labels
def d_unc_1():
    def func(d):
        return d * 0.1 + 0.5  # Case 1 d_unc
    return create_unc_function(func, "d_unc_1")

def d_unc_2():
    def func(d):
        return d * 0.1 + 0.5  # Case 2 d_unc
    return create_unc_function(func, "d_unc_2")

def alpha_unc_1():
    def func(d):
        return 0.01  # Case 1 alpha_unc
    return create_unc_function(func, "alpha_unc_1")

def alpha_unc_2():
    def func(d):
        return d * 0.005 + 0.01  # Case 2 alpha_unc
    return create_unc_function(func, "alpha_unc_2")

# List of uncertainty functions and their labels
d_unc_functions = [d_unc_1(), d_unc_2()]  # Each element is a tuple (func, label)
alpha_unc_functions = [alpha_unc_1(), alpha_unc_2()]  # Each element is a tuple (func, label)

# Uncertainty function for h
def delta_h(d, h, d_unc_func, alpha_unc_func):
    # Compute uncertainty in h
    term1 = (h / d) * d_unc_func(d)  # Uncertainty contribution from d
    term2 = d * (1 + (h / d)**2) * alpha_unc_func(d)  # Uncertainty contribution from alpha
    return term1 + term2

# Function to find optimal distance with interval adjustment
def find_optimal_distance(h, d_unc_func, alpha_unc_func, d_min=1, d_max=25, max_expansions=5):
    for _ in range(max_expansions):
        result = minimize_scalar(lambda d: delta_h(d, h, d_unc_func, alpha_unc_func), bounds=(d_min, d_max), method='bounded')
        optimal_d = result.x
        # Check if the solution is at the bounds
        if np.isclose(optimal_d, d_min):
            d_min /= 2  # Expand the lower bound
        elif np.isclose(optimal_d, d_max):
            d_max *= 2  # Expand the upper bound
        else:
            return optimal_d, result.fun, d_min, d_max
    return optimal_d, result.fun, d_min, d_max

# Generate plots for diagonal cases (1-1, 2-2)
def generate_diagonal_plots(h, d_unc_functions, alpha_unc_functions):
    # Individual plots
    for i in range(len(d_unc_functions)):
        d_unc_func, d_unc_label = d_unc_functions[i]  # Unpack function and label
        alpha_unc_func, alpha_unc_label = alpha_unc_functions[i]  # Unpack function and label
        # Find optimal distance
        optimal_d, min_h_unc, d_min, d_max = find_optimal_distance(h, d_unc_func, alpha_unc_func)
        # Generate range of distances
        d_values = np.linspace(d_min, d_max, 1000)
        h_unc_values = [delta_h(d, h, d_unc_func, alpha_unc_func) for d in d_values]
        # Plot
        plt.figure(figsize=(10, 6))
        plt.plot(d_values, h_unc_values, label=r'$\Delta h(d)$')
        # Add vertical line for optimal distance
        plt.axvline(x=optimal_d, color='r', linestyle='--', label=f'Optimal Distance: {optimal_d:.2f} m')
        # Add horizontal line for minimum uncertainty
        plt.axhline(y=min_h_unc, color='g', linestyle='--', label=f'Minimum Uncertainty: {min_h_unc:.2f} m')
        plt.xlabel(r'Distance $d$ (m)')
        plt.ylabel(r'Uncertainty $\Delta h$ (m)')
        plt.title(f'(Case {i+1}-{i+1}): Uncertainty in Height {d_unc_label}, {alpha_unc_label}')
        plt.legend()
        plt.grid(True)
        plt.show()
        print(f"Case {i+1}-{i+1}: Optimal Distance = {optimal_d:.2f} m, Minimum Uncertainty = {min_h_unc:.2f} m")

    # Overlaid plots
    plt.figure(figsize=(10, 6))
    for i in range(len(d_unc_functions)):
        d_unc_func, d_unc_label = d_unc_functions[i]  # Unpack function and label
        alpha_unc_func, alpha_unc_label = alpha_unc_functions[i]  # Unpack function and label
        # Find optimal distance
        optimal_d, min_h_unc, d_min, d_max = find_optimal_distance(h, d_unc_func, alpha_unc_func)
        # Generate range of distances
        d_values = np.linspace(d_min, d_max, 1000)
        h_unc_values = [delta_h(d, h, d_unc_func, alpha_unc_func) for d in d_values]
        # Plot
        plt.plot(d_values, h_unc_values, label=f'(Case {i+1}-{i+1}): {d_unc_label}, {alpha_unc_label}')
        # Add vertical line for optimal distance
        plt.axvline(x=optimal_d, color='r', linestyle='--', label=f'Optimal Distance (Case {i+1}-{i+1}): {optimal_d:.2f} m')
        # Add horizontal line for minimum uncertainty
        plt.axhline(y=min_h_unc, color='g', linestyle='--', label=f'Minimum Uncertainty (Case {i+1}-{i+1}): {min_h_unc:.2f} m')
    plt.xlabel(r'Distance $d$ (m)')
    plt.ylabel(r'Uncertainty $\Delta h$ (m)')
    plt.title('Uncertainty in Height for Diagonal Cases')
    plt.legend()
    plt.grid(True)
    plt.show()

# Run the analysis for diagonal cases
generate_diagonal_plots(h, d_unc_functions, alpha_unc_functions)

I have just realized it may be interesting to check if this can be transformed into an iterative process. I mean, at some point I had to assume the height of the building ##h## to find the best distance ##d## from where to measure it. Then, go and measure the height from that point and reuse it to find the best ##d## to repeat the process. I don't know how to implement it now though and I'm kind of exhausted. I think what I learned for now is enough.
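A rough sketch of how that iteration could look (everything here is assumed: the "true" height of 20 m exists only to simulate noisy measurements, the initial guess and noise scales are arbitrary, and the uncertainty functions are the Case 1 ones from the code above):
Iterative_distance_refinement_sketch:
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
h_true = 20.0    # unknown in reality; only used to fake the measurements
h_guess = 10.0   # deliberately bad initial guess

d_unc = lambda d: d * 0.1 + 0.5   # same Case 1 uncertainty as above
alpha_unc = lambda d: 0.01

def delta_h(d, h):
    # uncertainty in h with tan(alpha) = h/d substituted, as in the main code
    return (h / d) * d_unc(d) + d * (1 + (h / d) ** 2) * alpha_unc(d)

for step in range(5):
    # best distance for the current height estimate
    d_opt = minimize_scalar(lambda d: delta_h(d, h_guess),
                            bounds=(0.5, 200), method='bounded').x
    # simulated noisy measurement taken from that distance
    d_meas = d_opt + rng.normal(0, d_unc(d_opt) / 3)
    a_meas = np.arctan(h_true / d_opt) + rng.normal(0, alpha_unc(d_opt) / 3)
    h_guess = d_meas * np.tan(a_meas)
    print(f"step {step}: d_opt = {d_opt:.2f} m, h estimate = {h_guess:.2f} m")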
 
  • #4
Juanda said:
Now comes the new stuff for me (maybe I studied but I didn't remember it). The uncertainty in the result Δh will be a result of the values from which h is computed and their respective uncertainties. To be precise,$$\Delta h = \left| \frac{\partial h}{\partial d} \right| \Delta d + \left| \frac{\partial h}{\partial \alpha} \right| \Delta \alpha.$$ I don't have enough background to confirm the validity of that expression but it seems right.
Not bad for an AI, but slightly pessimistic. If the errors are independent, they add in quadrature (see the case ##f = AB## here; for independent errors the covariance ##\sigma_{AB}## is zero).

It's late, but the expression is something like $$\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2 $$
With some manipulating for the second term we end up with (I think, need to check tomorrow): $$\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac 2 {\sin 2\alpha} \right )^2 \left (\Delta \alpha \right )^2 \tag {1}$$

Your excursion with ##h/d## muddies the water; the measurement of ##\alpha## does not involve ##h## or ##d##.

Because of your choice for ##\Delta d## the first term in ##(1)## is fixed. The second term explodes for ##\alpha \downarrow 0## and for ##\alpha \uparrow \pi/2## as you found, but it has a minimum at ##\alpha = \pi/4##, which I kind of like.
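Plugging the numbers from post #1 into the two terms of ##(1)## shows how lopsided they are (a quick numeric sketch, angles picked arbitrarily):
Term_comparison:
import numpy as np

alphas = [0.1, 0.3, np.pi / 4, 1.2, 1.45]   # sample angles in radians
term1 = 0.1 ** 2                            # (Δd/d)² with Δd = 0.1·d
for a in alphas:
    term2 = (2 / np.sin(2 * a)) ** 2 * 0.001 ** 2   # (2/sin 2α)²(Δα)²
    print(f"alpha = {a:.2f} rad:  term1 = {term1:.4f}, term2 = {term2:.2e}")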

##\ ##
 
  • #5
Thanks for the input. The procedure and result seemed valid at first glance but it's good to compare it with this approach you're showing.

BvU said:
Not bad for an AI, but slightly pessimistic.
I have to admit the expression came out of the AI with an approximation symbol ##\approx## instead of an equal symbol ##=## but it felt strange and I replaced it when writing it here. I guess I shouldn't have done that.
$$\Delta h \approx \left| \frac{\partial h}{\partial d} \right| \Delta d + \left| \frac{\partial h}{\partial \alpha} \right| \Delta \alpha.$$

I'll try to understand what you did.
BvU said:
It's late, but the expression is something like $$\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2 $$
From the link you sent, it seems I'd need to combine these two expressions.
[Screenshots from the Wikipedia article: the first-order propagation formula and the rows of the error-propagation table used below]

Linking it to this case, we have $$f=AB \rightarrow h=d\tan\alpha$$ so ##f=h##, ##A = d##, and ##B = \tan\alpha##.
Then, from the other row in the table we have (I'll use * to indicate these are not the same parameters as above) $$f^* = a^*\tan(b^*A^*)$$ so ##f^*=\tan(\alpha)##, ##a^*=1##, ##b^*=1##, and ##A^*=\alpha##.

Following the previous notation from the thread, I'll use ##\Delta x## to denote the standard deviation of the variable ##x## instead of using ##\sigma_x## as it is done in Wikipedia. Probably not the best choice of symbols but it's to keep it consistent with the thread. I changed the notation already in the third post and doing it again would just add more confusion to it.

In conclusion, $$\Delta h = d\tan\alpha \sqrt{\frac{\Delta d}{d}+\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}+2\frac{\Delta (d\tan\alpha)}{d\tan\alpha}}.$$
(I ignored the absolute values ##||## again because the function is positive in the defined interval.)
(I just noticed the Wiki is also using ##\approx## instead of ##=## but it confuses me. I'll keep using ##=## if it's not too wrong.)


The term ##\Delta (d\tan\alpha) = \sigma_{d\tan\alpha}## is the one I find most confusing. I think it's an indication of how related the variables are and whether they influence each other, but I don't get it yet. Also, are they really independent? I think in most cases it goes to 0 because they are independent, but I'd like to understand it better.

So the expression for ##\Delta h## shown before is the one I'd like to minimize. No need for further algebraic manipulation to simplify it. A computer can take care of it if needed.
As it stands, it has two variables. You mentioned my attempt to get rid of one of them.
BvU said:
Your excursion with ##h/d## muddies the water; the measurement of ##\alpha## does not involve them.
But I don't see why that approach is not correct. In fact, I see it as necessary. I used the relation between the variables to be able to find the ##d## that minimizes ##\Delta h##. It's true I needed a first guess for the height of the building ##h## to be measured but I think that'd be fine and could even be implemented into an iterative process as mentioned at the end of the post #3.
How could I accomplish the minimization of ##\Delta h## without using the substitution of ##\tan\alpha=h/d##?

Lastly, from the example above the table in the same Wiki link, it seems the table pivots on the idea of linearization of the functions.
[Screenshot from the Wikipedia article: the first-order (linearization) derivation above the table]

But when linearizing, we need to evaluate the value of the derivatives and the approximation is only valid in the region near the point chosen for the approximation. I don't see how this is related to the table below because we have not chosen a point for linearization and the example doesn't seem to evaluate the value of the derivatives either.

Thanks in advance.
 
  • #6
Juanda said:
I replaced it when writing it here. I guess I shouldn't have done that.
In the context of working on an error estimate I had no problem with the ##=## sign.

Juanda said:
In conclusion, $$\Delta h = d\tan\alpha \sqrt{\frac{\Delta d}{d}+\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}+2\frac{\Delta (d\tan\alpha)}{d\tan\alpha}}.$$
That is simply wrong: you are missing the squares. The correct expression for the estimated standard deviation in ##h## is $$
\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2 .$$
You misunderstand the meaning of ##\sigma_{ab}=\sigma_a \sigma_b \rho_{ab}\ ##, the covariance between ##a## and ##b##. It is zero if the measurement errors in ##d## and ##\alpha## are uncorrelated, which they are: one is measured with some surveyor's tool, the other with a protractor or a sextant. There is no reason to expect a measurement error in one of the two to have an influence on the error in the other.
I understand this is new for you. Perhaps check here on page 4.

Juanda said:
No need for further algebraic manipulation to simplify it. A computer can take care of it if needed.
Funny statement, almost as if you don't want to understand.

Juanda said:
But I don't see why that approach is not correct.
What can I say without repeating myself? It's wrong. There is no ##d## in the second term of ##(1)##, only ##\alpha##.

Juanda said:
How could I accomplish the minimization of ##Δh## without using the substitution of ##\tan\alpha=h/d##?
You differentiate the first term in ##(1)## wrt ##d## ##-## that gives you zero (the first term being the number 0.01). In words: there is no optimum ##d##. Makes sense: a ten percent error in ##d## determines a lower bound for the error in ##h##.

You differentiate the second term wrt ##\alpha## and set it to zero to find ##\alpha=\frac \pi 4##.
(or do it by inspection: ##\sin(2\alpha)## has a maximum at ##\alpha=\frac \pi 4##.)

And yes, from the optimum value for ##\alpha## you can calculate a best value for ##d##. So set your protractor to 45 degrees and repeat the measurement of ##d## as often as you can afford until you reach the relative error in ##\alpha## ( ##\frac {1}{1000}\sqrt {1/2}## ).
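As a worked check, plugging the numbers from post #1 into ##(1)## at ##\alpha = \pi/4##:
$$\left(\frac{\Delta h}{h}\right)^2 = (0.1)^2 + \left(\frac{2}{\sin(\pi/2)}\right)^2(0.001)^2 = 0.01 + 4\times 10^{-6} \approx 0.01 \;\Rightarrow\; \frac{\Delta h}{h} \approx 10\%,$$
so the angle term barely moves the result.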

----------

Juanda said:
But when linearizing, we need to evaluate the value of the derivatives and the approximation is only valid in the region near the point chosen for the approximation. I don't see how this is related to the table below because we have not chosen a point for linearization and the example doesn't seem to evaluate the value of the derivatives either.
Error handling and data analysis is indeed a first order approximation business. The point for linearization is always the best estimate available from the measurements since the true value is unknown.

The example does evaluate the derivatives ("In the particular case that ##f = ab##"...)

##\ ##
 
  • #7
BvU said:
Is simply wrong. You are missing the squares. The correct expression for the estimated standard deviation in ##h## is $$
\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2 .$$
You're right I forgot the squares. I added them below.
$$\Delta h = d\tan\alpha \sqrt{\left(\frac{\Delta d}{d}\right)^2+\left(\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}\right)^2+2\frac{\Delta (d\tan\alpha)}{d\tan\alpha}}$$
The whole procedure is taken from the Wiki link you shared. I don't see how it could be wrong except for the deadly typo of the missing squares, which you corrected. It should be the result of solving for ##\Delta h##, without yet setting the covariance to zero, which is addressed below.

BvU said:
You misunderstand the meaning of ##\sigma_{ab}=\sigma_a \sigma_b \rho_{ab}\ ##, the covariance between ##a## and ##b##. It is zero if the measurement errors in ##d## and ##\alpha## are uncorrelated, which they are: one is measured with some surveyor's tool, the other with a protractor or a sextant. There is no reason to expect a measurement error in one of the two to have an influence on the error in the other.
Got it. The plots from the Wiki article were especially helpful.
[Scatter plots from the Wikipedia article illustrating correlated and uncorrelated variables]

I think that, even in the case where the uncertainty of ##\alpha## is a function of the distance ##d##, the covariance is still 0. Isn't it? The uncertainty in the angle might grow with the distance but the expected value will still oscillate around the correct angle. It'd be like the second picture but with a bigger or smaller circle with the radius of that circle being a function of the distance.
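A small Monte Carlo sketch of that situation (all the noise levels here are assumed): even when the spread of the angle error grows with the measured distance, the two errors stay uncorrelated.
Covariance_check:
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
d_true, h_true = 50.0, 20.0

d_meas = d_true + rng.normal(0, 0.1 * d_true, n)        # Δd = 0.1·d
# angle error whose standard deviation grows with the measured distance
alpha_scatter = rng.normal(0, 1, n) * (0.0005 + 1e-5 * d_meas)
alpha_meas = np.arctan(h_true / d_true) + alpha_scatter

print(f"correlation(d, alpha) ≈ {np.corrcoef(d_meas, alpha_meas)[0, 1]:.3f}")  # ≈ 0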

BvU said:
I understand this is new for you. Perhaps check here on page 4.
Isn't that what I did to obtain the expression for ##\Delta h## by following the Wiki article? Although I missed the square signs on the first attempt.

BvU said:
Funny statement, almost as if you don't want to understand. (Referencing Juanda's comment: No need for further algebraic manipulation to simplify it. A computer can take care of it if needed.)
On the contrary. I want to understand this topic in the general case and I don't see much added value in the algebraic manipulations to obtain a slightly simpler expression which is not necessary. I'd rather focus the effort on the parts I don't understand. I honestly don't understand what made you think that in the first place but it felt bad.

BvU said:
You differentiate the first term in ##(1)## wrt ##d## ##-## that gives you zero (the first term being the number 0.01). In words: there is no optimum ##d##. Makes sense: a ten percent error in ##d## determines a lower bound for the error in ##h##.

You differentiate the second term wrt ##\alpha## and set it to zero to find ##\alpha=\frac \pi 4##.
(or do it by inspection: ##\sin(2\alpha)## has a maximum at ##\alpha=\frac \pi 4##.)

And yes, from the optimum value for ##\alpha## you can calculate a best value for ##d##. So set your protractor to 45 degrees and repeat the measurement of ##d## as often as you can afford until you reach the relative error in ##\alpha## ( ##\frac {1}{1000}\sqrt {1/2}## ).
I think I'm not following here. I now have an expression for ##\Delta h## which is a function of ##d## and ##\alpha##. Do I start working on finding its minimum?
$$\Delta h = d\tan\alpha \sqrt{\left(\frac{\Delta d}{d}\right)^2+\left(\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}\right)^2+0}$$
Considering ##\Delta (d\tan\alpha)=0## because the variables ##d## and ##\alpha## are independent in terms of expected values, and with ##\Delta d = 0.1d## and ##\Delta \alpha = 0.001##, calculate
$$\frac{\partial \Delta h}{\partial d} = 0$$
and
$$\frac{\partial \Delta h}{\partial \alpha} = 0$$
to find the minimum of the function ##\Delta h(d, \alpha)##.

There is something I'm not getting with this approach: since the two variables are independent, ##\Delta h## is a 3D surface over ##(d, \alpha)##, and its minimum is not related to the distance that would minimize the uncertainty of the calculation.
[Attached plot: 3D surface of Δh as a function of d and α]


Only a certain combination of distances and angles is to be expected from the measurements because they're related through the height of the building. So it'd be more of a curve instead of a surface. But then I go back to using ##\tan\alpha=h/d##, and you said that only muddies the problem.
[Attached plot: the Δh surface with the curve of feasible (d, α) combinations and its minimum marked]
 

  • #8
Juanda said:
I want to understand this topic in the general case and I don't see much added value in the algebraic manipulations to obtain a slightly simpler expression which is not necessary. I'd rather focus the effort on the parts I don't understand. I honestly don't understand what made you think that in the first place but it felt bad.
Point taken, I apologize.

Your expression still misses a square: ##\displaystyle \frac {d\;\tan\alpha}{d\alpha} = \sec^{\color{red} 2}\alpha ##.

Juanda said:
I think I'm not following here. I now have an expression for Δh which is a function of d and α. Do I start working on finding its minimum?
My 'mistake': the minimum absolute error isn't very exciting: it is zero for ##d=0## or ##\alpha=0##.

What I did in the preceding paragraph was to show that the relative error in ##h## is minimized for ##\alpha = \pi/4##. Its value is at least 10%. The second term ##0.001*2/\sin(2\alpha)## virtually never makes a difference.

How you can find a minimum ##\Delta h = 0.01## is a mystery to me. Didn't you say ##\Delta d = 0.1 d## ?
Or is it simply that you started the ##d## axis at ##d=0.1## ?

In the second figure I gather the blue line is where ##d\tan\alpha = 20## ? And your red dot minimum ##\Delta h## is when ##d = 40## m ? And what ##\alpha##? (Where I would expect a minimum ##\Delta h = 2## m for ##d=20## m and ##\alpha = \pi/4##)


And $$\frac {\sec^2\alpha }{\tan\alpha }=\frac{1}{\sin\alpha\cos\alpha}=\frac{2}{\sin(2\alpha)}.$$ So much easier to understand! :wink:

##\ ##
 
  • #9
How about the usual optimization problem?

$$ \epsilon = ( 1+k) x \tan ( \theta + \beta ) - x \tan \theta $$

##\epsilon## would be the error; ##k## and ##\beta## are some constants (positive or negative).

Then simultaneously solve to find a critical point:

$$ \frac{\partial \epsilon}{ \partial x } = 0 $$

$$ \frac{\partial \epsilon}{ \partial \theta } = 0 $$

Second Partials Test

etc... I haven't tried it (probably a disaster), but I thought I'd toss it out there.
 
  • #10
BvU said:
Point taken, I apologize.
No worries. I see the great work you all do in this forum and how frustrating it can be sometimes. We're all humans.
Something I love about these AIs is that I can just keep asking for more details indefinitely until I understand things better using only my time. However, being able to check it with real experts is extremely helpful.

BvU said:
Your expression still misses a square: ##\displaystyle \frac {d\;\tan\alpha}{d\alpha} = \sec^{\color{red} 2}\alpha ##.

(I believe you were referring to the expression ##\Delta h = d\tan\alpha \sqrt{\left(\frac{\Delta d}{d}\right)^2+\left(\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}\right)^2+0}##)
I think there might be a misunderstanding here. In that instance, the ##d## refers to the distance, not the derivative. Once again proving I didn't make the best choice for the symbols.
I'll still redo the process to verify if I get the same result.
We have $$h = d\tan\alpha.$$
The uncertainties of the measurements of the distance ##d## and the angle ##\alpha## are ##\Delta d## and ##\Delta \alpha## respectively and will be considered known functions. For example, ##\Delta d = 0.1d## and ##\Delta \alpha = 0.001## which was used in the previous post #7 (I noticed an error in the code that generated the 3D surface by the way after you pointed out the expected minimum value. More on that later.).
$$\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2 \rightarrow \Delta h = h\sqrt{\left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2}$$
which is given by the case ##f = ab## when the uncertainties in the two measurements are not related.
##h## and ##\Delta d## can be substituted directly; for ##\Delta \tan\alpha## we need the case ##f^* = a^*\tan(b^*A^*)##, which results in ##\Delta \tan\alpha = \sec(\alpha)\Delta \alpha##. (I think I explained it better in post #5, although I missed the square signs.)
Finally, we can express the uncertainty in ##h## as
$$\Delta h = d\tan\alpha \sqrt{\left(\frac{\Delta d}{d}\right)^2+\left(\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}\right)^2}$$
If there is an error so far in this expression I certainly cannot find it. I don't substitute the expressions for ##\Delta d = 0.1d## and ##\Delta \alpha = 0.001## to keep it more general.
With that expression of ##\Delta h##, I can already work on minimizing it. Either through derivatives or numerically finding the lowest point in the 3D surface.

BvU said:
My 'mistake': the minimum absolute error isn't very exciting: it is zero for ##d=0## or ##\alpha=0##.

What I did in the paragraph preceding was to show that the relative error in ##h## is minimized for ##\alpha = \pi/4##. Its value is at least 10%. The second term ##0.001*2/\sin(2\alpha)## is virtually never making a difference.

How you can find a minimum ##\Delta h = 0.01## is a mystery to me. Didn't you say ##\Delta d = 0.1 d## ?
Or is it simply that you started the ##d## axis at ##d=0.1## ?

In the second figure I gather the blue line is where ##d\tan\alpha = 20## ? And your red dot minimum ##\Delta h## is when ##d = 40## m ? And what ##\alpha##? (Where I would expect a minimum ##\Delta h = 2## m for ##d=20## m and ##\alpha = \pi/4##)
You are totally right to point out the strange values I had before. I took a closer look at the code and I realized I introduced a typo so ##\Delta d## was not correctly defined. I was plotting for ##\Delta d = 0.1## instead of ##\Delta d = 0.1d##. I corrected it and added a bit more info to the legend of the plot. I was printing results in the console but I didn't share that. Now the main information is all in the same place.
[Attached plot: corrected Δh surface with Δd = 0.1d, with the optimal point and its value in the legend]

However, I noticed this method is too dependent on mesh refinement. The red dot moved along the curve and the uncertainty was always extremely close to ##\Delta h = 2##. That's when I realized the whole blue curve sits pretty much at ##\Delta h = 2##, which I confirmed by including a plane and calculating the intersection.
(The legend is not being displayed correctly for some reason. The orange points are the intersection between the curve and the plane.)
[Attached plot: the Δh curve intersected with the plane Δh = 2 (orange points)]


My conclusion from those results is that, at least for the defined ##\Delta d = 0.1d## and ##\Delta \alpha = 0.001##, the uncertainty almost doesn't change in the studied range (ignoring extreme angles) which is quite counterintuitive to me, to be honest.

BvU said:
And $$\frac {\sec^2\alpha }{\tan\alpha }=\frac{1}{\sin\alpha\cos\alpha}=\frac{2}{\sin(2\alpha)}.$$ So much easier to understand! :wink:
I believe you want to apply that to the expression of ##\Delta h##, but is it possible? In there, both the numerator and denominator are squared. Still, I have applied a trigonometric simplification below as you suggested.


I'm having a hard time swallowing the conclusion from the numerical analysis. I'll try to find the minimum value of the surface with partial derivatives too. I should get the same result if the previous process was correct.
$$\Delta h = d\tan\alpha \sqrt{\left(\frac{\Delta d}{d}\right)^2+\left(\frac{\sec(\alpha)\Delta \alpha}{\tan\alpha}\right)^2}$$
$$\frac{\sec\alpha}{\tan\alpha}=\frac{1}{\frac{\sin\alpha}{\cos\alpha}\cos\alpha}=\frac{1}{\sin\alpha}$$
$$\Delta h = d\tan\alpha \sqrt{\left(\frac{\Delta d}{d}\right)^2+\left(\frac{\Delta \alpha}{\sin\alpha}\right)^2}$$
$$\Delta d = 0.1d$$
$$\Delta \alpha = 0.001$$
$$\Delta h = d\tan\alpha \sqrt{0.1^2+\left(\frac{0.001}{\sin\alpha}\right)^2}$$
$$\frac{\partial \Delta h}{\partial d} = 0 \rightarrow \sqrt{\frac{1}{1000000 \sin^{2}\left({\alpha}\right)} + \frac{1}{100}} \tan\left({\alpha}\right) = 0 $$
[Attached plot: ∂Δh/∂d as a function of α]

The derivative becomes 0 in intervals of ##\alpha = n\pi## but our function only goes from ##0 < \alpha < \pi/2##.
$$\frac{\partial \Delta h}{\partial \alpha} = 0 \rightarrow \frac{10001d \left|\sin\left({\alpha}\right)\right|}{1000 \cos^{2}\left({\alpha}\right) \sqrt{10000 \sin^{2}\left({\alpha}\right) + 1}}=0$$
[Attached plot: ∂Δh/∂α for several values of d]

This expression has two variables. Evaluating it for some values of ##d##, it's possible to conclude that the derivative becomes 0 only at intervals of ##\alpha = n\pi##, but our function only goes from ##0 < \alpha < \pi/2##.
So I arrived at the (0, 0) solution which is not too useful...

I'll try adding ##\tan\alpha = h/d## again, using an initial guess for ##h## which will be ##h_g=20##. This is because not all combinations of ##d## and ##\alpha## can actually occur. Their uncertainties may be uncorrelated, but only certain combinations of values will actually describe the height of the building.
$$\Delta h = d\tan\alpha \sqrt{0.1^2+\left(\frac{0.001}{\sin\alpha}\right)^2} = h_g \sqrt{0.1^2+\left(\frac{0.001}{\sin(\arctan(\frac{h_g}{d}))}\right)^2}$$
[Attached plot: Δh as a function of d along the constraint tanα = h_g/d]


Now I arrive again at the conclusion that, no matter the distance, the uncertainty will be the same. I find it very counterintuitive, but it's the result I get from that expression of ##\Delta h##. I feel like the result I got in post #3 made more sense. I checked ##\Delta h = d\tan\alpha \sqrt{0.1^2+\left(\frac{0.001}{\sin\alpha}\right)^2}## using all the methods I can think of and the results and conclusions are the same. It's just that I have a hard time believing them.
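A direct numeric check of that flat behaviour (same assumed guess ##h_g = 20##):
Flatness_check:
import numpy as np

h_g = 20.0
for d in (2, 5, 20, 50, 200):
    alpha = np.arctan(h_g / d)
    dh = d * np.tan(alpha) * np.sqrt(0.1**2 + (0.001 / np.sin(alpha))**2)
    print(f"d = {d:5.0f} m  ->  Δh = {dh:.3f} m")   # stays very close to 2 m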
 

  • #11
erobz said:
How about the usual optimization problem?

$$ \epsilon = ( 1+k) x \tan ( \theta + \beta ) - x \tan \theta $$
I'm not familiar with that. In fact, I don't know the origin of that expression.
I'll try first to understand the conclusions from the method proposed by @BvU because I'm struggling with it already and later I can try the one you propose.
 
  • #12
Juanda said:
I'm not familiar with that. In fact, I don't know the origin of that expression.
I'll try first to understand the conclusions from the method proposed by @BvU because I'm struggling with it already and later I can try the one you propose.
I could be out in left field, don't worry about it.
 
  • #13
Is it possible the formula for ##\Delta h## is like this?
$$\Delta h = \sqrt{\left(\frac{\partial h}{\partial d} \Delta d\right)^2 + \left(\frac{\partial h}{\partial \alpha} \Delta \alpha\right)^2}$$
It looks a lot like what I did in post #3 but it includes the squares mentioned in post #4 instead of using absolute values.
I'm aware the result will change because, due to the sum, the square root and the squares don't cancel.
If it is right, I don't know how to go from
$$\left ( \frac {\Delta h} {h} \right )^2 = \left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2 \rightarrow \Delta h = h\sqrt{\left ( \frac {\Delta d} {d} \right )^2 + \left ( \frac {\Delta \tan\alpha} {\tan\alpha} \right )^2}$$
to
$$\Delta h = \sqrt{\left(\frac{\partial h}{\partial d} \Delta d\right)^2 + \left(\frac{\partial h}{\partial \alpha} \Delta \alpha\right)^2}.$$
It looks like it comes from
$$\left(\frac{\partial h}{\partial h}\Delta h\right)^2 = \left(\frac{\partial h}{\partial d} \Delta d\right)^2 + \left(\frac{\partial h}{\partial \alpha} \Delta \alpha\right)^2 \rightarrow \Delta h = \sqrt{\left(\frac{\partial h}{\partial d} \Delta d\right)^2 + \left(\frac{\partial h}{\partial \alpha} \Delta \alpha\right)^2}$$
So the similarities with the expression you proposed are certainly there but I'm just guessing.

Comparing the results of post #3.
$$\Delta h = \left| \frac{\partial h}{\partial d} \right| \Delta d + \left| \frac{\partial h}{\partial \alpha} \right| \Delta \alpha.$$
[Attached plot: Δh(d) using the post #3 formula]


And this new version with the squares and square roots.
$$\Delta h = \sqrt{\left(\frac{\partial h}{\partial d} \Delta d\right)^2 + \left(\frac{\partial h}{\partial \alpha} \Delta \alpha\right)^2}$$
[Attached plot: Δh(d) using the quadrature formula]


@BvU, you mentioned my initial guess was pessimistic. This new version with the squares gives lower uncertainties, as you predicted.
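A quick side-by-side of the two formulas for one set of assumed values (##d = 20##, ##\alpha = \pi/4##, ##\Delta d = 0.1d##, ##\Delta \alpha = 0.001##):
Sum_vs_quadrature:
import numpy as np

d, alpha = 20.0, np.pi / 4
dd, da = 0.1 * d, 0.001

dh_dd = np.tan(alpha)            # ∂h/∂d = tanα
dh_da = d / np.cos(alpha) ** 2   # ∂h/∂α = d·sec²α

linear_sum = abs(dh_dd) * dd + abs(dh_da) * da
quadrature = np.sqrt((dh_dd * dd) ** 2 + (dh_da * da) ** 2)
print(f"linear sum: {linear_sum:.4f} m, quadrature: {quadrature:.4f} m")
The quadrature value is never larger than the linear sum, which matches the "slightly pessimistic" remark.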
 
  • #14
Juanda said:
I think there might be a misunderstanding here. In that instance, the d refers to the distance, not the derivative. Once again proving I didn't make the best choice for the symbols.
The derivative of ##\tan\alpha## is ##\sec^2\alpha##, not ##\sec\alpha##. There are a lot of ##d## so I don't know which one you are referring to. I do mean the derivative.

Compare (##{\bf\color {red} d}## is distance, ##d## is the infinitesimal) $$dh =\frac{\partial h}{\partial {\bf\color {red} d}}\, d{\bf\color {red} d} +\frac{\partial h}{\partial {\alpha}}\, d{\alpha}$$ which for error propagation becomes
$$\begin{align*}
\left (\Delta h\right )^2 &=\left (\frac{\partial h}{\partial {\bf \color {red} d}}\right )^{\!2} \left (\Delta{\bf\color {red} d} \right )^{\!2}
+\left (\frac{\partial h}{\partial {\alpha}}\right )^{\!2} \left (\Delta{\alpha}\right )^2\\
&=\left (\tan\alpha\right )^2 \left (\Delta{\bf \color {red}d} \right )^2
+ \left ({\bf \color {red}d}\, \sec^2\alpha\right )^{\!2} \left (\Delta{\alpha}\right )^2\\ \ \\
\left (\frac{\Delta h}{h}\right )^{\!2} &= \left (\frac{\Delta{\bf \color {red}d} } {\bf \color {red}d}\right )^{\!2}
+\left (\frac {\sec^2\alpha} {\tan\alpha}\right )^{\!2} \left (\Delta{\alpha}\right )^2\\
&= \left (\frac{\Delta{\bf \color {red}d} } {\bf \color {red}d}\right )^{\!2}
+\left (\frac {1} {\sin\alpha\cos\alpha}\right )^{\!2} \left (\Delta{\alpha}\right )^2\\
&= \left (\frac{\Delta{\bf \color {red}d} } {\bf \color {red}d}\right )^{\!2}
+\left (\frac {2} {\sin(2\alpha)}\right )^{\!2} \left (\Delta{\alpha}\right )^2
\end{align*}$$
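A quick numerical spot-check of that identity chain (any angle in ##(0, \pi/2)## works; 0.6 rad is an arbitrary pick):
Identity_spot_check:
import numpy as np

a = 0.6  # arbitrary test angle in (0, pi/2)
print(1 / np.cos(a)**2 / np.tan(a))   # sec²α / tanα
print(1 / (np.sin(a) * np.cos(a)))    # 1 / (sinα·cosα)
print(2 / np.sin(2 * a))              # 2 / sin(2α)  -- all three agree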

##\ ##
 
  • #15
Juanda said:
Is it possible the formula for Δh is like this?
$$\Delta h = \sqrt{\left(\frac{\partial h}{\partial d} \Delta d\right)^2 + \left(\frac{\partial h}{\partial \alpha} \Delta \alpha\right)^2}$$
Yes, that is the correct expression (and wiki has it too :smile:).
(And it's where you start, not where you end up, which is ##(1)##.)

The 'adding in quadrature' comes from statistics: both ##d## and ##\alpha## are supposed to be normally distributed, and then ##h## is also (approximately) normally distributed with standard deviation ##\sigma_h## as outlined.

With ##\Delta {\bf\color {red}d} = 0.1{\bf\color {red}d} ## the relative error in ##h## does not depend on ##{\bf\color {red}d}## .

##\ ##
 
  • #16
Thank you very much. It's much clearer now.
I calculated the absolute uncertainty which is a function of two variables.
$$\begin{align*}
\left (\Delta h\right )^2 &=\left (\frac{\partial h}{\partial {\bf \color {red} d}}\right )^{\!2} \left (\Delta{\bf\color {red} d} \right )^{\!2}
+\left (\frac{\partial h}{\partial {\alpha}}\right )^{\!2} \left (\Delta{\alpha}\right )^2\\
&=\left (\tan\alpha\right )^2 \left (\Delta{\bf \color {red}d} \right )^2
+ \left ({\bf \color {red}d}\, \sec^2\alpha\right )^{\!2} \left (\Delta{\alpha}\right )^2
\end{align*}$$
So I later used ##\tan\alpha=h/d## and a preliminary guess for ##h## to get rid of one of the variables (##\alpha##) and find the ##d## where the uncertainty is the lowest for a building of a given height.
[Attached plot: absolute uncertainty Δh as a function of d for the guessed height]


And you were working with the relative uncertainty because it conveniently gets rid of one of the variables by itself due to the chosen expressions of ##\Delta d## and ##\Delta \alpha##.
$$\begin{align*}
\left (\frac{\Delta h}{h}\right )^{\!2} &= \left (\frac{\Delta{\bf \color {red}d} } {\bf \color {red}d}\right )^{\!2}
+\left (\frac {\sec^2\alpha} {\tan\alpha}\right )^{\!2} \left (\Delta{\alpha}\right )^2\\
&= \left (\frac{\Delta{\bf \color {red}d} } {\bf \color {red}d}\right )^{\!2}
+\left (\frac {1} {\sin\alpha\cos\alpha}\right )^{\!2} \left (\Delta{\alpha}\right )^2\\
&= \left (\frac{\Delta{\bf \color {red}d} } {\bf \color {red}d}\right )^{\!2}
+\left (\frac {2} {\sin(2\alpha)}\right )^{\!2} \left (\Delta{\alpha}\right )^2
\end{align*}$$

I think I get it now. Thanks!
However, I don't see the point of minimizing the relative uncertainty instead of the absolute uncertainty. In this particular case it's convenient because one of the variables cancels out thanks to the chosen definitions of ##\Delta d## and ##\Delta \alpha##, but that's not a general thing. If they hadn't canceled, I'd still have needed to use ##\tan\alpha=h/d## with an initial guess for ##h##. Besides, the coordinates for minimum uncertainty and minimum relative uncertainty can be different. In which scenarios is it convenient to minimize the relative uncertainty?
 

