How to Find Least Squares Estimates for Object Weights with Normal Error?

In summary, the problem involves weighing two objects separately and then together, giving three observations. The scale used has normally distributed error with constant variance. The least squares estimates for the weights of the two objects can be found using the formula ## \hat{\theta}=(X^TX)^{-1}X^TY##, where ##X## is the design matrix of the measurements and ##Y## is the vector of observations. The final answer for the estimates is ##\begin{bmatrix} \hat{\theta_1} \\ \hat{\theta_2} \end{bmatrix} =\frac{1}{3}\begin{bmatrix} 2y_1-y_2+y_3 \\ -y_1+2y_2+y_3 \end{bmatrix}##.
  • #1
bonfire09

Homework Statement


Suppose that object 1 weighs [itex]\theta_1[/itex] and object 2 weighs [itex]\theta_2[/itex]. Each object is weighed once separately and then the two are weighed together, giving three observations [itex] y_1,y_2,y_3[/itex]. The scale gives unbiased readings with normally distributed error (constant variance). Find the least squares estimates for [itex]\theta_1[/itex] and [itex]\theta_2[/itex].

Homework Equations


The least squares estimates for ##\theta_1## and ##\theta_2## are given by ## \hat{\theta}=(X^TX)^{-1}X^TY##, where ##X## is the design matrix and ##Y## is the vector of observations.

The Attempt at a Solution


I wrote the full model as follows:
[itex] y_1=\theta_1+\epsilon_1[/itex]
[itex] y_2=\theta_2+\epsilon_2[/itex]
[itex] y_3=\theta_1+\theta_2+\epsilon_3[/itex]
Then it follows that in matrix form we get ## \begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \end{bmatrix} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_1+\theta_2 \end{bmatrix}+\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix}##, that is, ## \begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}+\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix}##.

Then from here I got as my final answer, using ##X^TX=\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}##, ##(X^TX)^{-1}=\frac{1}{3}\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}##, and ##X^TY=\begin{bmatrix} y_1+y_3 \\ y_2+y_3 \end{bmatrix}##:
## \hat{\theta}=(X^TX)^{-1}X^TY=\begin{bmatrix} \hat{\theta_1} \\ \hat{\theta_2} \end{bmatrix} =\begin{bmatrix} \frac{1}{3}(2y_1-y_2+y_3) \\ \frac{1}{3}(-y_1+2y_2+y_3) \end{bmatrix}##.
The thing I am not sure about is whether I wrote my full model correctly. I was thinking that the observations are just the true weight of the object with some random error, but I am not sure.
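As a quick sanity check on the algebra, here is a minimal Python/NumPy sketch; the true weights, noise level, and random seed are just illustrative, not part of the problem:

[code]
import numpy as np

# Design matrix: rows are the three weighings
# (object 1 alone, object 2 alone, both together).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Illustrative true weights and simulated noisy observations.
rng = np.random.default_rng(0)
theta_true = np.array([3.0, 5.0])
y = X @ theta_true + rng.normal(scale=0.1, size=3)

# Least squares estimate: solve (X^T X) theta = X^T y.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Closed form derived above.
closed_form = np.array([(2*y[0] - y[1] + y[2]) / 3,
                        (-y[0] + 2*y[1] + y[2]) / 3])

print(theta_hat)                            # close to [3.0, 5.0]
print(np.allclose(theta_hat, closed_form))  # True
[/code]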
 
  • #2
bonfire09 said:

Homework Statement


Suppose that object 1 weighs [itex]\theta_1[/itex] and object 2 weighs [itex]\theta_2[/itex]. Each object is weighed once separately and then the two are weighed together, giving three observations [itex] y_1,y_2,y_3[/itex]. The scale gives unbiased readings with normally distributed error (constant variance). Find the least squares estimates for [itex]\theta_1[/itex] and [itex]\theta_2[/itex].

Homework Equations


The least squares estimates for ##\theta_1## and ##\theta_2## are given by ## \hat{\theta}=(X^TX)^{-1}X^TY##, where ##X## is the design matrix and ##Y## is the vector of observations.

The Attempt at a Solution


I wrote the full model as follows:
[itex] y_1=\theta_1+\epsilon_1[/itex]
[itex] y_2=\theta_2+\epsilon_2[/itex]
[itex] y_3=\theta_1+\theta_2+\epsilon_3[/itex]
Then it follows that in matrix form we get ## \begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \end{bmatrix} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_1+\theta_2 \end{bmatrix}+\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_1+\epsilon_2 \end{bmatrix}##, that is, ## \begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}+\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_1+\epsilon_2 \end{bmatrix}##.

Then from here I got as my final answer, using ##X^TX=\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}##, ##(X^TX)^{-1}=\frac{1}{3}\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}##, and ##X^TY=\begin{bmatrix} y_1+y_3 \\ y_2+y_3 \end{bmatrix}##:
## \hat{\theta}=(X^TX)^{-1}X^TY=\begin{bmatrix} \hat{\theta_1} \\ \hat{\theta_2} \end{bmatrix} =\begin{bmatrix} \frac{1}{3}(2y_1-y_2+y_3) \\ \frac{1}{3}(-y_1+2y_2+y_3) \end{bmatrix}##.
The thing I am not sure about is whether I wrote my full model correctly. I was thinking that the observations are just the true weight of the object with some random error, but I am not sure.

According to your model you are dealing with a very smart scale. It says to itself "I remember the errors ##\epsilon_1## and ##\epsilon_2## that I made when the guy weighed objects 1 and 2 separately. I see now that he is weighing the two objects together, so I had better add up the two errors I made before". As I said, a very smart scale indeed.
 
  • #3
bonfire09 said:
I was thinking that the observations are just the true weight of the object with some random error.
Right, but the random error applies to the measurements, not to the weights.
 
  • #4
Oops, I fixed that now. I don't know why I put it there in the first place. I changed ##\epsilon_1+\epsilon_2## to ##\epsilon_3##, and I think it looks correct now.
 
  • #5
bonfire09 said:
Oops, I fixed that now. I don't know why I put it there in the first place. I changed ##\epsilon_1+\epsilon_2## to ##\epsilon_3##, and I think it looks correct now.

Yes, I would say so.
 

1. What is least squares estimation?

Least squares estimation is a statistical method used to find the best-fit line or curve for a set of data points. It minimizes the sum of the squared differences between the observed data and the predicted values from the model.
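For example, fitting a straight line to a handful of points can be done in a couple of lines of Python with NumPy (a minimal sketch; the data here are made up):

[code]
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Degree-1 least squares fit: returns the slope and intercept
# that minimize the sum of squared vertical distances to the line.
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)
[/code]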

2. How is least squares estimation used in scientific research?

Least squares estimation is commonly used in scientific research to analyze and model data. It is used to determine the relationship between two or more variables and to make predictions based on the observed data.

3. What are the assumptions of least squares estimation?

The main assumptions of least squares estimation are that the errors in the data are normally distributed, the relationship between the variables is linear, and the errors are independent of each other. It is also assumed that the errors have equal variance.
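In practice these assumptions are often probed by examining the residuals of a fit, as in this minimal NumPy sketch (the data are made up):

[code]
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.9, 3.1, 5.0, 7.2, 8.8])

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Under the assumptions, residuals should scatter around zero
# with no visible trend and roughly constant spread.
print(residuals)
print(residuals.mean(), residuals.std())
[/code]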

4. How is least squares estimation different from other regression methods?

Unlike some other regression methods, least squares estimation minimizes the sum of the squared errors rather than, say, the sum of the absolute errors. As a result, outliers and extreme data points have a greater impact on the fit. In addition, the usual inference for least squares assumes the errors are normally distributed, while some other methods do not require this assumption.
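A one-dimensional illustration of this outlier sensitivity: when estimating a single location parameter, least squares gives the sample mean, while least absolute deviations gives the median (a minimal sketch; the data are made up):

[code]
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.05, 10.0])  # last point is an outlier

print(data.mean())      # least squares estimate; pulled toward 10.0
print(np.median(data))  # least absolute deviations estimate; stays near 1.0
[/code]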

5. What is the difference between ordinary least squares and weighted least squares estimation?

Ordinary least squares (OLS) estimation treats all data points equally and minimizes the sum of the squared errors. Weighted least squares (WLS) estimation assigns each data point a weight, typically the inverse of its error variance, and minimizes the weighted sum of squared errors. WLS is often used when the data have unequal variances or when some data points are more reliable than others.
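For instance, in the weighing problem above, if the combined weighing were believed to be noisier, it could be down-weighted (a minimal NumPy sketch; the data and weights are arbitrary):

[code]
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([3.1, 4.9, 8.5])

# OLS: all observations weighted equally.
ols = np.linalg.solve(X.T @ X, X.T @ y)

# WLS with diagonal weight matrix: down-weight the third observation.
W = np.diag([1.0, 1.0, 0.25])
wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(ols, wls)
[/code]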
