How Do You Optimize a and b in Linear Regression for Minimal Deviation?

SUMMARY

This discussion concerns choosing the parameters 'a' and 'b' in the linear model y = ax + b so as to minimize deviations from experimental data points. The two specific objectives are to minimize the sum of absolute deviations and to minimize the maximum deviation. The Ordinary Least Squares (OLS) method is suggested as a starting point, although it minimizes the sum of squared deviations rather than either stated objective. Its formulas are a = Σ(Xi - X̄)(Yi - Ȳ) / Σ(Xi - X̄)² and b = Ȳ - aX̄, where X̄ and Ȳ are the averages of the x and y data sets, respectively.

PREREQUISITES
  • Understanding of linear regression concepts
  • Familiarity with Ordinary Least Squares (OLS) method
  • Basic knowledge of statistical deviation measures
  • Ability to compute averages and sums in data sets
NEXT STEPS
  • Learn about the Ordinary Least Squares (OLS) method in detail
  • Explore techniques for minimizing absolute deviations in regression analysis
  • Study the concept of maximum deviation and its implications in regression
  • Investigate software tools for performing linear regression analysis, such as Python's NumPy and SciPy libraries
USEFUL FOR

Students and professionals in statistics, data analysis, and machine learning who are looking to understand and apply linear regression techniques for data modeling and optimization.

oleandora

Homework Statement



I've been given a set of data
x:  0     0.5    0.7    1.5    1.75
y:  0.5   0.72   0.51   1.5    1.63

Given y=ax+b

For these data points and the linear model, I have to:
1. minimize the sum of the absolute values of the deviations between the experimental values of y and the values predicted by the linear relation;
2. minimize the maximum deviation between the experimental values of y and the values predicted by the linear relation.

The question is: what are the optimal values of a and b in each case, and what are the corresponding values of the objective function?

Homework Equations


The Attempt at a Solution


What I know is
y = ax + b + e, where e is the deviation, but that's it. I don't have the slightest idea about the next steps.
I know it seems simple, but I have very little background in linear regression.
Help! T_T
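For reference, both objectives in the problem statement can be posed as small linear programs: objective 1 is a least-absolute-deviations (LAD) fit, and objective 2 is a minimax (Chebyshev) fit. Below is a minimal sketch, assuming SciPy's `linprog` is available; the variable names and LP formulation are illustrative, not from the thread.

```python
import numpy as np
from scipy.optimize import linprog

# Data from the problem statement.
x = np.array([0.0, 0.5, 0.7, 1.5, 1.75])
y = np.array([0.5, 0.72, 0.51, 1.5, 1.63])
n = len(x)

# --- Objective 1: minimize the sum of |deviations| (LAD fit).
# Variables: [a, b, e_1..e_n]; minimize sum(e_i) subject to
# e_i >= |y_i - (a*x_i + b)|, expressed as two linear inequalities each.
c1 = np.concatenate([[0.0, 0.0], np.ones(n)])
A1 = np.zeros((2 * n, 2 + n))
rhs1 = np.zeros(2 * n)
for i in range(n):
    # y_i - a*x_i - b <= e_i   ->   -a*x_i - b - e_i <= -y_i
    A1[2 * i, :2] = [-x[i], -1.0]
    A1[2 * i, 2 + i] = -1.0
    rhs1[2 * i] = -y[i]
    # a*x_i + b - y_i <= e_i   ->    a*x_i + b - e_i <= y_i
    A1[2 * i + 1, :2] = [x[i], 1.0]
    A1[2 * i + 1, 2 + i] = -1.0
    rhs1[2 * i + 1] = y[i]
bounds1 = [(None, None), (None, None)] + [(0, None)] * n
res1 = linprog(c1, A_ub=A1, b_ub=rhs1, bounds=bounds1)
a1, b1 = res1.x[:2]

# --- Objective 2: minimize the maximum |deviation| (Chebyshev fit).
# Variables: [a, b, t]; minimize t subject to |y_i - (a*x_i + b)| <= t.
c2 = [0.0, 0.0, 1.0]
A2 = np.zeros((2 * n, 3))
rhs2 = np.zeros(2 * n)
for i in range(n):
    A2[2 * i] = [-x[i], -1.0, -1.0]      # y_i - a*x_i - b <= t
    rhs2[2 * i] = -y[i]
    A2[2 * i + 1] = [x[i], 1.0, -1.0]    # a*x_i + b - y_i <= t
    rhs2[2 * i + 1] = y[i]
res2 = linprog(c2, A_ub=A2, b_ub=rhs2,
               bounds=[(None, None), (None, None), (0, None)])
a2, b2, t = res2.x
```

After solving, `res1.fun` is the minimized sum of absolute deviations and `res2.fun` (equal to `t`) is the minimized maximum deviation; `res1.x[:2]` and `res2.x[:2]` hold the corresponding (a, b).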
 
Hey, this is the first time I've tried to help someone on this forum, so don't shoot me if I'm wrong. If we're at the same level, I guess you're supposed to use OLS (Ordinary Least Squares).

a = \sum (X_i - \bar{X})(Y_i - \bar{Y}) / \sum (X_i - \bar{X})^2

b = \bar{Y} - a\bar{X}

where the sums run from i = 1 to n, and \bar{X}, \bar{Y} denote the averages of the X and Y values. (Since the model is written y = ax + b, 'a' is the slope.)
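As a quick numeric check, the OLS formulas can be evaluated directly on the posted data with NumPy (one of the libraries mentioned in the thread), with the slope and intercept named to match the model y = ax + b. Note again that OLS minimizes the sum of squared deviations, which is neither of the two objectives the assignment asks for; this is just a sketch of the computation.

```python
import numpy as np

# Data from the problem statement.
x = np.array([0.0, 0.5, 0.7, 1.5, 1.75])
y = np.array([0.5, 0.72, 0.51, 1.5, 1.63])

# OLS slope and intercept for y = a*x + b:
#   a = sum((x_i - xbar)(y_i - ybar)) / sum((x_i - xbar)^2)
#   b = ybar - a*xbar
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()
print(a, b)  # slope ≈ 0.714, intercept ≈ 0.336
```

The same result can be obtained with `np.polyfit(x, y, 1)`, which fits a degree-1 polynomial by least squares.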
 
