Python: fit_transform() vs. transform()

Summary:
The discussion clarifies the difference between the methods fit_transform() and transform() in scikit-learn's PolynomialFeatures. fit_transform() combines the fitting and transforming steps into one, applying the transformation to the training data while fitting the model. In contrast, transform() is used on test data after the model has been fitted, ensuring the test data is transformed based on the learned parameters. The fit() portion of fit_transform() determines the necessary feature combinations for transformation, while transform() applies these combinations to the input data. Overall, understanding these methods is crucial for proper data preprocessing in machine learning workflows.
EngWiPy
Hi,

I noticed that in some cases we first call fit_transform() and afterwards call transform(), like in the following example:

Code:
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_train = np.array([6, 8, 10, 14, 18]).reshape(-1, 1)
X_test = np.array([6, 8, 11, 16]).reshape(-1, 1)

quadratic_featurizer = PolynomialFeatures(degree=2)

X_train_quadratic = quadratic_featurizer.fit_transform(X_train)
X_test_quadratic = quadratic_featurizer.transform(X_test)

Why? What is the difference between the two methods?

Thanks
 
I haven't used PolynomialFeatures before, but fit(), fit_transform(), and transform() are standard methods in scikit-learn.

fit_transform() is essentially the same as calling fit() and then transform(), so it is a shortcut that combines the two calls into one.

So when you do X_train_quadratic = quadratic_featurizer.fit_transform(X_train), you are fitting quadratic_featurizer on X_train and using it to transform X_train itself. It is equivalent to (and is shorthand for):

quadratic_featurizer.fit(X_train)
X_train_quadratic = quadratic_featurizer.transform(X_train)
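
For example, here is a quick sanity check (just a sketch, reusing the arrays from your post; the variable names are mine) that the two spellings produce the same result:

Code:
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_train = np.array([6, 8, 10, 14, 18]).reshape(-1, 1)

# One-step version: fit and transform in a single call
one_step = PolynomialFeatures(degree=2).fit_transform(X_train)

# Two-step version: fit first, then transform
featurizer = PolynomialFeatures(degree=2)
featurizer.fit(X_train)
two_step = featurizer.transform(X_train)

print(np.allclose(one_step, two_step))  # expected to print True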


On the other hand, when you do X_test_quadratic = quadratic_featurizer.transform(X_test), you are using a previously fitted quadratic_featurizer to transform X_test. This fails unless you have previously called either .fit() or .fit_transform() on quadratic_featurizer.
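
To illustrate (a minimal sketch; the exact exception depends on your scikit-learn version, but recent versions raise a NotFittedError), calling transform() before any fit looks like this:

Code:
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.preprocessing import PolynomialFeatures

X_test = np.array([6, 8, 11, 16]).reshape(-1, 1)

unfitted_featurizer = PolynomialFeatures(degree=2)
try:
    unfitted_featurizer.transform(X_test)  # neither fit() nor fit_transform() has been called
except NotFittedError as err:
    print("transform() before fit() failed:", err)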

Hope it makes sense.

I am guessing what you are actually trying to do is:
quadratic_featurizer.fit(X_train)
X_test_quadratic = quadratic_featurizer.transform(X_test)

Although what you did, i.e.:
X_train_quadratic = quadratic_featurizer.fit_transform(X_train)
X_test_quadratic = quadratic_featurizer.transform(X_test)

will also work; you are just unnecessarily transforming X_train by calling fit_transform() instead of fit().
 
Smile Say Hello said:
fit_transform() is essentially the same as calling fit() and then transform() [...] you are just unnecessarily transforming X_train by calling fit_transform() instead of fit().

It makes sense. I used X_train_quadratic = quadratic_featurizer.fit_transform(X_train) because later in my code I use X_train_quadratic to train a model with .fit(), and then test the model's performance on X_test_quadratic.
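
For reference, here is a minimal sketch of that workflow (the LinearRegression model and the target values y_train/y_test below are made up just for illustration; my actual model and data are different):

Code:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

X_train = np.array([6, 8, 10, 14, 18]).reshape(-1, 1)
X_test = np.array([6, 8, 11, 16]).reshape(-1, 1)
y_train = np.array([7, 9, 13, 17.5, 18])   # made-up targets for illustration
y_test = np.array([8, 12, 15, 18])         # made-up targets for illustration

quadratic_featurizer = PolynomialFeatures(degree=2)
X_train_quadratic = quadratic_featurizer.fit_transform(X_train)  # fit on the training data
X_test_quadratic = quadratic_featurizer.transform(X_test)        # reuse the fitted featurizer

model = LinearRegression()
model.fit(X_train_quadratic, y_train)           # train on the transformed training set
print(model.score(X_test_quadratic, y_test))    # evaluate on the transformed test set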

I have one question: what exactly does .fit_transform() fit to the training data X_train? For example, if

X_train = [[1],
           [2],
           [3],
           [4]]

then quadratic_featurizer.fit_transform(X_train) results in

[[1  1  1],
 [1  2  4],
 [1  3  9],
 [1  4 16]]

which is basically the value of the independent variable x_1 in the polynomial equation
y = \beta_0 + \beta_1 x_1 + \beta_2 x_1^2

So, in this case, what does .fit_transform() fit to X_train?

Thanks
 
S_David said:
I have one question: what exactly does .fit_transform() fit to the training data X_train? [...]
 
It looks like it is not actually fitting anything. I think it is called fit_transform() simply because scikit-learn tries to provide a uniform interface, and a lot of other modules in scikit-learn use the same terminology. What fit(), and the fit part of fit_transform(), seems to do is simply determine the combinations of features it needs to return for the given input shape. So when you later call transform() many times, it can skip that step and simply return the values.

So, in this case, the fit() part figures out that there is a single input feature and that x^0, x^1, and x^2 need to be returned, and the transform() part simply computes those columns for each sample on that basis.
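
You can see what the fit step records by inspecting the fitted object. A small sketch (powers_ is a fitted attribute of PolynomialFeatures; get_feature_names_out() exists in recent scikit-learn versions):

Code:
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_train = np.array([[1], [2], [3], [4]])

quadratic_featurizer = PolynomialFeatures(degree=2)
quadratic_featurizer.fit(X_train)

# Exponent of the single input feature in each output column: [[0], [1], [2]]
print(quadratic_featurizer.powers_)

# Human-readable names of the output columns, e.g. ['1' 'x0' 'x0^2']
print(quadratic_featurizer.get_feature_names_out())

# transform() just applies those recorded exponents to every sample
print(quadratic_featurizer.transform(X_train))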
 
Smile Say Hello said:
It looks like it is not actually fitting anything. [...]

Thanks for your replies. It is clearer now.
 
