# Load model in TensorFlow gives a different result than the original one

Homework Helper
TL;DR Summary
I'm using the TensorFlow library in Python. After creating a model and saving it, if I load the entire model, I get inconsistent results.
First of all, I'm using TensorFlow version 2.3.0
The code I'm using is the following:
Python:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint

def get_new_model():
    model = Sequential([
        Conv2D(filters=16, input_shape=(32, 32, 3), kernel_size=(3, 3), activation='relu', name='conv_1'),
        Conv2D(filters=8, kernel_size=(3, 3), activation='relu', name='conv_2'),
        MaxPooling2D(pool_size=(4, 4), name='pool_1'),
        Flatten(name='flatten'),
        Dense(units=32, activation='relu', name='dense_1'),
        Dense(units=10, activation='softmax', name='dense_2')
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

checkpoint_path = 'model_checkpoints'
checkpoint = ModelCheckpoint(filepath=checkpoint_path, save_weights_only=False, save_freq='epoch', verbose=1)

model = get_new_model()
model.fit(x_train, y_train, epochs=3, callbacks=[checkpoint])
Up to here there is no problem: I create the model, compile it, and train it on some data, using ModelCheckpoint to save the entire model.
The problem comes when I try the following:
Python:
from tensorflow.keras.models import load_model

model2 = load_model(checkpoint_path)

model.evaluate(x_test, y_test)
model2.evaluate(x_test, y_test)
The first evaluation returns an accuracy of 0.477, while the second returns an accuracy of 0.128, which for 10 classes is essentially random guessing.
Where's the error? The two models are supposed to be identical, and in fact they give the same value for the loss function up to 16 decimal places.

lomidrevo
I don't have any experience with checkpoints in TF, but maybe you could try saving the complete model following this guide:
https://www.tensorflow.org/guide/saved_model

In the checkpoint guide, they state:

> The phrase "Saving a TensorFlow model" typically means one of two things:
>
> 1. Checkpoints, OR
> 2. SavedModel.
>
> Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
>
> The SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).
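As a sketch of what that guide describes (the toy model and the path 'full_model' here are made up for illustration, not the original poster's code), saving the complete model and loading it back should reproduce predictions exactly:

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

# tiny toy model, purely for illustration
model = Sequential([
    Dense(4, input_shape=(3,), activation='relu'),
    Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# save the entire model (architecture + weights + optimizer state), then reload it
model.save('full_model')
restored = load_model('full_model')

# identical weights and architecture should give identical predictions
x = np.random.rand(5, 3).astype('float32')
same = np.allclose(model.predict(x), restored.predict(x))
print(same)
```

If the two evaluations still diverge after a round-trip like this, the weights alone can't be the explanation.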

Homework Helper
Yes, my problem is that this code should work in theory. So either I'm doing something wrong, which is mainly what I'd like to find out, or there's some bug in TensorFlow. I checked, and the weights of the two models are the same. I don't know what else could affect the evaluate function, but as far as I know, two models with the same architecture and the same weights should perform identically on the same data.
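For reference, here is one way to check that the weights really match. The helper function below is hypothetical, not part of the TensorFlow API; in practice you would pass it model.get_weights() and model2.get_weights(), which return lists of NumPy arrays. Stand-in arrays are used here so the snippet runs on its own:

```python
import numpy as np

def weights_match(w1, w2, atol=1e-7):
    """Return True if two lists of weight arrays are element-wise equal within atol."""
    if len(w1) != len(w2):
        return False
    return all(a.shape == b.shape and np.allclose(a, b, atol=atol)
               for a, b in zip(w1, w2))

# stand-in arrays; replace with model.get_weights() and model2.get_weights()
a = [np.ones((3, 3)), np.zeros(5)]
b = [np.ones((3, 3)), np.zeros(5)]
print(weights_match(a, b))  # True
```

A comparison like this checks every kernel and bias, so a True result means any difference in evaluate must come from something other than the stored parameters.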

Homework Helper
Gold Member
Have you tried creating the checkpoint file after you have trained the model?

Homework Helper
I have tried defining the checkpoint after creating the model; it doesn't help. I can't define it after the fit method (the one that trains the model), because the checkpoint is an argument to that method.
But that is not the problem: even if I drop the callback and save the model manually with the save method after training, I still get the same behavior.