Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others

0 votes
177 views
in Technique by (71.8m points)

python - LSTM Model not having any variance during evaluation

I have a question regarding the evaluation of an LSTM model. I have trained an LSTM model and stored it with model.save(...). Now I want to load the model with load_model and evaluate it on the validation dataset. Since neural networks are stochastic, I run the evaluation several times and compute the mean and variance of the different metrics I am interested in. I was shocked to find that after the first run, all consecutive runs produce exactly the same value for every metric. I don't think that is right, but I don't know where the error occurs. So my questions are: what is my mistake in setting up the validation of my model, and how can I fix it?

Here are the code snippets that should explain what I am doing:

Compile and fit the Model

def compile_and_fit(  hparams, 
                      MAX_EPOCHS, 
                      model_path ):
  
  
  window = WindowGenerator( input_width= hparams[HP_WINDOW_SIZE], 
                            label_width=hparams[HP_WINDOW_SIZE], shift=1,
                            label_columns=['q_MARI'], batch_size = hparams[HP_BATCH_SIZE])
  
  model = tf.keras.models.Sequential([
      tf.keras.layers.LSTM(hparams[HP_NUM_UNITS], return_sequences=True, name="LSTM_1"),
      tf.keras.layers.Dropout(hparams[HP_DROPOUT], name="Dropout_1"),
      tf.keras.layers.LSTM(hparams[HP_NUM_UNITS], return_sequences=True, name="LSTM_2"),
      tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))
  ])

  learning_rate = hparams[HP_LEARNING_RATE]
  model.compile(loss=tf.losses.MeanSquaredError(),
                optimizer=tf.optimizers.Adam(learning_rate=learning_rate),
                metrics=get_metrics())

  history = model.fit(window.train, 
                      epochs=MAX_EPOCHS,
                      validation_data=window.val,
                      callbacks= get_callbacks(model_path))
  
  _, a, _, _, _, _ = model.evaluate(window.val)

  return a, model, history

Train and save it

a, model, history = compile_and_fit(hparams=hparams,
                                    MAX_EPOCHS=MAX_EPOCHS,
                                    model_path=run_path)
model.save(run_path)

Load and evaluate it

model = tf.keras.models.load_model(os.path.join(hparam_path, model_name),
                                   custom_objects={"max_error": max_error,
                                                   "median_absolute_error": median_absolute_error,
                                                   "rev_metric": rev_metric,
                                                   "nse_metric": nse_metric})
model.compile(loss=tf.losses.MeanSquaredError(), optimizer="adam", metrics=get_metrics())

metric_values = np.empty(shape=(nr_runs, len(metrics)), dtype=float)
for j in range(nr_runs):
    window = WindowGenerator(input_width=hparam_vals[i], label_width=hparam_vals[i], shift=1,
                             label_columns=['q_MARI'])
    metric_values[j] = np.array(model.evaluate(window.val))

means = metric_values.mean(axis=0)
varis = metric_values.var(axis=0)
print(f'means: {means}, varis: {varis}')

The results I am getting: [screenshots of the evaluation results]

For setting up the training I followed these two guides: https://www.tensorflow.org/tutorials/structured_data/time_series and https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams

question from:https://stackoverflow.com/questions/65937266/lstm-model-not-having-any-variance-during-evaluation


1 Answer

0 votes
by (71.8m points)

An LSTM is not stochastic at inference time. The randomness in neural networks comes from training (random weight initialization, data shuffling, dropout applied during training), but once the model is trained and saved, its weights are fixed. Layers like Dropout are only active when training=True, so model.evaluate runs the network deterministically: evaluating the same weights on the same data will always produce the same metrics. Zero variance across evaluation runs is therefore the expected behavior, not an error.
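You can verify this yourself with a minimal sketch (toy random data standing in for the asker's WindowGenerator, which is not reproduced here): two consecutive model.evaluate calls on the same data return identical loss values, even with a Dropout layer in the model.

```python
import numpy as np
import tensorflow as tf

# Toy data: (batch, timesteps, features) and matching per-timestep targets.
x = np.random.rand(8, 5, 3).astype("float32")
y = np.random.rand(8, 5, 1).astype("float32")

# Architecture similar in spirit to the question's model (LSTM + Dropout).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(5, 3)),
    tf.keras.layers.LSTM(4, return_sequences=True),
    tf.keras.layers.Dropout(0.5),  # inactive at inference time
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
])
model.compile(loss="mse", optimizer="adam")

# Evaluation is deterministic: Dropout only fires when training=True.
first = model.evaluate(x, y, verbose=0)
second = model.evaluate(x, y, verbose=0)
print(first == second)  # True -- zero variance across runs is expected
```

If you want run-to-run variance to measure, it has to come from retraining the model (different initializations or seeds), not from re-evaluating a single saved model.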

