Using feed_dict with Tensorflow Estimator API, "You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,784]"

I am trying to use a custom Estimator together with a feed_dict. Based on several relevant questions, such as this one, I have arrived at the following code. Note that I am returning dataset from input_fn, and not next_example, next_label. The error is

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,784]
     [[{{node Placeholder}}]]

and the full stack trace is further down.

I am missing some fundamental concept about where and how exactly the data X and y are fed to the graph. Can anyone shed light on what I am doing wrong? Thanks!
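
For reference, the next_example, next_label variant from those linked answers looks roughly like this (a sketch rather than my actual code, reusing the same placeholder idea):

def input_fn():
    X_pl = tf.compat.v1.placeholder(X.dtype, [None, X.shape[1]])
    y_pl = tf.compat.v1.placeholder(y.dtype, [None, y.shape[1]])

    dataset = tf.compat.v1.data.Dataset.from_tensor_slices((X_pl, y_pl))
    dataset = dataset.batch(32)
    iterator = dataset.make_initializable_iterator()

    iterator_initializer_hook.iterator_initializer_func = \
        lambda sess: sess.run(iterator.initializer,
                              feed_dict={X_pl: X, y_pl: y})

    # Returning the iterator's tensors means the Estimator consumes
    # this exact iterator -- the one the hook initializes.
    next_example, next_label = iterator.get_next()
    return next_example, next_label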

Code:

import numpy as np
import tensorflow as tf

class IteratorInitializerHook(tf.compat.v1.train.SessionRunHook):
    def __init__(self):
        super(IteratorInitializerHook, self).__init__()
        self.iterator_initializer_func = None # Will be set in the input_fn

    def after_create_session(self, session, coord):
        # Initialize the iterator with the data feed_dict
        self.iterator_initializer_func(session)

def get_inputs(X, y):
    iterator_initializer_hook = IteratorInitializerHook()

    def input_fn():
        X_pl = tf.compat.v1.placeholder(X.dtype, [None, X.shape[1]])
        y_pl = tf.compat.v1.placeholder(y.dtype, [None, y.shape[1]])

        dataset = tf.compat.v1.data.Dataset.from_tensor_slices((X_pl, y_pl))
        iterator = dataset.make_initializable_iterator()
        dataset = dataset.batch(32)
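        # Note: batch() returns a new Dataset object; the iterator above
        # was created from the un-batched dataset.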

        iterator_initializer_hook.iterator_initializer_func = \
            lambda sess: sess.run(iterator.initializer,
                                  feed_dict={X_pl: X, y_pl: y})

        return dataset

    return input_fn, iterator_initializer_hook

class MyMnist:
    def __init__(self, params, **kwargs):
        self.loss = 0
        self.optimizer = tf.compat.v1.train.AdamOptimizer()

        self.W = tf.compat.v1.Variable(tf.zeros([784,10]), trainable=True,
                                       name="W")
        self.b = tf.compat.v1.Variable(tf.zeros([10]), trainable=True,
                                       name="b")

    def build_model(self, features, labels, mode):
        """
        Build model and return output
        """
        is_training = mode == tf.estimator.ModeKeys.TRAIN

        output = tf.compat.v1.nn.softmax(
            tf.matmul(features, self.W) + self.b
        )
        return output

    def build_total_loss(self, model_outputs, features, labels, mode):
        """
        Return computed loss
        """
        loss = tf.compat.v1.losses.softmax_cross_entropy(
            labels, model_outputs
        )
        return loss

    def build_optimizer(self):
        """
        Setup the optimizer.

        :returns: The optimizer
        """
        print("build_optimizer")
        lr = 0.01
        optimizer = tf.compat.v1.train.AdamOptimizer(
            learning_rate=lr,
            name="Adam"
        )
        return optimizer

    def build_train_ops(self, loss):
        """
        Setup optimizer and build train ops.

        :param Tensor loss: The loss tensor
        :return: Train ops
        """
        print("build_train_ops")
        self.optimizer = self.build_optimizer()
        return self.optimizer.minimize(
            loss, global_step=tf.compat.v1.train.get_global_step()
        )

def model_fn(features, labels, mode, params):
    print('model_fn')

    model = MyMnist(params)
    output = model.build_model(features, labels, mode)
    loss = model.build_total_loss(output, features, labels, mode)

    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = model.build_train_ops(loss)

    log_hook = \
        tf.compat.v1.train.LoggingTensorHook(
            {"W is": model.W, "b is": model.b},
            every_n_iter=1)
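    # Note: train_op is only assigned above when mode == TRAIN.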

    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, train_op=train_op,
        training_hooks=[log_hook]
    )

def train(argv=None):
    print("train")
    params = { 'mode': 'train', 'model_dir': './model_dir',
               'training': {'steps': 10 },
               'size': 100
    }

    est = tf.estimator.Estimator(
        model_fn,
        model_dir=params['model_dir'],
        params=params,
    )

    (X_train, l_train), (X_test, l_test) = tf.keras.datasets.mnist.load_data()
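
    # One-hot encode the integer labels (float32, 10 classes for MNIST).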
    y_train = np.zeros((l_train.shape[0], l_train.max()+1), dtype=np.float32)
    y_train[np.arange(l_train.shape[0]), l_train] = 1
    y_test = np.zeros((l_test.shape[0], l_test.max()+1), dtype=np.float32)
    y_test[np.arange(l_test.shape[0]), l_test] = 1

    X_train = X_train.reshape((X_train.shape[0],-1)).astype(np.float32)
    X_test = X_test.reshape((X_test.shape[0],-1))
    train_input_fn, train_iterator_initializer_hook = \
        get_inputs(X_train, y_train)
    test_input_fn, test_iterator_initializer_hook = get_inputs(X_test, y_test)

    if params['mode'] == 'train':

        est.train(
            input_fn=train_input_fn,
            hooks=[train_iterator_initializer_hook],
            steps=params['training']['steps']
        )

if __name__ == "__main__":
    tf.compat.v1.disable_eager_execution()
    tf.compat.v1.app.run(main=train)
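
For comparison, a placeholder-free input_fn that embeds the numpy arrays directly in the graph would look roughly like this (a sketch; the shuffle buffer and batch size are illustrative):

def make_input_fn(X, y, batch_size=32):
    def input_fn():
        # The arrays are converted to constant tensors inside the
        # Estimator's graph, so nothing needs to be fed at run time.
        dataset = tf.compat.v1.data.Dataset.from_tensor_slices((X, y))
        dataset = dataset.shuffle(1000).batch(batch_size)
        return dataset
    return input_fn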

Stack trace:

$ python main.py 
train
INFO:tensorflow:Using default config.
I0128 18:51:12.061203 139970842568512 estimator.py:1822] Using default config.
INFO:tensorflow:Using config: {'_model_dir': './model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
I0128 18:51:12.061676 139970842568512 estimator.py:191] Using config: {'_model_dir': './model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/graphsaint_env/lib/python3.8/site-packages/tensorflow/python/training/training_util.py:235: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
W0128 18:51:12.383132 139970842568512 deprecation.py:317] From /srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/graphsaint_env/lib/python3.8/site-packages/tensorflow/python/training/training_util.py:235: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:tensorflow:From main.py:21: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_initializable_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
W0128 18:51:12.397508 139970842568512 deprecation.py:317] From main.py:21: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_initializable_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
INFO:tensorflow:Calling model_fn.
I0128 18:51:12.404662 139970842568512 estimator.py:1162] Calling model_fn.
model_fn
build_train_ops
build_optimizer
INFO:tensorflow:Done calling model_fn.
I0128 18:51:12.476285 139970842568512 estimator.py:1164] Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
I0128 18:51:12.477094 139970842568512 basic_session_run_hooks.py:546] Create CheckpointSaverHook.
data size:0.033848 MB
data size:0.033848 MB
INFO:tensorflow:Graph was finalized.
I0128 18:51:12.530334 139970842568512 monitored_session.py:246] Graph was finalized.
2021-01-28 18:51:12.530598: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-01-28 18:51:12.539575: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2400000000 Hz
2021-01-28 18:51:12.543713: I tensorflow/compiler/xla/servic

1 Answer

Waiting for answers
