python 3.x - TensorFlow OutOfRangeError on tf.train.shuffle_batch won't get fixed
I'm stuck on an error and can't figure out how to fix it. I have read many similar questions on Stack Overflow, but none of the suggested fixes helped with my problem.
I get an OutOfRangeError on a RandomShuffleQueue when I try to reconstruct image data saved in a .tfrecord file.
My reconstruction code looks like this:
    import tensorflow as tf

    image_width = 640
    image_height = 480
    tfrecords_filename = 'records/train_record_000.tfrecord'

    def read_and_decode(filename_queue):
        reader = tf.TFRecordReader()
        _, serialized_example = reader.read(filename_queue)
        features = tf.parse_single_example(
            serialized_example,
            features={
                'height': tf.FixedLenFeature([], tf.int64),
                'width': tf.FixedLenFeature([], tf.int64),
                'image_raw': tf.FixedLenFeature([], tf.string),
                'bbox-label_text': tf.FixedLenFeature([], tf.string)
            })

        image = tf.decode_raw(features['image_raw'], tf.uint8)
        annotation = tf.decode_raw(features['bbox-label_text'], tf.uint8)

        height = tf.cast(features['height'], tf.int32)
        width = tf.cast(features['width'], tf.int32)

        image_shape = tf.stack([height, width, 3])
        annotation_shape = tf.stack([height, width, 1])

        image = tf.reshape(image, image_shape)
        annotation = tf.reshape(annotation, annotation_shape)

        image_size_const = tf.constant((image_height, image_width, 3), dtype=tf.int32)
        annotation_size_const = tf.constant((image_height, image_width, 1), dtype=tf.int32)

        resized_image = tf.image.resize_image_with_crop_or_pad(image=image,
                                                               target_height=image_height,
                                                               target_width=image_width)
        resized_annotation = tf.image.resize_image_with_crop_or_pad(image=annotation,
                                                                    target_height=image_height,
                                                                    target_width=image_width)

        images, annotations = tf.train.shuffle_batch([resized_image, resized_annotation],
                                                     batch_size=4,
                                                     capacity=200,
                                                     num_threads=2,
                                                     min_after_dequeue=4)
        return images, annotations
And the code that runs it:
    if __name__ == '__main__':
        filename_queue = tf.train.string_input_producer([tfrecords_filename])
        image, annotation = read_and_decode(filename_queue)

        init_op = tf.group(tf.global_variables_initializer(),
                           tf.local_variables_initializer())

        with tf.Session() as sess:
            sess.run(init_op)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)

            for i in range(124):
                print('iteration {}/124'.format(i))
                img, anno = sess.run([image, annotation])
                print(img[0, :, :, :].shape)
                print('current batch')

            coord.request_stop()
            coord.join(threads)
I am trying this on a small dataset first, just to get it up and running, so I only have 124 examples in the .tfrecord file.
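As a sanity check on that count, records can be tallied without TensorFlow by walking the TFRecord on-disk framing (each record is an 8-byte little-endian length, a 4-byte length CRC, the payload, and a 4-byte payload CRC). This is just a sketch: it skips CRC validation and assumes the file is not corrupted, and the demo file it builds is synthetic, not a real .tfrecord:

    import struct
    import tempfile, os

    def count_tfrecords(path):
        """Count records by walking the TFRecord framing:
        [8-byte LE length][4-byte length CRC][payload][4-byte payload CRC].
        CRCs are never checked, so a corrupted file may be miscounted."""
        n = 0
        with open(path, 'rb') as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break  # end of file (or truncated header)
                length, = struct.unpack('<Q', header)
                f.seek(4 + length + 4, 1)  # skip length CRC, payload, payload CRC
                n += 1
        return n

    # Demo on a synthetic file with two framed records (zeroed CRC fields,
    # since count_tfrecords never looks at them):
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        for payload in (b'first record', b'second'):
            tmp.write(struct.pack('<Q', len(payload)))  # length
            tmp.write(b'\x00' * 4)                      # length CRC (dummy)
            tmp.write(payload)
            tmp.write(b'\x00' * 4)                      # payload CRC (dummy)
        tmp_path = tmp.name

    print(count_tfrecords(tmp_path))  # → 2
    os.remove(tmp_path)

On the real file, a result other than 124 would point at the writing side rather than the reading side.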
The error message is as follows:
    2017-07-28 11:22:22.244499: W tensorflow/core/kernels/queue_base.cc:294] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1139, in _do_call
        return fn(*args)
      File "/usr/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1121, in _run_fn
        status, run_metadata)
      File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
        next(self.gen)
      File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
        pywrap_tensorflow.TF_GetCode(status))
    tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 4, current size 0)
        [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_UINT8, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
This happens already on the first iteration.
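For context: with queue runners, an OutOfRangeError at the very first dequeue usually means the reader thread crashed earlier and closed the queue empty, so the real error is upstream. One suspicious step in the pipeline above is tf.reshape(annotation, [height, width, 1]) - 'bbox-label_text' decodes to only a few uint8 bytes of text, not height*width bytes. A plain NumPy analog of that size mismatch (the label bytes and the 480x640 dimensions are illustrative, taken from the script above):

    import numpy as np

    # A label string decodes to only a handful of uint8 bytes...
    label_bytes = np.frombuffer(b'cat 12 34 56 78', dtype=np.uint8)
    print(label_bytes.size)  # → 15

    # ...so forcing it into an image-sized shape must fail, just like the
    # tf.reshape(annotation, [height, width, 1]) would inside the reader thread.
    try:
        label_bytes.reshape(480, 640, 1)
    except ValueError as err:
        print('reshape failed:', err)

If that is the cause here, running the read ops directly (without shuffle_batch) should surface the reshape error instead of the queue symptom.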