How to properly serve an object detection model from the TensorFlow Object Detection API?


I am using the TensorFlow Object Detection API (github.com/tensorflow/models/tree/master/object_detection) for an object detection task. Right now I am having a problem serving the trained detection model with TensorFlow Serving (tensorflow.github.io/serving/).

1. The first issue I am encountering is exporting the model to servable files. The Object Detection API kindly includes an export script that can convert ckpt files to pb files with variables. However, the output has no content in the 'variables' folder. I thought this was a bug and reported it on GitHub, but it seems the export is intended to convert variables to constants, so that there are no variables left. Details can be found here.

The flags I use when exporting the saved model are as follows:

    CUDA_VISIBLE_DEVICES=0 python export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path configs/rfcn_resnet50_car_jul_20.config \
        --checkpoint_path resnet_ckpt/model.ckpt-17586 \
        --inference_graph_path serving_model/1 \
        --export_as_saved_model true

It runs fine in Python when I switch --export_as_saved_model to false.
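As a quick sanity check after exporting, it helps to look at what the output directory actually contains: a frozen export has a saved_model.pb but an empty variables/ folder, which is what the question observes. Below is a minimal, self-contained sketch; it builds a stand-in directory under a temp path rather than touching the real serving_model/1, so the layout shown is an assumption for illustration:

```python
import os
import tempfile

# Stand-in for the real export directory (serving_model/1 above), built
# here so the check runs anywhere: a frozen export has a saved_model.pb
# and an empty variables/ folder.
export_dir = os.path.join(tempfile.mkdtemp(), '1')
os.makedirs(os.path.join(export_dir, 'variables'))
open(os.path.join(export_dir, 'saved_model.pb'), 'w').close()

has_graph = os.path.isfile(os.path.join(export_dir, 'saved_model.pb'))
variables = os.listdir(os.path.join(export_dir, 'variables'))
print(has_graph, variables)  # True []
```

An empty variables/ folder here is not by itself an error; it is exactly what constant-folded ("frozen") exports look like.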

But I am still having issues serving the model.

When I try to run:

~/serving$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=gan --model_base_path=<my_model_path> 

I got:

    2017-07-27 16:11:53.222439: external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
    2017-07-27 16:11:53.222497: external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:165] The specified SavedModel has no variables; no checkpoints were restored.
    2017-07-27 16:11:53.222502: external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
    2017-07-27 16:11:53.229463: external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 281805 microseconds.
    2017-07-27 16:11:53.229508: tensorflow_serving/core/loader_harness.cc:86] Loaded servable version {name: gan version: 1}
    2017-07-27 16:11:53.244716: tensorflow_serving/model_servers/main.cc:290] Running ModelServer at 0.0.0.0:9000 ...

I think the model was not loaded, since it shows "The specified SavedModel has no variables; no checkpoints were restored."

But since the variables have been converted to constants, that message seems reasonable. I am not sure what is wrong here.

2. I am not able to use a client to call the server and run detection on a sample image.

The client script is listed below:

    from __future__ import print_function
    from __future__ import absolute_import

    # Communication with the TensorFlow server via gRPC
    from grpc.beta import implementations
    import tensorflow as tf
    import numpy as np
    from PIL import Image

    # TensorFlow Serving stuff to send messages
    from tensorflow_serving.apis import predict_pb2
    from tensorflow_serving.apis import prediction_service_pb2


    # Command line arguments
    tf.app.flags.DEFINE_string('server', 'localhost:9000',
                               'PredictionService host:port')
    tf.app.flags.DEFINE_string('image', '', 'path to image in JPEG format')
    FLAGS = tf.app.flags.FLAGS


    def load_image_into_numpy_array(image):
        (im_width, im_height) = image.size
        return np.array(image.getdata()).reshape(
            (im_height, im_width, 3)).astype(np.uint8)


    def main(_):
        host, port = FLAGS.server.split(':')
        channel = implementations.insecure_channel(host, int(port))
        stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

        # Send request
        request = predict_pb2.PredictRequest()
        image = Image.open(FLAGS.image)
        image_np = load_image_into_numpy_array(image)
        image_np_expanded = np.expand_dims(image_np, axis=0)
        # Call the GAN model to make a prediction on the image
        request.model_spec.name = 'gan'
        request.model_spec.signature_name = 'predict_images'
        request.inputs['inputs'].CopyFrom(
            tf.contrib.util.make_tensor_proto(image_np_expanded))

        result = stub.Predict(request, 60.0)  # 60 secs timeout
        print(result)


    if __name__ == '__main__':
        tf.app.run()
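As an aside, load_image_into_numpy_array relies on PIL's getdata() returning pixels row by row as flat (R, G, B) triples. A small PIL-free sketch (the 2x3 image here is made up for illustration) shows the resulting (height, width, 3) layout:

```python
import numpy as np

# Simulate the flat pixel list PIL's Image.getdata() would return for a
# tiny 2x3 RGB image: six (R, G, B) triples, row by row.
im_width, im_height = 3, 2
flat_pixels = [(v, v, v) for v in range(im_width * im_height)]

# The same reshape the client script performs.
image_np = np.array(flat_pixels).reshape(
    (im_height, im_width, 3)).astype(np.uint8)

print(image_np.shape)  # (2, 3, 3)
print(image_np[1, 0])  # first pixel of the second row -> [3 3 3]
```

The np.expand_dims call in the client then adds a leading batch dimension, giving the (1, height, width, 3) tensor the detection signature expects.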

To match request.model_spec.signature_name = 'predict_images', I modified the exporter.py script in the Object Detection API (github.com/tensorflow/models/blob/master/object_detection/exporter.py), starting at line 289, from:

      signature_def_map={
          signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
              detection_signature,
      },

to:

      signature_def_map={
          'predict_images': detection_signature,
          signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
              detection_signature,
      },

since I have no idea how to call the default signature key.
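For what it's worth, the default key is just the string 'serving_default' (the value of signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY), so a client should also be able to address the unmodified export by that name. A plain-dict sketch of the modified map, where detection_signature is a placeholder rather than the real SignatureDef:

```python
# 'serving_default' is the value of
# signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY in TensorFlow.
DEFAULT_SERVING_SIGNATURE_DEF_KEY = 'serving_default'

# Placeholder standing in for the real detection SignatureDef.
detection_signature = object()

# Both keys point at the same signature, so a client can set
# request.model_spec.signature_name to either name.
signature_def_map = {
    'predict_images': detection_signature,
    DEFAULT_SERVING_SIGNATURE_DEF_KEY: detection_signature,
}

print(sorted(signature_def_map))  # ['predict_images', 'serving_default']
```

With that in mind, adding the extra 'predict_images' key is convenient but not strictly necessary; the client could simply use 'serving_default'.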

When I run the following command:

bazel-bin/tensorflow_serving/example/client --server=localhost:9000 --image=<my_image_file> 

i got following error message:

    Traceback (most recent call last):
      File "/home/xinyao/serving/bazel-bin/tensorflow_serving/example/client.runfiles/tf_serving/tensorflow_serving/example/client.py", line 54, in <module>
        tf.app.run()
      File "/home/xinyao/serving/bazel-bin/tensorflow_serving/example/client.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 48, in run
        _sys.exit(main(_sys.argv[:1] + flags_passthrough))
      File "/home/xinyao/serving/bazel-bin/tensorflow_serving/example/client.runfiles/tf_serving/tensorflow_serving/example/client.py", line 49, in main
        result = stub.Predict(request, 60.0)  # 60 secs timeout
      File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 324, in __call__
        self._request_serializer, self._response_deserializer)
      File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 210, in _blocking_unary_unary
        raise _abortion_error(rpc_error_call)
    grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output ToFloat:0")

I am not quite sure what's going on here.

Initially I thought maybe the client script was not correct, but then I found the AbortionError comes from github.com/tensorflow/tensorflow/blob/f488419cd6d9256b25ba25cbe736097dfeee79f9/tensorflow/core/graph/subgraph.cc. It seems the error occurs when building the graph, which might be caused by the first issue I have.

I am new to this stuff, so I am quite confused. I think I might have gone wrong at the start. Is there any proper way to export and serve the detection model? Any suggestions would be of great help!

The current exporter code doesn't populate the signature field properly, so serving with the model server doesn't work. Apologies for that. A new version with better support for exporting models is coming. It includes some important fixes and improvements needed for serving, such as serving on Cloud ML Engine. See the GitHub issue if you want to try an early version of it.

As for the "The specified SavedModel has no variables; no checkpoints were restored." message, it is expected, for exactly the reason you stated: the variables are converted to constants in the graph. For the "FeedInputs: unable to find feed output ToFloat:0" error, make sure you use TF 1.2 when building the model server.
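As a small aid for that last point, a version string such as tf.__version__ can be checked against the 1.2 requirement with a helper like the one below; the name at_least is made up for illustration, and a literal string stands in for tf.__version__ so the sketch runs without TensorFlow installed:

```python
def at_least(version, required=(1, 2)):
    # Compare only the major.minor components of a version string,
    # e.g. '1.2.1' -> (1, 2), then use tuple ordering.
    parts = tuple(int(p) for p in version.split('.')[:2])
    return parts >= required

print(at_least('1.2.1'))  # True
print(at_least('1.1.0'))  # False
```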

