gcloud - TensorFlow Serving prediction not working with object detection pets example


I am trying to run predictions on Google Cloud ML Engine with the TensorFlow object detection pets example, but it doesn't work.

I created the checkpoint using this example: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/running_pets.md

With the help of the TensorFlow team, I was able to create a saved_model to upload to Google Cloud ML Engine: https://github.com/tensorflow/models/issues/1811

Now I can upload the model to Cloud ML Engine. Unfortunately, I'm not able to make correct prediction requests to the model. Every time I try a prediction, I get the same error:

Input instances are not in JSON format.

I am trying online predictions with:

gcloud ml-engine predict --model od_test --version v1 --json-instances prediction_test.json 

and batch predictions with:

gcloud ml-engine jobs submit prediction "prediction7" \
    --model od_test \
    --version v1 \
    --data-format text \
    --input-paths gs://ml_engine_test1/prediction_test.json \
    --output-path gs://ml_engine_test1/prediction_output \
    --region europe-west1

I want to submit a list of images as uint8 matrices; the model was exported using the input type image_tensor.

As stated in the documentation here: https://cloud.google.com/ml-engine/docs/concepts/prediction-overview#prediction_input_data, the input JSON should have a particular format. But neither the format for online predictions nor the format for batch predictions works. My latest tests used a single file with the content:

{"instances": [{"values": [1, 2, 3, 4], "key": 1}]} 

and a file with the content:

{"images": [0.0, 0.3, 0.1], "key": 3} {"images": [0.0, 0.7, 0.1], "key": 2} 

Neither of them works. Can anyone tell me what the input format should be?

EDIT

The error for batch processing is:

{
    insertId:  "1a26yhdg2wpxvg6"
    jsonPayload: {
        @type:  "type.googleapis.com/google.cloud.ml.api.v1beta1.PredictionLogEntry"
        error_detail: {
            detail:  "No JSON object could be decoded"
            input_snippet:  "Input snippet is unavailable."
        }
        message:  "No JSON object could be decoded"
    }
    logName:  "projects/tensorflow-test-1-168615/logs/worker"
    payload: {
        @type:  "type.googleapis.com/google.cloud.ml.api.v1beta1.PredictionLogEntry"
        error_detail: {
            detail:  "No JSON object could be decoded"
            input_snippet:  "Input snippet is unavailable."
        }
        message:  "No JSON object could be decoded"
    }
    receiveTimestamp:  "2017-07-28T12:31:23.377623911Z"
    resource: {
        labels: {
            job_id:  "prediction10"
            project_id:  "tensorflow-test-1-168615"
            task_name:  ""
        }
        type:  "ml_job"
    }
    severity:  "ERROR"
    timestamp:  "2017-07-28T12:31:23.377623911Z"
}

The model you exported accepts input like the following for prediction if you use gcloud to submit your requests, both for gcloud ml-engine local predict and for batch prediction:

{"inputs": [[[242, 240, 239], [242, 240, 239], [242, 240, 239], [242, 240, 239], [242, 240, 23]]]} {"inputs": [[[232, 242, 219], [242, 240, 239], [242, 240, 239], [242, 242, 239], [242, 240, 123]]]} ... 

If you're sending requests directly to the service (i.e., not using gcloud), the body of the request would look like:

{"instances": [{"inputs": [[[242, 240, 239], [242, 240, 239], [242, 240, 239], [242, 240, 239], [242, 240, 23]]]}]} {"instances": [{"inputs": [[[232, 242, 219], [242, 240, 239], [242, 240, 239], [242, 242, 239], [242, 240, 123]]]}]} 

The input tensor name should be "inputs" because that is what we've specified in signature.inputs. The value of each JSON object is a 3-D array, as you can tell here. The outer dimension is None so as to support batched input. No "instances" wrapper is needed (unless you use the HTTP API directly). Note that you cannot specify "key" in the input unless you modify the graph to include an extra placeholder and output it untouched using tf.identity.
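To illustrate that last point, here is a rough TensorFlow 1.x sketch of the key-passthrough idea; it is not the object detection exporter's actual code, and the placeholder names and surrounding detection graph are assumptions:

import tensorflow as tf

# Add a "key" placeholder and echo it back untouched via tf.identity,
# then expose both the image input and the key in the serving signature.
image_input = tf.placeholder(tf.uint8, shape=[None, None, None, 3], name="image_tensor")
key_input = tf.placeholder(tf.string, shape=[None], name="key")
key_output = tf.identity(key_input)

# ... build the detection graph on image_input here ...

signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={"inputs": image_input, "key": key_input},
    outputs={"key": key_output},  # plus the detection output tensors
)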

Also, as mentioned in the GitHub issue, the online service may not work due to the large amount of memory the model requires. We are working on that.

