Deploy Model

Deployment using TF Serving

Your downloaded model is a TensorFlow SavedModel, so it can be deployed with any mechanism that TensorFlow Serving supports.
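
If you want to inspect the SavedModel before serving it, the saved_model_cli tool that ships with TensorFlow prints its signatures; the directory below is a placeholder for wherever you downloaded the model:

saved_model_cli show --dir /path/to/downloaded/model --all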

Deploy locally using TF Serving

Install the TensorFlow Model Server (on Debian/Ubuntu; this requires the TensorFlow Serving APT repository to be set up first):

apt-get install tensorflow-model-server

Point an environment variable to the model directory (the path you downloaded the model to using cengine pipeline model):

os.environ["MODEL_DIR"] = MODEL_DIR
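
If you are running the subsequent commands from a regular shell rather than a Python session, the equivalent is a shell export; the path below is a placeholder for wherever you downloaded the model:

export MODEL_DIR=/path/to/downloaded/model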

Deploy the model:

tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=fashion_model \
  --model_base_path="${MODEL_DIR}" >server.log 2>&1 &

Check the server logs with:

tail server.log
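
Once the server is running, you can check that the model loaded correctly by querying TF Serving's model status endpoint (this assumes the default REST port 8501 and the model name fashion_model used above):

curl http://localhost:8501/v1/models/fashion_model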

Deploy using TF Serving and Docker

It is even easier to use TF Serving with Docker. The steps are the same as before, but with fewer commands:

# Download the TensorFlow Serving Docker image
docker pull tensorflow/serving

# Start a TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
  -v "$MODEL_DIR:/models/name_of_model" \
  -e MODEL_NAME=name_of_model \
  tensorflow/serving &
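
With the container running, the same status check as before works against the mapped port; the model name here comes from the MODEL_NAME variable above:

curl http://localhost:8501/v1/models/name_of_model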

How to make a request to your served model

TF Serving defines a standard REST and gRPC API for communicating with a served model.

A good example of requesting predictions from TF Serving models can be found here.
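
As a minimal sketch, a prediction request is a POST to the model's :predict REST endpoint under the model name used above (fashion_model locally, name_of_model in the Docker example). The payload below is a placeholder; the "instances" list must match your model's input signature:

# Placeholder payload: shape and values must match the model's inputs
curl -X POST http://localhost:8501/v1/models/fashion_model:predict \
  -d '{"instances": [[1.0, 2.0, 5.0]]}'

TF Serving responds with a JSON object containing a predictions list.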

Deployment on Cloud Endpoint

We are working hard to bring automatic cloud deployment of your models to the cengine. Please see our roadmap for an indication of when this feature will be released.