Final piece of the puzzle...

Downloading a trained model

To download your model as a TensorFlow SavedModel to a directory of your choice:

cengine pipeline model PIPELINE_ID --output_path /path/to/empty/dir/
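
To quickly sanity-check the download, you can load the SavedModel locally and look at its serving signature. This is a minimal sketch, assuming TensorFlow is installed and using the same placeholder path as above; if the export places the SavedModel in a numbered version subdirectory, point the load at that subdirectory instead.

import tensorflow as tf

# Load the downloaded SavedModel and grab its default serving signature.
model = tf.saved_model.load("/path/to/empty/dir/")
infer = model.signatures["serving_default"]

# Inspect which (raw) inputs the model expects and what it returns.
print(infer.structured_input_signature)
print(infer.structured_outputs)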

TensorFlow Version

All models are trained and exported with TensorFlow 2.1.0.

What about the preprocessing?

All the preprocessing that you defined in the preprocessing key is natively embedded into the model (with the exception of sequential transformations - see below). This means that the SavedModel can handle completely raw data on its own.

For now, sequential preprocessing such as forward_filling, as well as anything defined in the timeseries key, is NOT embedded into the model. To serve these models in production, please make sure the data is preprocessed with the same steps outlined in the pipeline.
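
For example, if your pipeline used forward filling, a minimal sketch of re-applying it with pandas before prediction could look like the following; the file name, column names, and data layout are placeholders for illustration.

import pandas as pd

# Load new, raw data with a time column (names here are placeholders).
raw = pd.read_csv("new_data.csv", parse_dates=["timestamp"])
raw = raw.set_index("timestamp").sort_index()

# Re-apply the pipeline's forward filling before sending rows to the model.
preprocessed = raw.ffill()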

Deployment using TF Serving

Your downloaded model is a TensorFlow SavedModel, so it can be deployed using any mechanism that TensorFlow Serving supports.

Deploy locally using TF Serving

Install the model server package (tensorflow-model-server is distributed via the TensorFlow Serving APT repository, which needs to be added first; see the TensorFlow Serving installation guide):

apt-get install tensorflow-model-server

Point an environment variable to the model directory (the path you downloaded the model to with cengine pipeline model):

export MODEL_DIR=/path/to/empty/dir/

Deploy the model:

tensorflow_model_server \
--rest_api_port=8501 \
--model_name=name_of_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1 &

Check the server logs with:

tail server.log

Deploy using TF Serving and Docker

It is even easier to use TF Serving with Docker. The steps are the same as before, but with fewer commands:

# Download the TensorFlow Serving Docker image
docker pull tensorflow/serving
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$MODEL_DIR:/models/name_of_model" \
-e MODEL_NAME=name_of_model \
tensorflow/serving &
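
Whichever way you started the server, you can confirm that the model has been loaded by querying TensorFlow Serving's model status endpoint. A minimal sketch, using the model name and REST port from the commands above:

import requests

# The model name and port match the serving commands above.
status = requests.get("http://localhost:8501/v1/models/name_of_model")
print(status.json())  # should report the model version state, e.g. AVAILABLE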

How to make a request to your served model

TensorFlow Serving defines a standard API (REST and gRPC) for communicating with a served model.

A good example of requesting predictions from a TF Serving model can be found here.
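
As a rough illustration, a REST prediction request follows the TensorFlow Serving predict format sketched below; the feature keys and values are placeholders and depend on your model's serving signature.

import json

import requests

# The "instances" envelope is the TF Serving REST predict format; the feature
# keys and values below are placeholders for whatever your model expects.
payload = {"instances": [{"feature_a": 1.0, "feature_b": "some_value"}]}

response = requests.post(
    "http://localhost:8501/v1/models/name_of_model:predict",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
print(response.json())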

Deployment on Cloud Endpoint

We are working hard to bring automatic cloud deployment of your models to the Core Engine. Please check our roadmap for an indication of when this feature will be released.