Deployment using TF Serving
Deploy locally using TF Serving
Point an environment variable to the model directory (the path you downloaded the model to using cengine pipeline model):
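A minimal sketch of this step; MODEL_DIR is an assumed variable name and the path is a placeholder for wherever cengine pipeline model placed the model on your machine:

```shell
# MODEL_DIR is an assumed name; point it at the directory that
# contains your downloaded SavedModel version folders (e.g. 1/).
export MODEL_DIR="$HOME/models/my_model"
echo "$MODEL_DIR"
```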
Deploy the model:
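A sketch of the deploy command, assuming tensorflow_model_server is installed locally (e.g. via the tensorflow-model-server apt package); my_model, the port, and the log file name are illustrative assumptions, not values prescribed by cengine:

```shell
# Serve the model over TF Serving's REST API on port 8501,
# writing server output to a local log file in the background.
tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=my_model \
  --model_base_path="${MODEL_DIR}" \
  > server.log 2>&1 &
```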
Check the server logs to confirm that the model was loaded successfully.
Deploy using TF Serving and Docker
It is even easier to use TF Serving with Docker. The steps are the same as before, but with fewer commands:
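A sketch of the Docker equivalent, following the official tensorflow/serving image conventions; my_model and MODEL_DIR are assumed names carried over from the local setup above:

```shell
# Mount the model directory into the container and serve it
# on the standard TF Serving REST port.
docker run -p 8501:8501 \
  --mount type=bind,source="${MODEL_DIR}",target=/models/my_model \
  -e MODEL_NAME=my_model \
  -t tensorflow/serving
```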
How to make a request to your served model
TF Serving defines standard REST and gRPC APIs for communicating with a served model.
A good example of requesting predictions from TF Serving models can be found here.
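As a sketch, a prediction request against the REST API of a server started as above; the model name, port, and instance values are placeholder assumptions that depend on your deployment and your model's input shape:

```shell
# POST a JSON payload to TF Serving's predict endpoint.
# The request body follows the {"instances": [...]} row format.
curl -X POST http://localhost:8501/v1/models/my_model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1.0, 2.0, 3.0]]}'
```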
Deployment on Cloud Endpoint
We are working hard to bring automatic cloud deployment of your models to the cengine. Please see our roadmap for an indication of when this feature will be released.