Training

When it comes to any ML workflow, one of the most critical steps is designing the model architecture. As with the previous steps, the Core Engine handles this through the configuration file, via a main block called trainer.

Main block: trainer

Structurally, it includes:

| Attributes | Description | Required |
| --- | --- | --- |
| fn | string which identifies the type of the model to use for the training process | True |
| params | a set of key-value pairs for the model parameters | True |

Examples

The example below shows how to use one of the built-in models in the Core Engine, namely an autoencoder. Per its definition, such a model can be adjusted through hyperparameters like train_steps, eval_steps, learning_rate or loss.

Python SDK
from cengine import PipelineConfig

p = PipelineConfig()

# Use the built-in autoencoder and adjust its hyperparameters
p.trainer.fn = 'autoencoder'
p.trainer.params = {'train_steps': 300,
                    'eval_steps': 200,
                    'learning_rate': 0.015,
                    'loss': 'mean_squared_error'}
YAML
trainer:
  fn: autoencoder
  params:
    train_steps: 300
    eval_steps: 200
    learning_rate: 0.015
    loss: mean_squared_error
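Under the hood, the trainer block is just a small mapping with the two required keys from the table above. As a plain-Python sketch (independent of the Core Engine SDK; the function name below is illustrative, not part of the API), validating such a block could look like:

```python
def validate_trainer_block(block):
    """Check that a trainer block carries the two required keys."""
    if not isinstance(block.get('fn'), str):
        raise ValueError("trainer.fn must be a string identifying the model")
    if not isinstance(block.get('params'), dict):
        raise ValueError("trainer.params must be a mapping of hyperparameters")
    return block

# The autoencoder example above, expressed as a plain dict
trainer = validate_trainer_block({
    'fn': 'autoencoder',
    'params': {'train_steps': 300,
               'eval_steps': 200,
               'learning_rate': 0.015,
               'loss': 'mean_squared_error'},
})
```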

Moreover, on the Core Engine, you can use the same format to bring your own custom model, with its own architecture and set of hyperparameters.

Python SDK
from cengine import PipelineConfig
from cengine import Trainer
from my.module import my_model  # a custom model function

p = PipelineConfig()

# You can either point to an already registered custom model
p.trainer.fn = 'custom_model@version'
p.trainer.params = {'custom_hparam_1': 30,
                    'custom_hparam_2': 20}

# Or even define it on the go from a callable
# (client is an authenticated Core Engine client instance)
p.trainer = Trainer.from_callable(client=client,
                                  fn=my_model,
                                  params={'custom_hparam_1': 30,
                                          'custom_hparam_2': 20})
YAML
trainer:
  fn: custom_model@version
  params:
    custom_hparam_1: 30
    custom_hparam_2: 20

You can learn how to integrate your own model into the Core Engine here.
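The exact signature that Trainer.from_callable expects is covered in that guide. Purely as an illustration (everything below is hypothetical, not the Core Engine API), a custom model function is an ordinary Python callable that receives its hyperparameters from the key-value pairs declared under trainer.params:

```python
def my_model(custom_hparam_1, custom_hparam_2):
    """Hypothetical custom model function: the arguments correspond
    to the key-value pairs declared under trainer.params."""
    # A real implementation would build and return a model here;
    # this sketch only echoes the configuration it received.
    return {'architecture': 'custom',
            'hparams': {'custom_hparam_1': custom_hparam_1,
                        'custom_hparam_2': custom_hparam_2}}

# Called with the params from the example above
model = my_model(custom_hparam_1=30, custom_hparam_2=20)
```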