Monitor

Unfortunately, this page is still under development. We are working hard to bring you complete documentation.

Evaluation

After running a pipeline, typically the first thing to do is to evaluate it. You can do this by running:

cengine pipeline evaluate <pipeline_id>
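
For example, assuming a finished pipeline run with the (purely illustrative) ID 3 exists in your active workspace, the call looks like this:

# "3" is an illustrative pipeline ID, not a real one; substitute the ID of one of your own runs
cengine pipeline evaluate 3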

TensorBoard

TensorBoard is one of the most widely used visualization tools for evaluating machine learning training runs. The Core Engine has built-in TensorBoard support.
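
As a minimal sketch of what that looks like outside the Core Engine tooling: assuming the training logs of a run are available in a local directory (the path below is an assumption for illustration, not a location the Core Engine documents), you can always point a stock TensorBoard instance at them:

# Assumption: ./logs/<pipeline_id> is an illustrative location for the training logs of a run;
# substitute wherever your logs actually live.
tensorboard --logdir ./logs/<pipeline_id>

TensorBoard then serves its dashboard on http://localhost:6006 by default.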

TensorFlow Model Analysis

TensorFlow Model Analysis (TFMA) is another tool that lets you evaluate a single pipeline run. It lets you see not only the overall performance metrics of your pipeline, but also the slicing metrics that you specified under the Evaluator key in your config.

:::warning
The TFMA visualization requires additional dependencies. Install and enable the corresponding Jupyter notebook extension before using it:

jupyter nbextension install --py --symlink tensorflow_model_analysis
jupyter nbextension enable --py tensorflow_model_analysis
:::
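
The commands above only register the notebook extension; they assume the tensorflow_model_analysis Python package itself is already installed. If it is not, you can typically get it from PyPI first (assuming a standard pip-based environment):

# Installs the TFMA package; the notebook extension commands above depend on it being present
pip install tensorflow-model-analysis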

Comparison

Evaluation, however, should go beyond individual pipeline executions. As your pipelines are already grouped together in workspaces, why not compare them with each other? Direct comparison lets you judge the performance and results of different configurations against each other.

Pipelines can be compared within a workspace. To compare multiple pipelines, set the relevant workspace, and run:

cengine pipeline compare

This will open a local web app in your browser that helps you compare the results of different pipeline runs.

Web App Tool

Summary tab

The summary tab simply shows the names of the pipelines in the workspace.

Analysis tab

The analysis tab is the core of the compare tool. On the left-hand side, you can see the pipeline runs; on the right-hand side, you can see all the hyperparameters used in these pipelines. You can use the widgets to toggle different configurations and compare the pipelines across metrics, data slices, and hyperparameters.