Create a training run
In Experience Platform, select the Models tab in the left navigation, then select the Browse tab to view your existing Models. Find and select the linked name of the Model you wish to train.
All existing training runs with their current training statuses are listed. For Models created using the Data Science Workspace user interface, a training run is automatically generated and executed using the default configurations and input training dataset.
Create a new training run by selecting Train near the top-right of the Model overview page.
Select the training input dataset for the training run, then select Next.
The default configurations provided during the Model's creation are shown; modify them as needed by double-clicking the values. Select Finish to create and execute the training run.
Evaluate the Model
In Experience Platform, select the Models tab in the left navigation, then select the Browse tab to view your existing Models. Find and select the linked name of the Model you wish to evaluate.
All existing training runs with their current training statuses are listed. Once multiple training runs have completed, their evaluation metrics can be compared in the Model evaluation chart. Select an evaluation metric using the dropdown list above the graph.
The Mean Absolute Percent Error (MAPE) metric expresses accuracy as a percentage of the error. This is used to identify the top performing Experiment. The lower the MAPE, the better.
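For reference, MAPE is conventionally computed as the mean of the absolute errors divided by the actual values, expressed as a percentage. The sketch below is a minimal illustration of that calculation, not Data Science Workspace code; the sample values are hypothetical.

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percent Error: mean of |actual - predicted| / |actual|, as a percentage."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

# Hypothetical actuals vs. predictions; errors of 10%, 10%, and ~6.7% average to ~8.9%
print(mape([100, 200, 300], [110, 180, 320]))
```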
The Precision metric describes the fraction of retrieved instances that are relevant. Precision can be seen as the probability that a randomly selected positive prediction is correct.
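In binary-classification terms, precision is the number of true positives divided by the total number of predicted positives. The snippet below is a minimal, self-contained sketch of that calculation with hypothetical labels, shown only to make the definition concrete.

```python
def precision(y_true, y_pred):
    """Precision = true positives / (true positives + false positives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)
    return tp / (tp + fp) if (tp + fp) else 0.0

# 3 predicted positives, 2 of them correct -> precision of about 0.67
print(precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```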
Selecting a specific training run opens its evaluation page, where you can view additional evaluation metrics, configuration parameters, and visualizations specific to that run. You can open this page even before the run has completed.
You can also download activity logs to see the details of the run. Logs are particularly useful for failed runs to see what went wrong.
Hyperparameters cannot be learned during training; a Model must be optimized by testing different combinations of Hyperparameters. Repeat this training and evaluation process until you arrive at an optimized Model, as sketched below.
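Conceptually, this train-and-evaluate loop amounts to a search over hyperparameter combinations, keeping whichever run scores best on your chosen metric. The sketch below illustrates that idea only; the parameter names, grid values, and the `train_and_evaluate` placeholder are hypothetical and stand in for creating training runs and reading their evaluation metrics in the UI.

```python
from itertools import product

# Hypothetical hyperparameter grid; actual names and values depend on your Model's configuration.
grid = {
    "learning_rate": [0.01, 0.1],
    "num_trees": [50, 100],
}

def train_and_evaluate(params):
    """Stand-in for one training run plus evaluation.

    In practice this would create a training run with `params` and return its MAPE
    from the evaluation page; here it returns a dummy score (lower is better).
    """
    return 10.0 / (params["learning_rate"] * params["num_trees"])

best_params, best_score = None, float("inf")
for combo in product(*grid.values()):
    params = dict(zip(grid, combo))
    score = train_and_evaluate(params)
    if score < best_score:
        best_params, best_score = params, score

print(f"Best hyperparameters: {best_params} (MAPE ~ {best_score:.2f})")
```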