Score a model in the Data Science Workspace UI

NOTE
Data Science Workspace is no longer available for purchase.
This documentation is intended for existing customers with prior entitlements to Data Science Workspace.

In Adobe Experience Platform Data Science Workspace, scoring is performed by feeding input data into an existing trained Model. The results are then stored as a new batch in a specified output dataset, where they can be viewed.

This tutorial demonstrates the steps required to score a Model in the Data Science Workspace user interface.

Getting started

To complete this tutorial, you must have access to Experience Platform. If you do not have access to an organization in Experience Platform, contact your system administrator before proceeding.

This tutorial requires a trained Model. If you do not have a trained Model, follow the train and evaluate a Model in the UI tutorial before continuing.

Create a new scoring run

A scoring run is created using optimized configurations from a previously completed and evaluated training run. The set of optimal configurations for a Model is typically determined by reviewing training run evaluation metrics.

Find the best-performing training run whose configurations you want to use for scoring, then open it by selecting the hyperlink attached to its name.

From the training run's Evaluation tab, select Score at the top right of the screen. A new scoring workflow begins.

Select the input scoring dataset and select Next.

Select the output scoring dataset; this is the dedicated dataset where the scoring results are stored. Confirm your selection and select Next.
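
If you plan to automate this step later, dataset IDs can be looked up through the Catalog Service API. The following is a minimal sketch; the credentials are placeholders, and the header set and "limit" parameter should be verified against the Catalog API reference.

```python
import requests

# Placeholder credentials -- obtain real values from your
# Adobe Developer Console project.
ACCESS_TOKEN = "<ACCESS_TOKEN>"
API_KEY = "<API_KEY>"
ORG_ID = "<IMS_ORG_ID>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
}

# List datasets registered in Catalog; the response is an object
# keyed by dataset ID.
resp = requests.get(
    "https://platform.adobe.io/data/foundation/catalog/dataSets",
    headers=headers,
    params={"limit": 20},
)
resp.raise_for_status()

for dataset_id, dataset in resp.json().items():
    print(dataset_id, dataset.get("name"))
```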

The final step in the workflow prompts you to configure your scoring run. These configurations are used by the Model during the scoring run.
Note that you cannot remove inherited parameters that were set during the Model's creation. You can edit or revert non-inherited parameters by double-clicking the value or by selecting the revert icon while hovering over the entry.
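
The available parameters depend entirely on the recipe that produced the Model. As an illustration only, a scoring configuration might look like the hypothetical sketch below, pairing inherited training parameters with run-specific dataset IDs; none of these key names come from a specific recipe.

```python
# Hypothetical scoring configuration; the parameter names below are
# illustrative only -- your recipe defines the actual keys and values.
scoring_config = {
    # Inherited from the Model's creation; shown in the UI but
    # cannot be removed here.
    "learning_rate": 0.1,
    "n_estimators": 100,
    # Run-specific parameters you can edit before starting the run.
    "scoringDataSetId": "<INPUT_DATASET_ID>",
    "scoringResultsDataSetId": "<OUTPUT_DATASET_ID>",
}
```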

Review and confirm the scoring configurations, then select Finish to create and execute the scoring run. You are directed to the Scoring Runs tab, where the new scoring run is shown with a Pending status.
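
The same step can be scripted against the Sensei Machine Learning API. The sketch below assumes the experiment-run creation endpoint and content type shown in the Data Science Workspace API documentation; verify both against the API reference, and treat all IDs and the response shape as placeholders.

```python
import requests

# Placeholder credentials and IDs -- replace with your own.
ACCESS_TOKEN = "<ACCESS_TOKEN>"
API_KEY = "<API_KEY>"
ORG_ID = "<IMS_ORG_ID>"
EXPERIMENT_ID = "<EXPERIMENT_ID>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
    # Content type is an assumption based on the Sensei ML API
    # reference; verify the exact profile string for your release.
    "content-type": "application/vnd.adobe.platform.sensei+json;profile=experimentRun.v1.json",
}

# Create a run in "score" mode on an existing experiment.
resp = requests.post(
    f"https://platform.adobe.io/data/sensei/experiments/{EXPERIMENT_ID}/runs",
    headers=headers,
    json={"mode": "score"},
)
resp.raise_for_status()

# "id" in the response body is an assumption to verify.
print("Created scoring run:", resp.json().get("id"))
```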

A scoring run can be displayed with one of the following statuses:

  • Pending
  • Complete
  • Failed
  • Running

Statuses are updated automatically. Proceed to the next step if the status is Complete or Failed.
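
You can also poll the status programmatically. This is a minimal sketch that assumes a run-status endpoint on the Sensei Machine Learning API and a "state" field in its response; verify both against the API reference before relying on them.

```python
import time
import requests

# Placeholder credentials and IDs -- replace with your own.
ACCESS_TOKEN = "<ACCESS_TOKEN>"
API_KEY = "<API_KEY>"
ORG_ID = "<IMS_ORG_ID>"
EXPERIMENT_ID = "<EXPERIMENT_ID>"
RUN_ID = "<RUN_ID>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
}

# Assumed status endpoint; confirm the exact path in the API reference.
status_url = (
    "https://platform.adobe.io/data/sensei/"
    f"experiments/{EXPERIMENT_ID}/runs/{RUN_ID}/status"
)

# Poll until the run leaves the Pending/Running states.
while True:
    resp = requests.get(status_url, headers=headers)
    resp.raise_for_status()
    state = resp.json().get("state", "UNKNOWN")
    print("Run state:", state)
    if state.upper() in ("COMPLETE", "COMPLETED", "FAILED"):
        break
    time.sleep(30)  # avoid hammering the API
```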

View scoring results

To view scoring results, start by selecting a training run.

You are redirected to the training run's Evaluation page. Near the top of the page, select the Scoring Runs tab to view a list of existing scoring runs.

Next, select a scoring run to view the run details.

If the selected scoring run has a status of either “Complete” or “Failed”, the View Activity Logs link is made available. If a scoring run fails, the execution logs can provide useful information for determining the reason for the failure. To download the execution logs, select View Activity Logs.

The View activity logs popover appears. Select a URL to automatically download the associated logs.
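
If you are scripting this step, downloading from the popover's URL is straightforward. The snippet below assumes the URL is pre-signed (so no additional Platform headers are required) and streams the log file to disk; the URL and output filename are placeholders.

```python
import requests

# URL copied from the "View activity logs" popover; assumed to be
# pre-signed, so no extra Platform headers are sent.
log_url = "<ACTIVITY_LOG_URL>"

with requests.get(log_url, stream=True) as resp:
    resp.raise_for_status()
    with open("scoring-run-logs.txt", "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)

print("Saved logs to scoring-run-logs.txt")
```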

You can also view your scoring results by selecting Preview scoring results dataset.

A preview of the output dataset is provided.

For the complete set of scoring results, select the Scoring Results Dataset link found in the right column.
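
To retrieve the complete results programmatically instead, the Data Access API can list the files written by the scoring batch. The endpoint and response fields below follow the Data Access API reference, but treat them as assumptions to verify; the batch ID is a placeholder for the batch produced by your scoring run.

```python
import requests

# Placeholder credentials and IDs -- replace with your own.
ACCESS_TOKEN = "<ACCESS_TOKEN>"
API_KEY = "<API_KEY>"
ORG_ID = "<IMS_ORG_ID>"
BATCH_ID = "<SCORING_OUTPUT_BATCH_ID>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
}

# List the files written by the scoring batch.
resp = requests.get(
    f"https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/files",
    headers=headers,
)
resp.raise_for_status()

# "data" / "dataSetFileId" reflect the documented response shape,
# but verify against the Data Access API reference for your release.
for entry in resp.json().get("data", []):
    print("File ID:", entry.get("dataSetFileId"))
```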

Next steps

This tutorial walked you through the steps for scoring data using a trained Model in Data Science Workspace. To give users within your organization easy access to a machine learning Service for scoring data, follow the tutorial on publishing a Model as a Service in the UI.
