Integration with other Experience Platform services

Standardization and interoperability are key concepts behind Experience Platform. The integration of JupyterLab on Experience Platform as an embedded IDE allows it to interact with other Experience Platform services, enabling you to utilize Experience Platform to its full potential. The following Experience Platform services are available in JupyterLab:

  • Catalog Service: Access and explore datasets with read and write functionalities.
  • Query Service: Access and explore datasets using SQL, reducing data access overhead when working with large amounts of data (see the example after the note below).
  • Sensei ML Framework: Model development with the ability to train and score data, as well as recipe creation with a single click.
  • Experience Data Model (XDM): Driven by Adobe, Experience Data Model (XDM) is an effort to standardize customer experience data and define schemas for customer experience management.
NOTE
Some Experience Platform service integrations on JupyterLab are limited to specific kernels. Refer to the section on kernels for more details.
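
For example, the Query Service integration lets you run SQL against Platform datasets directly from a Python notebook through Query Service's PostgreSQL-compatible interface. The following is a minimal sketch, assuming you have already retrieved Query Service connection credentials; the host, database name, and credentials shown are placeholders, not real values.

```python
# Minimal sketch: query a Platform dataset through Query Service's
# PostgreSQL-compatible interface. All connection details below are
# placeholders; substitute the credentials issued for your organization.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="example.platform-query.adobe.io",  # placeholder Query Service host
    port=80,                                  # placeholder port
    dbname="prod:all",                        # placeholder database name
    user="YOUR_ORG_ID",                       # placeholder credentials
    password="YOUR_TOKEN",
    sslmode="require",
)

# Run an exploratory SQL query and load the result into a pandas DataFrame.
df = pd.read_sql("SELECT * FROM your_dataset_table LIMIT 10", conn)
print(df.head())
conn.close()
```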

Key features and common operations

Information regarding key features of JupyterLab and instructions on performing common operations are provided in the sections below:

Access JupyterLab

In Adobe Experience Platform, select Notebooks from the left navigation column. Allow some time for JupyterLab to fully initialize.

JupyterLab interface

The JupyterLab interface consists of a menu bar, a collapsible left sidebar, and the main work area containing tabs of documents and activities.

Menu bar

The menu bar at the top of the interface has top-level menus that expose actions available in JupyterLab with their keyboard shortcuts:

  • File: Actions related to files and directories
  • Edit: Actions related to editing documents and other activities
  • View: Actions that alter the appearance of JupyterLab
  • Run: Actions for running code in different activities such as notebooks and code consoles
  • Kernel: Actions for managing kernels
  • Tabs: A list of open documents and activities
  • Settings: Common settings and an advanced settings editor
  • Help: A list of JupyterLab and kernel help links

Left sidebar

The left sidebar contains clickable tabs that provide access to the following features:

  • File browser: A list of saved notebook documents and directories
  • Data explorer: Browse, access, and explore datasets and schemas
  • Running kernels and terminals: A list of active kernel and terminal sessions with the ability to terminate
  • Commands: A list of useful commands
  • Cell inspector: A cell editor that provides access to tools and metadata useful for setting up a notebook for presentation purposes
  • Tabs: A list of open tabs

Select a tab to expose its features, or select an expanded tab to collapse the left sidebar, as demonstrated below:

Main work area

The main work area in JupyterLab enables you to arrange documents and other activities into panels of tabs that can be resized or subdivided. Drag a tab to the center of a tab panel to migrate the tab. Divide a panel by dragging a tab to the left, right, top, or bottom of the panel:

GPU and memory server configuration in Python/R

In JupyterLab, select the gear icon in the top-right corner to open the Notebook server configuration. You can toggle GPU on and allocate the amount of memory you need using the slider. The amount of memory you can allocate depends on how much your organization has provisioned. Select Update configs to save your changes.

NOTE
Only one GPU is provisioned per organization for Notebooks. If the GPU is in use, you need to wait for the user that has currently reserved the GPU to release it. This can be done by logging out or leaving the GPU in an idle state for four or more hours.
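
Once GPU is toggled on and the configuration has been updated, you can verify from a Python notebook that the kernel can see the device. This is a minimal sketch, assuming PyTorch is installed in your Python environment; any CUDA-aware library can perform an equivalent check.

```python
# Minimal sketch: confirm the notebook kernel can see the provisioned GPU.
# Assumes PyTorch is available in the Python kernel; install it first if needed.
import torch

if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible to this kernel; check the notebook server configuration.")
```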

Terminate & restart JupyterLab

In JupyterLab, you can terminate your session to prevent further resources from being used. Start by selecting the power icon, then select Shut Down from the popover that appears to terminate your session. Notebook sessions auto-terminate after 12 hours of no activity.

To restart JupyterLab, select the restart icon located directly to the left of the power icon, then select Restart from the popover that appears.


Code cells

Code cells are the primary content of notebooks. They contain source code in the language of the notebook’s associated kernel and the output produced by executing the cell. An execution count is displayed to the right of every code cell, representing its order of execution.

Common cell actions are described below:

  • Add a cell: Click the plus symbol (+) from the notebook menu to add an empty cell. New cells are placed under the cell that is currently being interacted with, or at the end of the notebook if no particular cell is in focus.

  • Move a cell: Place your cursor to the right of the cell you wish to move, then click and drag the cell to a new location. Additionally, moving a cell from one notebook to another replicates the cell along with its contents.

  • Execute a cell: Click on the body of the cell you wish to execute and then click the play icon from the notebook menu. An asterisk (*) is displayed in the cell’s execution counter when the kernel is processing the execution, and is replaced with an integer upon completion.

  • Delete a cell: Click on the body of the cell you wish to delete and then click the scissor icon.
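
For reference, a code cell is simply executable source for the notebook’s kernel. The sketch below is a plain Python cell that uses pandas (commonly available in the Python kernel, though availability may vary); while it runs, the execution counter shows an asterisk and is then replaced with an integer.

```python
# A minimal code cell: build a small in-memory DataFrame and summarize it.
import pandas as pd

sample = pd.DataFrame({
    "region": ["US", "EU", "APAC"],
    "visits": [120, 85, 240],
})

# The last expression in a cell is rendered as the cell's output.
sample.describe()
```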

Kernels

Notebook kernels are the language-specific computing engines for processing notebook cells. In addition to Python, JupyterLab provides additional language support in R, PySpark, and Spark (Scala). When you open a notebook document, the associated kernel is launched. When a notebook cell is executed, the kernel performs the computation and produces results which may consume significant CPU and memory resources. Note that allocated memory is not freed until the kernel is shut down.

Certain features and functionalities are limited to particular kernels as described in the table below:

Kernel | Library installation support | Experience Platform integrations
Python | Yes | Sensei ML Framework, Catalog Service, Query Service
R      | Yes | Sensei ML Framework, Catalog Service
Scala  | No  | Sensei ML Framework, Catalog Service
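
For the kernels with library installation support (Python and R above), additional packages can be installed directly from a notebook cell. The example below is a minimal Python sketch; the package name is illustrative, and whether installed packages persist across sessions depends on your environment.

```python
# Minimal sketch: install an additional library from within a Python notebook cell.
# The "!" prefix runs a shell command from the notebook; the package is illustrative.
!pip install --user lifelines

# Restart the kernel if the import below fails immediately after installation.
import lifelines
print(lifelines.__version__)
```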

Kernel sessions

Each active notebook or activity on JupyterLab utilizes a kernel session. All active sessions can be found by expanding the Running terminals and kernels tab from the left sidebar. The type and state of the kernel for a notebook can be identified by observing the top right of the notebook interface. In the diagram below, the notebook’s associated kernel is Python 3 and its current state is represented by the grey circle to its right. A hollow circle implies an idling kernel and a solid circle implies a busy kernel.

If the kernel is shut down or inactive for a prolonged period, then No Kernel! with a solid circle is shown. Activate a kernel by clicking the kernel status and selecting the appropriate kernel type as demonstrated below:

Launcher

The customized Launcher provides useful notebook templates for supported kernels to help you kick-start your task, including:

Template       | Description
Blank          | An empty notebook file.
Starter        | A pre-filled notebook demonstrating data exploration using sample data.
Retail Sales   | A pre-filled notebook featuring the retail sales recipe using sample data.
Recipe Builder | A notebook template for creating a recipe in JupyterLab. It is pre-filled with code and commentary that demonstrates and describes the recipe creation process. Refer to the notebook to recipe tutorial for a detailed walkthrough.
Query Service  | A pre-filled notebook demonstrating the usage of Query Service directly in JupyterLab with provided sample workflows that analyze data at scale.
XDM Events     | A pre-filled notebook demonstrating data exploration on postvalue Experience Event data, focusing on features common across the data structure.
XDM Queries    | A pre-filled notebook demonstrating sample business queries on Experience Event data.
Aggregation    | A pre-filled notebook demonstrating sample workflows to aggregate large amounts of data into smaller, manageable chunks.
Clustering     | A pre-filled notebook demonstrating the end-to-end machine learning modeling process using clustering algorithms.

Some notebook templates are limited to certain kernels. Template availability for each kernel is mapped in the following table:

Kernel                | Blank | Starter | Retail Sales | Recipe Builder | Query Service | XDM Events | XDM Queries | Aggregation | Clustering
Python                | yes   | yes     | yes          | yes            | yes           | yes        | no          | no          | no
R                     | yes   | yes     | yes          | no             | no            | no         | no          | no          | no
PySpark 3 (Spark 2.4) | no    | yes     | no           | no             | no            | no         | yes         | yes         | no
Scala                 | yes   | yes     | no           | no             | no            | no         | no          | no          | yes

To open a new Launcher, click File > New Launcher. Alternatively, expand the File browser from the left sidebar and click the plus symbol (+):