Create a dataflow

The final step is to create a dataflow between the dataset specified in the source connection and the target file path specified in the target connection.

Each available cloud storage type is identified by a flow spec ID:

| Cloud storage type | Flow spec ID |
| --- | --- |
| Amazon S3 | 269ba276-16fc-47db-92b0-c1049a3c131f |
| Azure Blob Storage | 95bd8965-fc8a-4119-b9c3-944c2c2df6d2 |
| Azure Data Lake | 17be2013-2549-41ce-96e7-a70363bec293 |
| Data Landing Zone | cd2fc47e-e838-4f38-a581-8fff2f99b63a |
| Google Cloud Storage | 585c15c4-6cbf-4126-8f87-e26bff78b657 |
| SFTP | 354d6aad-4754-46e4-a576-1b384561c440 |
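
If you script against several destinations, it can be convenient to keep these IDs in a small lookup and select the one for your destination. The snippet below is a minimal sketch; the `CLOUD_STORAGE_FLOW_SPECS` name is illustrative and not part of the Experience Platform SDK.

```python
# Illustrative lookup of the flow spec IDs listed above
CLOUD_STORAGE_FLOW_SPECS = {
    "Amazon S3": "269ba276-16fc-47db-92b0-c1049a3c131f",
    "Azure Blob Storage": "95bd8965-fc8a-4119-b9c3-944c2c2df6d2",
    "Azure Data Lake": "17be2013-2549-41ce-96e7-a70363bec293",
    "Data Landing Zone": "cd2fc47e-e838-4f38-a581-8fff2f99b63a",
    "Google Cloud Storage": "585c15c4-6cbf-4126-8f87-e26bff78b657",
    "SFTP": "354d6aad-4754-46e4-a576-1b384561c440",
}

# This tutorial targets the Data Landing Zone
flow_spec_id = CLOUD_STORAGE_FLOW_SPECS["Data Landing Zone"]
```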

The following code creates a dataflow whose schedule is set to start far in the future. This allows you to trigger ad hoc flow runs during model development. Once you have a trained model, you can update the dataflow's schedule to share the feature dataset on the desired cadence.

```python
import time

on_schedule = False
if on_schedule:
    schedule_params = {
        "interval": 3,
        "timeUnit": "hour",
        "startTime": int(time.time())
    }
else:
    schedule_params = {
        "interval": 1,
        "timeUnit": "day",
        "startTime": int(time.time() + 60*60*24*365) # Start the schedule far in the future
    }

# Flow spec ID for the Data Landing Zone destination (see the table above)
flow_spec_id = "cd2fc47e-e838-4f38-a581-8fff2f99b63a"
flow_obj = {
    "name": "Flow for Feature Dataset to DLZ",
    "flowSpec": {
        "id": flow_spec_id,
        "version": "1.0"
    },
    "sourceConnectionIds": [
        source_connection_id
    ],
    "targetConnectionIds": [
        target_connection_id
    ],
    "transformations": [],
    "scheduleParams": schedule_params
}
flow_res = flow_conn.createFlow(
    obj = flow_obj,
    flow_spec_id = flow_spec_id
)
dataflow_id = flow_res["id"]
```
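
Once you have a trained model and want the export to run on a recurring basis, you can update the dataflow's schedule in place rather than recreating it. The sketch below is one way to do this with aepp, assuming the FlowService instance (`flow_conn`) exposes `getFlow` and `updateFlow` (a JSON Patch against the flow, guarded by its etag); the three-hour interval and the response-shape handling are illustrative.

```python
import time

# Sketch: switch the dataflow from the far-future placeholder to a recurring 3-hour schedule
flow_details = flow_conn.getFlow(dataflow_id)
# Depending on the response shape, the flow item may be nested under "items"
flow_item = flow_details["items"][0] if "items" in flow_details else flow_details
etag = flow_item["etag"]

patch_ops = [
    {"op": "replace", "path": "/scheduleParams/startTime", "value": int(time.time())},
    {"op": "replace", "path": "/scheduleParams/interval", "value": 3},
    {"op": "replace", "path": "/scheduleParams/timeUnit", "value": "hour"},
]

updated_flow = flow_conn.updateFlow(
    flowId=dataflow_id,
    etag=etag,
    updateObj=patch_ops
)
```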

With the dataflow created, you can now trigger an ad hoc flow run to share the feature dataset on demand:

```python
import aepp
from aepp import connector

connector = connector.AdobeRequest(
    config_object=aepp.config.config_object,
    header=aepp.config.header,
    loggingEnabled=False,
    logger=None,
)

endpoint = aepp.config.endpoints["global"] + "/data/core/activation/disflowprovider/adhocrun"

payload = {
    "activationInfo": {
        "destinations": [
            {
                "flowId": dataflow_id,
                "datasets": [
                    {"id": created_dataset_id}
                ]
            }
        ]
    }
}

# The ad hoc activation endpoint requires this Accept header
connector.header.update({"Accept":"application/vnd.adobe.adhoc.dataset.activation+json; version=1"})
activation_res = connector.postData(endpoint=endpoint, data=payload)
activation_res
```
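
The response above only confirms that the ad hoc run request was accepted. To verify that the export actually completed, you can poll the runs for this dataflow. The sketch below assumes `flow_conn.getRuns` accepts a `prop` filter; the polling interval and status-field access are illustrative.

```python
import time

# Illustrative polling loop: list the runs for this dataflow and report the latest status
for _ in range(20):
    runs = flow_conn.getRuns(prop=f"flowId=={dataflow_id}")
    items = runs.get("items", []) if isinstance(runs, dict) else runs
    if items:
        latest = items[0]
        status = latest.get("metrics", {}).get("statusSummary", {}).get("status")
        print(f"Latest run {latest.get('id')}: {status}")
        if status in ("success", "failed"):
            break
    time.sleep(30)
```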