Datapoint Pipeline Configuration

1. Find the ‘New datapoint pipeline’ button on the Pipeline Page

2. Configure the datapoint pipeline

| Parameter name and description | Parameter value |
| --- | --- |
| 1. Name | Arbitrary string |
| 2. Source | List of configured Source connectors |
| 3. Destination (optional): destination to sink out erroneous data. | List of configured Destination connectors |
| 4. Notification rule (optional). Note: without a notification rule, alerts will be visible in the platform UI but will not be sent to a notification channel, e.g. Slack. | List of configured Notification rules |
| 5. Cron expression: schedules batching based on the cron expression input. | Five-field cron expression |
| 6. Data time feature (optional). Empty: the order of the records/datapoints is determined by the time the data is ingested into Validio. Filled in: record timestamps are used instead; this is required when backfilling (i.e. ingesting historical data) so that Validio can order the records correctly. | List of features with date formats in the Source |
| 7. Evaluation delay: grace period for the cron trigger to await late data. | Positive integer |
| 8. Unit: unit of the evaluation delay parameter. | Second, Minute, Hour, Day, or Week |
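To illustrate how the evaluation delay and unit parameters interact, the sketch below (plain Python, not Validio's API; the function name is made up for illustration) computes when a batch closed by the cron trigger would actually be evaluated:

```python
from datetime import datetime, timedelta

# One time unit per allowed value of the Unit parameter.
UNIT_TO_TIMEDELTA = {
    "Second": timedelta(seconds=1),
    "Minute": timedelta(minutes=1),
    "Hour": timedelta(hours=1),
    "Day": timedelta(days=1),
    "Week": timedelta(weeks=1),
}

def evaluation_time(trigger: datetime, delay: int, unit: str) -> datetime:
    """Hypothetical helper: trigger time plus the grace period for late data."""
    if delay < 1:
        raise ValueError("evaluation delay must be a positive integer")
    return trigger + delay * UNIT_TO_TIMEDELTA[unit]

# A batch triggered at 10:00 with evaluation delay 15 (unit: Minute)
# is evaluated at 10:15, giving late-arriving records a grace period.
print(evaluation_time(datetime(2023, 5, 1, 10, 0), 15, "Minute"))
# 2023-05-01 10:15:00
```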

The cron trigger is used to batch data in the datapoint pipeline and determines, for example, how often a datapoint metric is calculated.

The cron trigger in datapoint pipelines shares the same parameters (cron expression, evaluation delay, and unit) as the cron trigger in dataset pipelines. To learn more about the parameters, read here
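For reference, a five-field cron expression consists of minute, hour, day of month, month, and day of week. The sketch below (plain Python, independent of Validio; the helper name is made up) labels each field of an expression:

```python
# The five fields of a standard cron expression, in order.
CRON_FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def describe_cron(expr: str) -> dict:
    """Hypothetical helper: map each field of a five-field cron expression to its name."""
    values = expr.split()
    if len(values) != 5:
        raise ValueError("expected a five-field cron expression")
    return dict(zip(CRON_FIELDS, values))

# "0 * * * *" triggers at minute 0 of every hour, i.e. hourly batching.
print(describe_cron("0 * * * *"))
# {'minute': '0', 'hour': '*', 'day of month': '*', 'month': '*', 'day of week': '*'}
```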

3. Choose which feature(s) to partition on, if any


Simply check the box for each feature you want to partition on
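Conceptually, partitioning means metrics are computed per distinct value of the chosen feature rather than over the whole stream. The sketch below is a minimal plain-Python illustration (the feature name, records, and metric are made up; this is not Validio's implementation):

```python
from collections import defaultdict

# Hypothetical records with a "country" feature chosen as the partition key.
records = [
    {"country": "SE", "amount": 10},
    {"country": "SE", "amount": 30},
    {"country": "US", "amount": 50},
]

# Group records into one sub-stream per distinct partition value.
partitions = defaultdict(list)
for record in records:
    partitions[record["country"]].append(record["amount"])

# A metric (here, the mean) is then calculated per partition
# instead of once across all records.
means = {key: sum(values) / len(values) for key, values in partitions.items()}
print(means)
# {'SE': 20.0, 'US': 50.0}
```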


New to partitioning? Learn more about Partitioning Pipelines and why it is one of our most used features