Ready-to-go sample data pipelines with Dataflow | by Netflix Technology Blog | Dec, 2022


by Jasmine Omeke, Obi-Ike Nwoke, Olek Gorajek

This post is for all data practitioners who are interested in learning about the bootstrapping, standardization and automation of batch data pipelines at Netflix.

You may remember Dataflow from the post we wrote last year titled Data pipeline asset management with Dataflow. That article was a deep dive into one of the more technical aspects of Dataflow and didn't properly introduce this tool in the first place. This time we'll try to give justice to the intro and then we will focus on one of the very first features Dataflow came with. That feature is called sample workflows, but before we start, let's have a quick look at Dataflow in general.

Dataflow

Dataflow is a command line utility built to improve experience and to streamline data pipeline development at Netflix. Check out this high-level Dataflow help command output below:

$ dataflow --help
Usage: dataflow [OPTIONS] COMMAND [ARGS]...

Options:
  --docker-image TEXT  Url of the docker image to run in.
  --run-in-docker      Run dataflow in a docker container.
  -v, --verbose        Enables verbose mode.
  --version            Show the version and exit.
  --help               Show this message and exit.

Commands:
  migration  Manage schema migration.
  mock       Generate or validate mock datasets.
  project    Manage a Dataflow project.
  sample     Generate fully functional sample workflows.

As you can see, the Dataflow CLI is divided into four main subject areas (or commands). The most commonly used one is dataflow project, which helps folks in managing their data pipeline repositories through creation, testing, deployment and a few other activities.

The dataflow migration command is a special feature, developed single handedly by Stephen Huenneke, to fully automate the communication and tracking of a data warehouse table changes. Thanks to the Netflix internal lineage system (built by Girish Lingappa) Dataflow migration can then help you identify downstream usage of the table in question. And finally it can help you craft a message to all the owners of these dependencies. After your migration has started, Dataflow will also keep track of its progress and help you communicate with the downstream users.

The dataflow mock command is another standalone feature. It lets you create YAML formatted mock data files based on selected tables, columns and a few rows of data from the Netflix data warehouse. Its main purpose is to enable easy unit testing of your data pipelines, but it can technically be used in any other situation as a readable data format for small data sets.
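
For a rough idea of how such a mock file could be consumed, below is a minimal sketch that loads a hypothetical mock of some_db.source_table into a local Spark DataFrame for testing. The file layout with "columns" and "rows" keys is an assumption made for illustration only, not Dataflow's actual mock format.

import yaml  # PyYAML
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").getOrCreate()

# Hypothetical mock file living next to the sample workflow sources.
with open("src/mocks/some_db.source_table.yaml") as f:
    mock = yaml.safe_load(f)  # assumed shape: {"columns": [...], "rows": [[...], ...]}

# Register the mocked source table so transformation code can query it locally.
mock_df = spark.createDataFrame(mock["rows"], schema=mock["columns"])
mock_df.createOrReplaceTempView("source_table")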

All the above commands are very likely to be described in separate future blog posts, but right now let's focus on the dataflow sample command.

Dataflow sample workflows is a set of templates anyone can use to bootstrap their data pipeline project. And by "sample" we mean "an example", like food samples in your local grocery store. One of the main reasons this feature exists is, just like with food samples, to give you "a taste" of the production quality ETL code that you could encounter inside the Netflix data ecosystem.

All the code you get with the Dataflow sample workflows is fully functional, adjusted to your environment and isolated from other sample workflows that others generated. This pipeline is safe to run the moment it shows up in your directory. It will not only build a nice example aggregate table and fill it up with real data, but it will also present you with a complete set of recommended components:

  • clean DDL code,
  • proper table metadata settings,
  • transformation job (in a language of choice) wrapped in an optional WAP (Write, Audit, Publish) pattern,
  • sample set of data audits for the generated data,
  • and a fully functional unit test for your transformation logic.

And last, but not least, these sample workflows are being tested continuously as part of the Dataflow code change protocol, so you can be sure that what you get is working. This is one way to build trust with our internal user base.

Next, let’s take a look on the precise enterprise logic of those pattern workflows.

Business Logic

There are several variants of the sample workflow you can get from Dataflow, but all of them share the same business logic. This was a conscious decision in order to clearly illustrate the difference between the various languages in which your ETL could be written. Obviously not all tools are made with the same use case in mind, so we are planning to add more code samples for other (than classical batch ETL) data processing purposes, e.g. Machine Learning model building and scoring.

The example business logic we use in our template computes the top hundred movies/shows in every country where Netflix operates on a daily basis. This is not an actual production pipeline running at Netflix, because it is highly simplified code, but it serves well the purpose of illustrating a batch ETL job with various transformation stages. Let's review the transformation steps below.

Step 1: on a daily basis, incrementally, sum up all viewing time of all movies and shows in every country

WITH STEP_1 AS (
  SELECT
    title_id
    , country_code
    , SUM(view_hours) AS view_hours
  FROM some_db.source_table
  WHERE playback_date = CURRENT_DATE
  GROUP BY
    title_id
    , country_code
)

Step 2: rank all titles from most watched to least in every country

WITH STEP_2 AS (
  SELECT
    title_id
    , country_code
    , view_hours
    , RANK() OVER (
        PARTITION BY country_code
        ORDER BY view_hours DESC
      ) AS title_rank
  FROM STEP_1
)

Step 3: filter all titles to the top 100

WITH STEP_3 AS (
  SELECT
    title_id
    , country_code
    , view_hours
    , title_rank
  FROM STEP_2
  WHERE title_rank <= 100
)

Now, using the above simple 3-step transformation we will produce data that can be written to the following Iceberg table:

CREATE TABLE IF NOT EXISTS ${TARGET_DB}.dataflow_sample_results (
  title_id INT COMMENT "Title ID of the movie or show."
  , country_code STRING COMMENT "Country code of the playback session."
  , title_rank INT COMMENT "Rank of a given title in a given country."
  , view_hours DOUBLE COMMENT "Total viewing hours of a given title in a given country."
)
COMMENT
  "Example dataset brought to you by Dataflow. For more information on this
   and other examples please visit the Dataflow documentation page."
PARTITIONED BY (
  date DATE COMMENT "Playback date."
)
STORED AS ICEBERG;

As you can infer from the above table structure, we are going to load about 19,000 rows into this table daily. And they will look something like this:

sql> SELECT * FROM foo.dataflow_sample_results
     WHERE date = 20220101 and country_code = 'US'
     ORDER BY title_rank LIMIT 5;

 title_id | country_code | title_rank | view_hours |   date
----------+--------------+------------+------------+----------
 11111111 | US           |          1 |        123 | 20220101
 44444444 | US           |          2 |        111 | 20220101
 33333333 | US           |          3 |         98 | 20220101
 55555555 | US           |          4 |         55 | 20220101
 22222222 | US           |          5 |         11 | 20220101
(5 rows)

With the business logic out of the way, we can now start talking about the components, or the boilerplate, of our sample workflows.

Components

Let’s take a look at the most typical workflow parts that we use at Netflix. These parts might not match into each ETL use case, however are used usually sufficient to be included in each template (or pattern workflow). The workflow creator, in any case, has the ultimate phrase on whether or not they wish to use all of those patterns or hold just some. Either means they’re right here to begin with, able to go, if wanted.

Workflow Definitions

Below you can see a typical file structure of a sample workflow package written in SparkSQL.

.
├── backfill.sch.yaml
├── daily.sch.yaml
├── main.sch.yaml
├── ddl
│   └── dataflow_sparksql_sample.sql
└── src
    ├── mocks
    │   ├── dataflow_pyspark_sample.yaml
    │   └── some_db.source_table.yaml
    ├── sparksql_write.sql
    └── test_sparksql_write.py

The three workflow definition files above (main, daily and backfill) define a series of steps (a.k.a. jobs), their cadence, dependencies, and the sequence in which they should be executed.

This is one way we can tie components together into a cohesive workflow. In every sample workflow package there are three workflow definition files that work together to provide flexible functionality. The sample workflow code assumes a daily execution pattern, but it is very easy to adjust it to run at a different cadence. For the workflow orchestration we use Netflix's homegrown Maestro scheduler.

The main workflow definition file holds the logic of a single run, in this case one day-worth of data. This logic consists of the following parts: DDL code, table metadata information, data transformation and a few audit steps. It's designed to run for a single date, and meant to be called from the daily or backfill workflows. This main workflow can also be called manually during development with arbitrary run-time parameters to get a feel for the workflow in action.

The daily workflow executes the main one on a daily basis for the predefined number of previous days. This is sometimes necessary for the purpose of catching up on some late arriving data. This is where we define a trigger schedule, notification schemes, and update the "high water mark" timestamps on our target table.

The backfill workflow executes the main one for a specified range of days. This is useful for restating data, most often because of a transformation logic change, but sometimes as a response to upstream data updates.
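
Conceptually, the daily and backfill definitions are just two different ways of fanning out runs of the main workflow over dates. The sketch below illustrates that idea in plain Python with a placeholder run_main_workflow function; it is not Maestro syntax and the helper names are made up for illustration.

from datetime import date, timedelta
from typing import Optional


def run_main_workflow(run_date: date) -> None:
    """Placeholder for triggering one run of the main workflow for run_date."""
    print(f"running main workflow for {run_date.isoformat()}")


def daily(lookback_days: int = 3, today: Optional[date] = None) -> None:
    # Re-run the last few dates as well, to catch up on late-arriving data.
    today = today or date.today()
    for offset in range(lookback_days, -1, -1):
        run_main_workflow(today - timedelta(days=offset))


def backfill(start: date, end: date) -> None:
    # Restate every date in the requested range, e.g. after a logic change.
    day = start
    while day <= end:
        run_main_workflow(day)
        day += timedelta(days=1)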

DDL

Often, the first step in a data pipeline is to define the target table structure and column metadata via a DDL statement. We understand that some folks choose to have their output schema be an implicit result of the transform code itself, but the explicit statement of the output schema is not only useful for adding table (and column) level comments, but also serves as one way to validate the transform logic.

.
├── backfill.sch.yaml
├── daily.sch.yaml
├── main.sch.yaml
├── ddl
│   └── dataflow_sparksql_sample.sql
└── src
    ├── mocks
    │   ├── dataflow_pyspark_sample.yaml
    │   └── some_db.source_table.yaml
    ├── sparksql_write.sql
    └── test_sparksql_write.py

Generally, we prefer to execute DDL commands as part of the workflow itself, instead of running them outside of the schedule, because it simplifies the development process. See below an example of hooking the table creation SQL file into the main workflow definition.

- job:
    id: ddl
    type: Spark
    spark:
      script: $S3{./ddl/dataflow_sparksql_sample.sql}
      parameters:
        TARGET_DB: ${TARGET_DB}

Metadata

The metadata step provides context on the output table itself as well as the data contained within it. Attributes are set via Metacat, which is a Netflix internal metadata management platform. Below is an example of plugging that metadata step into the main workflow definition.

- job:
    id: metadata
    type: Metadata
    metacat:
      tables:
        - ${CATALOG}/${TARGET_DB}/${TARGET_TABLE}
      owner: ${username}
      tags:
        - dataflow
        - sample
      lifetime: 123
      column_types:
        date: pk
        country_code: pk
        rank: pk

Transformation

The transformation step (or steps) can be executed in the developer's language of choice. The example below is using SparkSQL.

.
├── backfill.sch.yaml
├── daily.sch.yaml
├── main.sch.yaml
├── ddl
│   └── dataflow_sparksql_sample.sql
└── src
    ├── mocks
    │   ├── dataflow_pyspark_sample.yaml
    │   └── some_db.source_table.yaml
    ├── sparksql_write.sql
    └── test_sparksql_write.py

Optionally, this step can use the Write-Audit-Publish pattern to ensure that data is correct before it is made available to the rest of the company. See example below:

- template:
    id: wap
    type: wap
    tables:
      - ${CATALOG}/${DATABASE}/${TABLE}
    write_jobs:
      - job:
          id: write
          type: Spark
          spark:
            script: $S3{./src/sparksql_write.sql}
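
For readers unfamiliar with the pattern, the rough shape of Write-Audit-Publish is sketched below in PySpark. The staging table name and the audit check are placeholders for illustration, and in the sample workflows this orchestration is handled by the wap template above rather than by user code.

from pyspark.sql import DataFrame, SparkSession

spark = SparkSession.builder.getOrCreate()


def write_audit_publish(df: DataFrame, target_table: str) -> None:
    staging_table = f"{target_table}_staging"  # hypothetical staging location

    # WRITE: land the data where consumers cannot see it yet.
    df.writeTo(staging_table).createOrReplace()

    # AUDIT: run data quality checks against the staged data.
    staged = spark.table(staging_table)
    if staged.count() == 0:
        raise ValueError("audit failed: no rows staged, refusing to publish")

    # PUBLISH: only now expose the data to the rest of the company.
    staged.writeTo(target_table).overwritePartitions()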

Audits

Audit steps can be defined to verify data quality. If a "blocking" audit fails, the job will halt and the write step is not committed, so invalid data will not be exposed to users. This step is optional and configurable, see a partial example of an audit from the main workflow below.

data_auditor:
  audits:
    - function: columns_should_not_have_nulls
      blocking: true
      params:
        table: ${TARGET_TABLE}
        columns:
          - title_id
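
As a rough illustration of what a check like columns_should_not_have_nulls boils down to, here is a small PySpark sketch. The real data_auditor is a Netflix-internal component, so the function below is only an approximation of the idea.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()


def columns_should_not_have_nulls(table: str, columns: list, blocking: bool = True) -> None:
    for column in columns:
        null_rows = spark.table(table).filter(f"{column} IS NULL").count()
        if null_rows > 0 and blocking:
            # A blocking failure stops the workflow before the write is committed.
            raise AssertionError(f"{table}.{column} has {null_rows} NULL values")


columns_should_not_have_nulls("foo.dataflow_sample_results", ["title_id"])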

High-Water-Mark Timestamp

A successful write will typically be followed by a metadata call to set the valid time (or high-water mark) of a dataset. This allows other processes, consuming our table, to be notified and start their processing. See an example high water mark job from the main workflow definition.

- job:
    id: hwm
    type: HWM
    metacat:
      table: ${CATALOG}/${TARGET_DB}/${TARGET_TABLE}
      hwm_datetime: ${EXECUTION_DATE}
      hwm_timezone: ${EXECUTION_TIMEZONE}
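
From the consumer side, a downstream process typically compares the table's high-water mark against the date it needs before kicking off. The snippet below sketches that handshake with a hypothetical metadata client; the actual attribute lives in Metacat and the client API shown here is made up for illustration.

from datetime import datetime


def ready_to_consume(metadata_client, table: str, required: datetime) -> bool:
    """Return True once the table's high-water mark has reached the date we need."""
    hwm = metadata_client.get_high_water_mark(table)  # hypothetical client API
    return hwm is not None and hwm >= required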

Unit Tests

Unit test artifacts are also generated as part of the sample workflow structure. They consist of data mocks, the actual test code, and a simple execution harness depending on the workflow language. See the test file (test_sparksql_write.py) and its mocks in the structure below.

.
├── backfill.sch.yaml
├── daily.sch.yaml
├── main.sch.yaml
├── ddl
│   └── dataflow_sparksql_sample.sql
└── src
    ├── mocks
    │   ├── dataflow_pyspark_sample.yaml
    │   └── some_db.source_table.yaml
    ├── sparksql_write.sql
    └── test_sparksql_write.py

These unit tests are intended to test one "unit" of data transform in isolation. They can be run during development to quickly catch code typos and syntax issues, or during the automated testing/deployment phase, to make sure that code changes haven't broken any tests.

We want unit tests to run quickly so that we can have continuous feedback and fast iterations during the development cycle. Running code against a production database can be slow, especially with the overhead required for distributed data processing systems like Apache Spark. Mocks allow you to run tests locally against a small sample of "real" data to validate your transformation code functionality.
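
To make the idea concrete, here is a minimal sketch of a local test for the ranking logic, in the spirit of the generated test_sparksql_write.py. The fixture data and assertions below are made up for illustration and are not the generated test itself.

import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_title_rank_orders_by_view_hours(spark):
    # Tiny in-memory stand-in for some_db.source_table: one country, three titles.
    source = spark.createDataFrame(
        [(1, "US", 30.0), (2, "US", 20.0), (3, "US", 10.0)],
        ["title_id", "country_code", "view_hours"],
    )
    source.createOrReplaceTempView("source_table")

    ranked = spark.sql(
        """
        SELECT title_id,
               RANK() OVER (PARTITION BY country_code
                            ORDER BY view_hours DESC) AS title_rank
        FROM source_table
        """
    )

    # The most watched title should be ranked first.
    ranks = {row.title_id: row.title_rank for row in ranked.collect()}
    assert ranks == {1: 1, 2: 2, 3: 3}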

Languages

Over time, the extraction of data from Netflix's source systems has grown to encompass a wider range of end-users, such as engineers, data scientists, analysts, marketers, and other stakeholders. Focusing on convenience, Dataflow allows these differing personas to go about their work seamlessly. Many of our data users employ SparkSQL, pyspark, and Scala. A small but growing contingent of data scientists and analytics engineers use R, backed by the sparklyr interface, or other data processing tools like Metaflow.

With an understanding that the data landscape and the technologies employed by end-users are not homogenous, Dataflow creates a malleable path forward: it solidifies different recipes, or repeatable templates, for data extraction. Within this section, we'll preview a few methods, starting with the SparkSQL and Python ways of creating data pipelines with Dataflow. Then we'll segue into the Scala and R use cases.

To begin, after installing Dataflow, a user can run the following command to learn how to get started.

$ dataflow sample workflow --help
Dataflow (0.6.16)

Usage: dataflow sample workflow [OPTIONS] RECIPE [TARGET_PATH]

  Create a sample workflow based on selected RECIPE and land it in the
  specified TARGET_PATH.

  Currently supported workflow RECIPEs are: spark-sql, pyspark,
  scala and sparklyr.

  If TARGET_PATH:
  - if not specified, current directory is assumed
  - points to a directory, it will be used as the target location

Options:
  --source-path TEXT         Source path of the sample workflows.
  --workflow-shortname TEXT  Workflow short name.
  --workflow-id TEXT         Workflow ID.
  --skip-info                Skip the info about the workflow sample.
  --help                     Show this message and exit.

Once again, let's assume we have a directory called stranger-data in which the user creates workflow templates in all four languages that Dataflow offers. To better illustrate how to generate the sample workflows using Dataflow, let's look at the full command one would use to create one of these workflows, e.g.:

$ cd stranger-data
$ dataflow sample workflow spark-sql ./sparksql-workflow

By repeating the above command for each type of transformation language we can arrive at the following directory structure:

.
├── pyspark-workflow
│   ├── main.sch.yaml
│   ├── daily.sch.yaml
│   ├── backfill.sch.yaml
│   ├── ddl
│   │   └── ...
│   ├── src
│   │   └── ...
│   └── tox.ini
├── scala-workflow
│   ├── build.gradle
│   └── ...
├── sparklyR-workflow
│   └── ...
└── sparksql-workflow
    └── ...

Earlier we talked about the business logic of these sample workflows and we showed the Spark SQL version of that example data transformation. Now let's discuss different approaches to writing the data in the other languages.

PySpark

The partial pySpark code below has the same functionality as the SparkSQL example above, but it uses the Spark dataframes Python interface.

from pyspark.sql import Window
from pyspark.sql import functions as F
from pyspark.sql.functions import col, lit, rank


def main(args, spark):
    # some_db, source_table, date and target_table come from the (elided) run-time parameters.
    source_table_df = spark.table(f"{some_db}.{source_table}")

    viewing_by_title_country = (
        source_table_df.select("title_id", "country_code", "view_hours")
        .filter(col("date") == date)
        .filter("title_id IS NOT NULL AND view_hours > 0")
        .groupBy("title_id", "country_code")
        .agg(F.sum("view_hours").alias("view_hours"))
    )

    window = Window.partitionBy(
        "country_code"
    ).orderBy(col("view_hours").desc())

    ranked_viewing_by_title_country = viewing_by_title_country.withColumn(
        "title_rank", rank().over(window)
    )

    ranked_viewing_by_title_country.filter(
        col("title_rank") <= 100
    ).withColumn(
        "date", lit(int(date))
    ).select(
        "title_id",
        "country_code",
        "title_rank",
        "view_hours",
        "date",
    ).repartition(1).write.byName().insertInto(
        target_table, overwrite=True
    )

Scala

Scala is another Dataflow supported recipe that offers the same business logic in a sample workflow out of the box.

package com.netflix.spark

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.{functions => F}

object ExampleApp {
  // `spark` (the SparkSession) and `targetTable` are provided by the (elided) workflow harness.
  import spark.implicits._

  def readSourceTable(sourceDb: String, dataDate: String): DataFrame =
    spark
      .table(s"$sourceDb.source_table")
      .filter($"playback_start_date" === dataDate)

  def viewingByTitleCountry(sourceTableDF: DataFrame): DataFrame = {
    sourceTableDF
      .select($"title_id", $"country_code", $"view_hours")
      .filter($"title_id".isNotNull)
      .filter($"view_hours" > 0)
      .groupBy($"title_id", $"country_code")
      .agg(F.sum($"view_hours").as("view_hours"))
  }

  def addTitleRank(viewingDF: DataFrame): DataFrame = {
    viewingDF.withColumn(
      "title_rank", F.rank().over(
        Window.partitionBy($"country_code").orderBy($"view_hours".desc)
      )
    )
  }

  def writeViewing(viewingDF: DataFrame, targetTable: String, dataDate: String): Unit = {
    viewingDF
      .select($"title_id", $"country_code", $"title_rank", $"view_hours")
      .filter($"title_rank" <= 100)
      .repartition(1)
      .withColumn("date", F.lit(dataDate.toInt))
      .writeTo(targetTable)
      .overwritePartitions()
  }

  def main(): Unit = {
    val sourceTableDF = readSourceTable("some_db", "20200101")
    val viewingDF = viewingByTitleCountry(sourceTableDF)
    val titleRankedDF = addTitleRank(viewingDF)
    writeViewing(titleRankedDF, targetTable, "20200101")
  }
}

R / sparklyR

As Netflix has a growing cohort of R users, R is the latest recipe available in Dataflow.

suppressPackageStartupMessages({
  library(sparklyr)
  library(dplyr)
})

...

main <- function(args, spark) {
  title_df <- tbl(spark, g("{some_db}.{source_table}"))

  title_activity_by_country <- title_df |>
    filter(title_date == date) |>
    filter(!is.null(title_id) & event_count > 0) |>
    select(title_id, country_code, event_type) |>
    group_by(title_id, country_code) |>
    summarize(event_count = sum(event_type, na.rm = TRUE))

  ranked_title_activity_by_country <- title_activity_by_country |>
    group_by(country_code) |>
    mutate(title_rank = rank(desc(event_count)))

  top_25_title_by_country <- ranked_title_activity_by_country |>
    ungroup() |>
    filter(title_rank <= 25) |>
    mutate(date = as.integer(date)) |>
    select(
      title_id,
      country_code,
      title_rank,
      event_count,
      date
    )

  top_25_title_by_country |>
    sdf_repartition(partitions = 1) |>
    spark_insert_table(target_table, mode = "overwrite")
}
main(args = args, spark = spark)
}
