Migrating Netflix to GraphQL Safely

By Jennifer Shin, Tejas Shikhare, Will Emmanuel

In 2022, we made a significant change to Netflix's iOS and Android applications. We migrated Netflix's mobile apps to GraphQL with zero downtime, which involved a total overhaul from the client to the API layer.

Until recently, an internal API framework, Falcor, powered our mobile apps. They are now backed by Federated GraphQL, a distributed approach to APIs where domain teams can independently manage and own specific sections of the API.

Doing this safely for hundreds of millions of customers without disruption is exceptionally challenging, especially considering the many dimensions of change involved. This blog post will share broadly applicable techniques (beyond GraphQL) we used to perform this migration. The three strategies we will discuss today are AB Testing, Replay Testing, and Sticky Canaries.

Before diving into these techniques, let's briefly examine the migration plan.

Before GraphQL: Monolithic Falcor API implemented and maintained by the API Team

Before moving to GraphQL, our API layer consisted of a monolithic server built with Falcor. A single API team maintained both the Java implementation of the Falcor framework and the API Server.

Phase 1: Created a GraphQL Shim Service on top of our existing Monolith Falcor API.

By summer 2020, many UI engineers were ready to move to GraphQL. Instead of embarking on a full-fledged migration top to bottom, we created a GraphQL shim on top of our existing Falcor API. The GraphQL shim enabled client engineers to move quickly onto GraphQL, figure out client-side concerns like cache normalization, experiment with different GraphQL clients, and investigate client performance without being blocked by server-side migrations. To launch Phase 1 safely, we used AB Testing.

Phase 2: Deprecate the GraphQL Shim Service and Legacy API Monolith in favor of GraphQL services owned by the domain teams.

We didn't want the legacy Falcor API to linger forever, so we leaned into Federated GraphQL to power a single GraphQL API with multiple GraphQL servers.

We could also swap out the implementation of a field from the GraphQL Shim to the Video API with federation directives. To launch Phase 2 safely, we used Replay Testing and Sticky Canaries.

Two key factors determined our testing strategies:

  • Functional vs. non-functional requirements
  • Idempotency

If we were testing functional requirements like data accuracy, and if the request was idempotent, we relied on Replay Testing. We knew we could test the same query with the same inputs and consistently expect the same results.

We couldn't replay test GraphQL queries or mutations that requested non-idempotent fields.

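To make the distinction concrete, here is a minimal sketch; the operation and field names are hypothetical rather than Netflix's actual schema:

  # Idempotent read: the same inputs return the same data, so it is safe to replay.
  query GetCertificationRating($videoId: ID!) {
    video(id: $videoId) {
      certificationRating
    }
  }

  # Non-idempotent write: replaying it would change state again, so it cannot be replay tested.
  mutation AddToMyList($videoId: ID!) {
    addToMyList(videoId: $videoId) {
      success
    }
  }
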
And we definitely couldn't replay test non-functional requirements like caching and logging user interaction. In such cases, we weren't testing for response data but for overall behavior. So we relied on higher-level, metrics-based testing: AB Testing and Sticky Canaries.

Let's discuss the three testing strategies in more detail.

Netflix traditionally uses AB Testing to evaluate whether new product features resonate with customers. In Phase 1, we leveraged the AB testing framework to isolate a user segment into two groups totaling 1 million users. The control group's traffic used the legacy Falcor stack, while the experiment population leveraged the new GraphQL client and was directed to the GraphQL Shim. To determine customer impact, we could compare various metrics such as error rates, latencies, and time to render.

We set up a client-side AB experiment that tested Falcor versus GraphQL and reported coarse-grained quality of experience (QoE) metrics. The AB experiment results hinted that GraphQL's correctness was not up to par with the legacy system. We spent the next few months diving into these high-level metrics and fixing issues such as cache TTLs, flawed client assumptions, etc.

Wins

High-Level Health Metrics: AB Testing provided the assurance we needed in our overall client-side GraphQL implementation. This helped us successfully migrate 100% of the traffic on the mobile homepage canvas to GraphQL in 6 months.

Gotchas

Error Diagnosis: With an AB test, we could see coarse-grained metrics that pointed to potential issues, but it was challenging to diagnose the exact problems.

The next phase in the migration was to reimplement our existing Falcor API in a GraphQL-first server (Video API Service). The Falcor API had become a logic-heavy monolith with over a decade of tech debt, so we had to ensure that the reimplemented Video API server was bug-free and identical to the already productized Shim service.

We developed a Replay Testing tool to verify that idempotent APIs were migrated correctly from the GraphQL Shim to the Video API service.

The Replay Testing framework leverages the @override directive available in GraphQL Federation. This directive tells the GraphQL Gateway to route to one GraphQL server over another. Take, for instance, the two GraphQL schemas defined by the Shim Service and the Video Service.

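In sketch form, the two definitions could look something like this; the type name, key field, and subgraph name are illustrative assumptions, with only the certificationRating field and the @override directive taken from the migration described here:

  # Shim Service subgraph (illustrative)
  type Video @key(fields: "videoId") {
    videoId: ID!
    certificationRating: String
  }

  # Video Service subgraph (illustrative): @override moves resolution of this field
  # away from the subgraph named in "from" (the subgraph name here is assumed).
  type Video @key(fields: "videoId") {
    videoId: ID!
    certificationRating: String @override(from: "shim")
  }
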
The GraphQL Shim first defined the certificationRating field (things like Rated R or PG-13) in Phase 1. In Phase 2, we stood up the Video Service and defined the same certificationRating field marked with the @override directive. The presence of the identical field with the @override directive informed the GraphQL Gateway to route the resolution of this field to the new Video Service rather than the old Shim Service.

The Replay Tester tool samples raw traffic streams from Mantis. With these sampled events, the tool can capture a live request from production and run an identical GraphQL query against both the GraphQL Shim and the new Video API service. The tool then compares the results and outputs any differences in the response payloads.

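For instance, a replayed request might be a query along these lines; the field names are assumptions inferred from the diff paths shown further below:

  # Illustrative replayed query, issued identically against the Shim and the Video API service
  query ReplayedRequest {
    videos {
      tags {
        id
        displayName
      }
    }
  }
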
Note: We do not replay test Personally Identifiable Information. Replay Testing is used only for non-sensitive product features on the Netflix UI.

Once the test is completed, the engineer can view the diffs displayed as a flattened JSON node. You can see the control value on the left side of the comma in parentheses and the experiment value on the right.

/data/videos/0/tags/3/id: (81496962, null)
/data/videos/0/tags/5/displayName: (Série, value: "S\303\251rie")

We captured two diffs above: the first had missing data for an ID field in the experiment, and the second had an encoding difference. We also saw differences in localization, date precision, and floating point accuracy. This gave us confidence in the replicated business logic, where subscriber plans and user geographic location determined the customer's catalog availability.

Wins

  • Confidence in parity between the two GraphQL implementations
  • Enabled tuning configs in cases where data was missing due to over-eager timeouts
  • Tested business logic that required many (unknown) inputs and where correctness can be hard to eyeball

Gotchas

  • PII and non-idempotent APIs should not be tested using Replay Tests, and it would be valuable to have a mechanism to prevent that.
  • Manually constructed queries are only as good as the features the developer remembers to test. We ended up with untested fields simply because we forgot about them.
  • Correctness: the notion of correctness can be confusing too. For example, is it more correct for an array to be empty or null, or is it just noise? Ultimately, we matched the existing behavior as much as possible because verifying the robustness of the client's error handling was difficult.

Despite these shortcomings, Replay Testing was a key indicator that we had achieved functional correctness for most idempotent queries.

While Replay Testing validates the functional correctness of the new GraphQL APIs, it does not provide any performance or business metric insight, such as the overall perceived health of user interaction. Are users clicking play at the same rates? Are titles loading in time before the user loses interest? Replay Testing also cannot be used to validate non-idempotent APIs. We reached for a Netflix tool called the Sticky Canary to build confidence.

A Sticky Canary is an infrastructure experiment where customers are assigned either to a canary or a baseline host for the entire duration of the experiment. All incoming traffic is allocated to an experimental or baseline host based on device and profile, similar to a bucket hash. The experimental host deployment serves all the customers assigned to the experiment. Watch our Chaos Engineering talk from AWS re:Invent to learn more about Sticky Canaries.

In the case of our GraphQL APIs, we used a Sticky Canary experiment to run two instances of our GraphQL gateway. The baseline gateway used the existing schema, which routes all traffic to the GraphQL Shim. The experimental gateway used the new proposed schema, which routes traffic to the latest Video API service. Zuul, our primary edge gateway, assigns traffic to either cluster based on the experiment parameters.

We then collect and analyze the performance of the two clusters. Some KPIs we monitor closely include:

  • Median and tail latencies
  • Error rates
  • Logs
  • Resource utilization: CPU, network traffic, memory, disk
  • Device QoE (Quality of Experience) metrics
  • Streaming health metrics

We started small, with tiny customer allocations for hour-long experiments. After validating performance, we slowly built up scope: we increased the percentage of customer allocations, introduced multi-region tests, and eventually ran 12-hour or day-long experiments. Validating along the way is essential, since Sticky Canaries impact live production traffic and are assigned persistently to a customer.

After several sticky canary experiments, we had assurance that Phase 2 of the migration improved all core metrics, and we could dial up GraphQL globally with confidence.

Wins

Sticky Canaries were essential to building confidence in our new GraphQL services.

  • Non-Idempotent APIs: these tests are compatible with mutating or non-idempotent APIs
  • Business metrics: Sticky Canaries validated that our core Netflix business metrics had improved after the migration
  • System performance: insights into latency and resource utilization helped us understand how scaling profiles change after migration

Gotchas

  • Negative Customer Impact: Sticky Canaries can impact real users. We needed confidence in our new services before persistently routing some customers to them. This is partially mitigated by real-time impact detection, which automatically cancels experiments.
  • Short-lived: Sticky Canaries are meant for short-lived experiments. For longer-lived tests, a full-blown AB test should be used.

Technology is constantly changing, and we, as engineers, spend a large part of our careers performing migrations. The question is not whether we are migrating, but whether we are migrating safely, with zero downtime, in a timely manner.

At Netflix, we have developed tools that build confidence in these migrations, each targeted toward the specific use case being tested. We covered three of them here: AB Testing, Replay Testing, and Sticky Canaries, which we used for the GraphQL migration.

This blog post is part of our Migrating Critical Traffic series. Also, check out Migrating Critical Traffic at Scale (part 1, part 2) and Ensuring the Successful Launch of Ads.
