{"id":146804,"date":"2025-07-11T05:22:17","date_gmt":"2025-07-11T05:22:17","guid":{"rendered":"https:\/\/showbizztoday.com\/index.php\/2025\/07\/11\/netflix-tudum-architecture-from-cqrs-with-kafka-to-cqrs-with-raw-hollow-by-netflix-technology-blog-jul-2025\/"},"modified":"2025-07-11T05:22:18","modified_gmt":"2025-07-11T05:22:18","slug":"netflix-tudum-architecture-from-cqrs-with-kafka-to-cqrs-with-raw-hollow-by-netflix-technology-blog-jul-2025","status":"publish","type":"post","link":"https:\/\/showbizztoday.com\/index.php\/2025\/07\/11\/netflix-tudum-architecture-from-cqrs-with-kafka-to-cqrs-with-raw-hollow-by-netflix-technology-blog-jul-2025\/","title":{"rendered":"Netflix Tudum Architecture: from CQRS with Kafka to CQRS with RAW Hollow | by Netflix Technology Blog | Jul, 2025"},"content":{"rendered":"<div>\n<p id=\"6413\" class=\"pw-post-body-paragraph nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op hp bk\">The high-level diagram above focuses on storage &amp; distribution, illustrating how we leveraged Kafka to separate the write and read databases. The write database would store internal page content and metadata from our CMS. The read database would store read-optimized page content, for example: CDN image URLs rather than internal asset IDs, and movie titles, synopses, and actor names instead of placeholders. This content ingestion pipeline allowed us to regenerate all consumer-facing content on demand, applying new structure and data, such as global navigation or branding changes. The Tudum Ingestion Service converted internal CMS data into a read-optimized format by applying page templates, running validations, performing data transformations, and producing the individual content elements to a Kafka topic. 
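The ingestion transformation described above can be sketched as a minimal illustration. This is not Netflix's actual code: the lookup tables, record shape, topic name, and the in-memory `produce` stand-in (used here instead of a real Kafka producer) are all assumptions.

```python
# Illustrative lookup tables standing in for the asset service and the
# title-metadata store that the real ingestion service would query.
ASSET_CDN = {"asset-42": "https://cdn.example.com/images/42.jpg"}
TITLE_METADATA = {"title-7": {"name": "Stranger Things", "synopsis": "Kids vs. the Upside Down."}}

def to_read_optimized(cms_record: dict) -> dict:
    """Replace internal IDs and placeholders with consumer-facing values."""
    title = TITLE_METADATA[cms_record["title_id"]]
    return {
        "page_id": cms_record["page_id"],
        "hero_image_url": ASSET_CDN[cms_record["hero_asset_id"]],  # CDN URL, not an asset ID
        "title": title["name"],                                    # real title, not a placeholder
        "synopsis": title["synopsis"],
    }

def produce(topic: str, key: str, value: dict, sink: list) -> None:
    """Stand-in for a Kafka producer: appends to an in-memory sink."""
    sink.append((topic, key, value))

events: list = []
element = to_read_optimized(
    {"page_id": "p1", "title_id": "title-7", "hero_asset_id": "asset-42"}
)
produce("tudum.page-elements", "p1", element, events)
```

The key point is that all expensive resolution (asset lookups, metadata joins) happens once at write time, so readers never pay for it.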
The Data Service Consumer received the content elements from Kafka, stored them in a high-availability database (Cassandra), and acted as an API layer for the Page Construction service and other internal Tudum services to retrieve content.<\/p>\n<p id=\"704d\" class=\"pw-post-body-paragraph nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op hp bk\">A key benefit of decoupling the read and write paths is the ability to scale them independently. Connecting the write and read databases with an event-driven architecture is a well-known approach. As a result, content edits would <strong class=\"nx ip\"><em class=\"qg\">eventually<\/em><\/strong> appear on <a class=\"ag hb\" href=\"http:\/\/tudum.com\" rel=\"noopener ugc nofollow\" target=\"_blank\">tudum.com<\/a>.<\/p>\n<p id=\"05f4\" class=\"pw-post-body-paragraph nv nw io nx b ny qb oa ob oc qc oe of go qd oh oi gr qe ok ol gu qf on oo op hp bk\">Did you notice the emphasis on \u201c<strong class=\"nx ip\"><em class=\"qg\">eventually<\/em><\/strong>\u201d? A major downside of this architecture was the delay between making an edit and seeing that edit reflected on the website. 
For instance, when the team publishes an update, the following steps must occur:<\/p>\n<ol class=\"\">\n<li id=\"d25e\" class=\"nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op rb qi qj bk\">Call the REST endpoint on the third-party CMS to save the data.<\/li>\n<li id=\"3462\" class=\"nv nw io nx b ny qk oa ob oc ql oe of go qm oh oi gr qn ok ol gu qo on oo op rb qi qj bk\">Wait for the CMS to notify the Tudum Ingestion layer via a webhook.<\/li>\n<li id=\"3b1e\" class=\"nv nw io nx b ny qk oa ob oc ql oe of go qm oh oi gr qn ok ol gu qo on oo op rb qi qj bk\">Wait for the Tudum Ingestion layer to query all necessary sections via API, validate data and assets, process the page, and produce the modified content to Kafka.<\/li>\n<li id=\"140d\" class=\"nv nw io nx b ny qk oa ob oc ql oe of go qm oh oi gr qn ok ol gu qo on oo op rb qi qj bk\">Wait for the Data Service Consumer to consume this message from Kafka and store it in the database.<\/li>\n<li id=\"d298\" class=\"nv nw io nx b ny qk oa ob oc ql oe of go qm oh oi gr qn ok ol gu qo on oo op rb qi qj bk\">Finally, after some <strong class=\"nx ip\">cache refresh delay<\/strong>, this data would <strong class=\"nx ip\"><em class=\"qg\">eventually<\/em><\/strong> become available to the Page Construction service. 
Great!<\/li>\n<\/ol>\n<p id=\"4a60\" class=\"pw-post-body-paragraph nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op hp bk\">By introducing a highly scalable, eventually consistent architecture, we were missing the ability to quickly render changes after writing them \u2014 an important capability for internal previews.<\/p>\n<p id=\"6b8e\" class=\"pw-post-body-paragraph nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op hp bk\">In our performance profiling, we found the source of the delay was our Page Data Service, which acted as a facade for an underlying <a class=\"ag hb\" rel=\"noopener ugc nofollow\" href=\"https:\/\/netflixtechblog.com\/introducing-netflixs-key-value-data-abstraction-layer-1ea8a0a11b30\" target=\"_blank\" data-discover=\"true\">Key Value Data Abstraction<\/a> database. Page Data Service utilized a <strong class=\"nx ip\">near cache<\/strong> to accelerate page building and reduce read latencies from the database.<\/p>\n<p id=\"9814\" class=\"pw-post-body-paragraph nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op hp bk\">This cache was implemented to optimize the N+1 key lookups necessary for page construction by keeping a complete data set in memory. When engineers hear \u201c<em class=\"qg\">slow reads<\/em>,\u201d the quick answer is often \u201c<em class=\"qg\">cache<\/em>,\u201d which is exactly what our team adopted. The KVDAL near cache refreshes in the background on each app node. Regardless of which system modifies the data, the cache is updated with each refresh cycle. If you have 60 keys and a refresh interval of 60 seconds, the near cache will update one key per second. This was problematic for previewing recent modifications, as those changes were only reflected after a full cache refresh cycle. 
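A toy model of this refresh behavior makes the staleness concrete. The mechanics here are assumed (one key refreshed per tick, round-robin; this is not the actual KVDAL implementation): a key written just after its slot in the cycle waits nearly a full refresh interval before readers see it.

```python
class NearCache:
    """Toy near cache: reads hit an in-memory copy; a background
    refresher updates one key per tick from the backing store."""

    def __init__(self, backing: dict):
        self.backing = backing          # authoritative store
        self.cache = dict(backing)      # warm in-memory copy
        self.keys = list(backing)
        self._next = 0

    def refresh_one(self) -> None:
        """Background refresher: update a single key, round-robin."""
        key = self.keys[self._next % len(self.keys)]
        self.cache[key] = self.backing[key]
        self._next += 1

    def get(self, key):
        return self.cache[key]          # reads never touch the backing store

backing = {f"k{i}": 0 for i in range(60)}   # 60 keys, e.g. one refreshed per second
cache = NearCache(backing)
backing["k59"] = 1                          # an editor publishes a change
for _ in range(59):                         # refresher works through k0..k58 first
    cache.refresh_one()
stale = cache.get("k59")                    # still the old value after 59 ticks
cache.refresh_one()
fresh = cache.get("k59")                    # the edit appears only on the 60th tick
```

With 60 keys and a 60-second interval, that last key's edit takes a full minute to surface, which is exactly the preview delay described above, and it grows as the key count grows.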
As Tudum\u2019s content grew, cache refresh times increased, further extending the delay.<\/p>\n<p id=\"76e2\" class=\"pw-post-body-paragraph nv nw io nx b ny qb oa ob oc qc oe of go qd oh oi gr qe ok ol gu qf on oo op hp bk\">As this pain point grew, a new technology was being developed that could act as our silver bullet. <a class=\"ag hb\" href=\"https:\/\/hollow.how\/raw-hollow-sigmod.pdf\" rel=\"noopener ugc nofollow\" target=\"_blank\">RAW Hollow<\/a> is an innovative in-memory, co-located, compressed object database developed by Netflix, designed to handle small to medium datasets with support for strong read-after-write consistency. It addresses the challenges of achieving consistent performance with low latency and high availability in applications that deal with less frequently changing datasets. Unlike traditional SQL databases or fully in-memory solutions, RAW Hollow offers a novel approach where the entire dataset is distributed across the application cluster and resides in the memory of every application process.<\/p>\n<p id=\"3cf2\" class=\"pw-post-body-paragraph nv nw io nx b ny nz oa ob oc od oe of go og oh oi gr oj ok ol gu om on oo op hp bk\">This design leverages compression techniques to scale datasets up to 100 million records per entity, ensuring extremely low latencies and high availability. RAW Hollow provides eventual consistency by default, with the option for strong consistency at the individual request level, allowing users to balance between high availability and data consistency. It simplifies the development of highly available and scalable stateful applications by eliminating the complexities of cache synchronization and external dependencies. 
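The default-eventual, opt-in-strong consistency choice can be illustrated with a toy store. The API below is hypothetical (RAW Hollow's real interface is not shown in this post); it only demonstrates the trade-off: a plain read may serve a lagging local replica, while a strong read pays extra latency to guarantee read-after-write visibility.

```python
class ToyStore:
    """Toy model of per-request consistency choice (not RAW Hollow's API)."""

    def __init__(self):
        self.durable = {}        # acknowledged, cluster-wide state
        self.local_replica = {}  # this process's in-memory copy, lags behind

    def write(self, key, value) -> None:
        self.durable[key] = value
        # In a real system, propagation to local replicas happens asynchronously.

    def sync(self) -> None:
        """Catch the local replica up to the durable state."""
        self.local_replica.update(self.durable)

    def read(self, key, strong: bool = False):
        if strong:
            self.sync()          # pay latency to observe the latest write
        return self.local_replica.get(key)

store = ToyStore()
store.write("page:home", "v2")
eventual = store.read("page:home")                  # may miss the fresh write
strong_read = store.read("page:home", strong=True)  # guaranteed read-after-write
```

For Tudum's preview problem, the strong option is the interesting one: an editor's own read can be forced to see their just-published write.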
This makes RAW Hollow a powerful solution for efficiently managing datasets in environments like Netflix\u2019s streaming services, where high performance and reliability are paramount.<\/p>\n<p id=\"e493\" class=\"pw-post-body-paragraph nv nw io nx b ny qb oa ob oc qc oe of go qd oh oi gr qe ok ol gu qf on oo op hp bk\">Tudum was an ideal fit to battle-test RAW Hollow while it was pre-GA internally. Hollow\u2019s high-density near cache significantly reduces I\/O. Having our primary dataset in memory allows Tudum\u2019s various microservices (page construction, search, personalization) to access data synchronously in O(1) time, simplifying architecture, reducing code complexity, and increasing fault tolerance.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The high-level diagram above focuses on storage &amp; distribution, illustrating how we leveraged Kafka to separate the write and read databases. The write database would store internal page content and metadata from our CMS. 
The read database would store read-optimized page content, for example: CDN image URLs rather than internal [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":146806,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[11876,955,12356,12358,5877,12357,115,4388,4337,9031],"class_list":{"0":"post-146804","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-netflix","8":"tag-architecture","9":"tag-blog","10":"tag-cqrs","11":"tag-hollow","12":"tag-jul","13":"tag-kafka","14":"tag-netflix","15":"tag-raw","16":"tag-technology","17":"tag-tudum"},"_links":{"self":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/posts\/146804","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/comments?post=146804"}],"version-history":[{"count":0,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/posts\/146804\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/media\/146806"}],"wp:attachment":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/media?parent=146804"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/categories?post=146804"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/tags?post=146804"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}