{"id":138453,"date":"2025-04-11T02:12:56","date_gmt":"2025-04-11T02:12:56","guid":{"rendered":"https:\/\/showbizztoday.com\/index.php\/2025\/04\/11\/foundation-model-for-personalized-recommendation-by-netflix-technology-blog-mar-2025\/"},"modified":"2025-04-11T02:12:57","modified_gmt":"2025-04-11T02:12:57","slug":"foundation-model-for-personalized-recommendation-by-netflix-technology-blog-mar-2025","status":"publish","type":"post","link":"https:\/\/showbizztoday.com\/index.php\/2025\/04\/11\/foundation-model-for-personalized-recommendation-by-netflix-technology-blog-mar-2025\/","title":{"rendered":"Foundation Model for Personalized Recommendation | by Netflix Technology Blog | Mar, 2025"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div>\n<div>\n<div>\n<div class=\"speechify-ignore ab cr\">\n<div class=\"speechify-ignore bh l\">\n<div class=\"jp jq jr js jt ab\">\n<div>\n<div class=\"ab ju\">\n<div>\n<div class=\"bm\" aria-hidden=\"false\"><a href=\"https:\/\/netflixtechblog.medium.com\/?source=post_page---byline--1a0bd8e02d39---------------------------------------\" rel=\"noopener follow\" target=\"_blank\"><\/p>\n<div class=\"l jv jw by jx jy\">\n<div class=\"l fm\"><img decoding=\"async\" alt=\"Netflix Technology Blog\" class=\"l ff by df dg cz\" src=\"https:\/\/miro.medium.com\/v2\/resize:fill:88:88\/1*BJWRqfSMf9Da9vsXG9EBRQ.jpeg\" width=\"44\" height=\"44\" loading=\"lazy\" data-testid=\"authorPhoto\"\/><\/div>\n<\/div>\n<p><\/a><\/div>\n<\/div>\n<div class=\"kb ab fm\">\n<div>\n<div class=\"bm\" aria-hidden=\"false\"><a href=\"https:\/\/netflixtechblog.com\/?source=post_page---byline--1a0bd8e02d39---------------------------------------\" rel=\"noopener  ugc nofollow\" target=\"_blank\"><\/p>\n<div class=\"l kc kd by jx ke\">\n<div class=\"l fm\"><img decoding=\"async\" alt=\"Netflix TechBlog\" class=\"l ff by br kf cz\" src=\"https:\/\/miro.medium.com\/v2\/resize:fill:48:48\/1*ty4NvNrGg4ReETxqU2N3Og.png\" width=\"24\" height=\"24\" loading=\"lazy\" 
data-testid=\"publicationPhoto\"\/><\/div>\n<\/div>\n<p><\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"bn bh l\">\n<div class=\"l cb\"><span class=\"bf b bg z dw\"><\/p>\n<div class=\"ab cp kj kk kl\"><span class=\"bf b bg z dw\"><\/p>\n<div class=\"ab ae\"><span data-testid=\"storyReadTime\">11 min learn<\/span><\/p>\n<p><span class=\"l\" aria-hidden=\"true\"><span class=\"bf b bg z dw\">\u00b7<\/span><\/span><\/p>\n<p><span data-testid=\"storyPublishDate\">Mar 21, 2025<\/span><\/div>\n<p><\/span><\/div>\n<p><\/span><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p id=\"af55\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">By <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/markhsiao\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Ko-Jen Hsiao<\/a>, <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/yesufeng\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Yesu Feng<\/a> and <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/sudarshanlamkhede\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Sudarshan Lamkhede<\/a><\/p>\n<p id=\"4d49\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">Netflix\u2019s personalised recommender system is a posh system, boasting quite a lot of specialised machine realized fashions every catering to distinct wants together with \u201cContinue Watching\u201d and \u201cToday\u2019s Top Picks for You.\u201d (Refer to our latest <a class=\"ag hc\" href=\"https:\/\/videorecsys.com\/slides\/mark_talk3.pdf\" rel=\"noopener ugc nofollow\" target=\"_blank\">overview<\/a> for extra particulars). However, as we expanded our set of personalization algorithms to satisfy growing enterprise wants, upkeep of the recommender system turned fairly pricey. 
Furthermore, it was difficult to transfer innovations from one model to another, given that most models were independently trained despite using common data sources. This underscored the need for a new recommender system architecture where member preference learning is centralized, enhancing accessibility and utility across different models.<\/p>\n<p id=\"b0d8\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">In particular, these models predominantly extract features from members\u2019 recent interaction histories on the platform. Yet, many are confined to a limited temporal window due to constraints in serving latency or training costs. This limitation has inspired us to develop a foundation model for recommendation. This model aims to assimilate information both from members\u2019 comprehensive interaction histories and our content at a very large scale. It facilitates the distribution of these learnings to other models, either through shared model weights for fine-tuning or directly through embeddings.<\/p>\n<p id=\"34e6\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">The impetus for constructing a foundational recommendation model is based on the paradigm shift in natural language processing (NLP) to large language models (LLMs). In NLP, the trend is moving away from numerous small, specialized models towards a single, large language model that can perform a variety of tasks either directly or with minimal fine-tuning.
Key insights from this shift include:<\/p>\n<ol class=\"\">\n<li id=\"c797\" class=\"od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox pz qa qb bk\"><strong class=\"of ip\">A Data-Centric Approach<\/strong>: Shifting focus from model-centric strategies, which heavily rely on feature engineering, to a data-centric one. This approach prioritizes the accumulation of large-scale, high-quality data and, where feasible, aims for end-to-end learning.<\/li>\n<li id=\"00ed\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\"><strong class=\"of ip\">Leveraging Semi-Supervised Learning<\/strong>: The next-token prediction objective in LLMs has proven remarkably effective. It enables large-scale semi-supervised learning using unlabeled data while also equipping the model with a surprisingly deep understanding of world knowledge.<\/li>\n<\/ol>\n<p id=\"215e\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">These insights have shaped the design of our foundation model, enabling a transition from maintaining numerous small, specialized models to building a scalable, efficient system. By scaling up semi-supervised training data and model parameters, we aim to develop a model that not only meets current needs but also adapts dynamically to evolving demands, ensuring sustainable innovation and resource efficiency.<\/p>\n<p id=\"9678\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">At Netflix, user engagement spans a wide spectrum, from casual browsing to committed movie watching.
With over 300 million users at the end of 2024, this translates into hundreds of billions of interactions \u2014 an immense dataset comparable in scale to the token volume of large language models (LLMs). However, as in LLMs, the quality of data often outweighs its sheer volume. To harness this data effectively, we employ a process of interaction tokenization, ensuring meaningful events are identified and redundancies are minimized.<\/p>\n<p id=\"6600\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\"><strong class=\"of ip\">Tokenizing User Interactions<\/strong>: Not all raw user actions contribute equally to understanding preferences. Tokenization helps define what constitutes a meaningful \u201ctoken\u201d in a sequence. Drawing an analogy to Byte Pair Encoding (BPE) in NLP, we can think of tokenization as merging adjacent actions to form new, higher-level tokens. However, unlike language tokenization, creating these new tokens requires careful consideration of what information to retain.
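<\/p>
<p>As an illustration of the BPE-style merge described above, here is a minimal Python sketch. The event fields (<code>title_id<\/code>, <code>action<\/code>, <code>watch_seconds<\/code>) are hypothetical placeholders, not Netflix\u2019s actual schema:<\/p>

```python
from itertools import groupby

def tokenize_interactions(events):
    """Merge consecutive events on the same title into one higher-level token.

    Unlike text BPE, merging must decide what to keep: here we sum watch
    duration and aggregate engagement types, so critical detail survives
    the compression.
    """
    tokens = []
    for title_id, run in groupby(events, key=lambda e: e["title_id"]):
        run = list(run)
        tokens.append({
            "title_id": title_id,
            "watch_seconds": sum(e["watch_seconds"] for e in run),  # durations summed
            "actions": sorted({e["action"] for e in run}),          # engagement types aggregated
        })
    return tokens

# A trailer play followed by a full play of the same title collapses into one token.
events = [
    {"title_id": "A", "action": "trailer", "watch_seconds": 120},
    {"title_id": "A", "action": "play", "watch_seconds": 3600},
    {"title_id": "B", "action": "play", "watch_seconds": 1500},
]
tokens = tokenize_interactions(events)
```

<p>How much detail each merged token keeps is exactly the lossiness tradeoff the tokenization design has to manage.<\/p>
<p>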
For instance, the total watch duration might need to be summed or engagement types aggregated to preserve critical details.<\/p>\n<figure class=\"qk ql qm qn qo qp qh qi paragraph-image\"><img alt=\"Tokenization of user interaction history\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*1dhdoLxKnf_fcZOq\" width=\"700\" height=\"281\" loading=\"lazy\"\/><figcaption class=\"qv fh qw qh qi qx qy bf b bg z dw\"><strong class=\"bf pa\">Figure 1.<\/strong> Tokenization of user interaction history by merging actions on the same title, preserving important information.<\/figcaption><\/figure>\n<p id=\"eb1f\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">This tradeoff between granular data and sequence compression is akin to the balance in LLMs between vocabulary size and context window. In our case, the goal is to balance the length of interaction history against the level of detail retained in individual tokens. Overly lossy tokenization risks losing valuable signals, while too granular a sequence can exceed practical limits on processing time and memory.<\/p>\n<p id=\"55a5\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">Even with such strategies, interaction histories from active users can span thousands of events, exceeding the capacity of transformer models with standard self-attention layers.
In recommendation systems, context windows during inference are often limited to hundreds of events \u2014 not due to model capability but because these services typically require millisecond-level latency. This constraint is more stringent than what is typical in LLM applications, where longer inference times (seconds) are more tolerable.<\/p>\n<p id=\"c640\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">To address this during training, we implement two key solutions:<\/p>\n<ol class=\"\">\n<li id=\"0868\" class=\"od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox pz qa qb bk\"><strong class=\"of ip\">Sparse Attention Mechanisms<\/strong>: By leveraging sparse attention techniques such as low-rank compression, the model can extend its context window to several hundred events while maintaining computational efficiency. This enables it to process more extensive interaction histories and derive richer insights into long-term preferences.<\/li>\n<li id=\"75fa\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\"><a class=\"ag hc\" href=\"https:\/\/arxiv.org\/abs\/2409.14517\" rel=\"noopener ugc nofollow\" target=\"_blank\"><strong class=\"of ip\">Sliding Window Sampling<\/strong><\/a>: During training, we sample overlapping windows of interactions from the full sequence.
This ensures the model is exposed to different segments of the user\u2019s history over multiple epochs, allowing it to learn from the entire sequence without requiring an impractically large context window.<\/li>\n<\/ol>\n<p id=\"00ba\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">At inference time, when multi-step decoding is required, we can deploy KV caching to efficiently reuse past computations and maintain low latency.<\/p>\n<p id=\"e2ee\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">These approaches collectively allow us to balance the need for detailed, long-term interaction modeling with the practical constraints of model training and inference, enhancing both the precision and scalability of our recommendation system.<\/p>\n<p id=\"c410\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\"><strong class=\"of ip\">Information in Each \u2018Token\u2019<\/strong>: While the first part of our tokenization process focuses on structuring sequences of interactions, the next crucial step is defining the rich information contained within each token. Unlike LLMs, which typically rely on a single embedding space to represent input tokens, our interaction events are packed with heterogeneous details. These include attributes of the action itself (such as locale, time, duration, and device type) as well as information about the content (such as item ID and metadata like genre and release country). Most of these features, especially categorical ones, are directly embedded within the model, embracing an end-to-end learning approach. However, certain features require special attention.
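<\/p>
<p>The sliding window sampling described in the list above can be sketched as follows; the window size and windows-per-epoch values are illustrative, not the production configuration:<\/p>

```python
import random

def sample_training_windows(sequence, window=4, windows_per_epoch=3, rng=None):
    """Sample overlapping fixed-size windows from a long interaction sequence.

    Over many epochs the model sees different segments of the full history,
    so it can learn from the entire sequence without a context window as
    long as the sequence itself.
    """
    rng = rng or random.Random(0)
    if len(sequence) <= window:
        return [list(sequence)]
    max_start = len(sequence) - window
    starts = [rng.randint(0, max_start) for _ in range(windows_per_epoch)]
    return [list(sequence[s:s + window]) for s in starts]

history = list(range(10))  # a toy user history of 10 interaction tokens
windows = sample_training_windows(history)
```

<p>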
For example, timestamps need additional processing to capture both absolute and relative notions of time, with absolute time being particularly important for understanding time-sensitive behaviors.<\/p>\n<p id=\"5ece\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">To enhance prediction accuracy in sequential recommendation systems, we organize token features into two categories:<\/p>\n<ol class=\"\">\n<li id=\"a88b\" class=\"od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox pz qa qb bk\"><strong class=\"of ip\">Request-Time Features<\/strong>: These are features available at the moment of prediction, such as log-in time, device, or location.<\/li>\n<li id=\"5441\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\"><strong class=\"of ip\">Post-Action Features<\/strong>: These are details available after an interaction has occurred, such as the specific show interacted with or the duration of the interaction.<\/li>\n<\/ol>\n<p id=\"5152\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">To predict the next interaction, we combine request-time features from the current step with post-action features from the <a class=\"ag hc\" href=\"https:\/\/ojs.aaai.org\/aimagazine\/index.php\/aimagazine\/article\/view\/18140\" rel=\"noopener ugc nofollow\" target=\"_blank\">previous step<\/a>. This blending of contextual and historical information ensures each token in the sequence carries a comprehensive representation, capturing both the immediate context and user behavior patterns over time.<\/p>\n<p id=\"6f3d\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">As previously mentioned, our default approach employs the autoregressive next-token prediction objective, similar to GPT.
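<\/p>
<p>The pairing of current-step request-time features with previous-step post-action features described above can be sketched as follows; the field names are hypothetical placeholders:<\/p>

```python
def build_input_tokens(steps):
    """Pair each step's request-time features with the previous step's
    post-action features.

    Each token then carries the immediate context (what is known when the
    prediction is made) plus the outcome of the user's last interaction,
    while the current step's post-action features become the target.
    """
    tokens = []
    prev_post_action = None  # nothing has happened before the first step
    for step in steps:
        tokens.append({
            "request_time": step["request_time"],  # available at prediction time
            "post_action": prev_post_action,       # only known after the previous action
            "target": step["post_action"],         # what the model must predict
        })
        prev_post_action = step["post_action"]
    return tokens

steps = [
    {"request_time": {"device": "tv", "hour": 20}, "post_action": {"item": "A", "secs": 3600}},
    {"request_time": {"device": "phone", "hour": 9}, "post_action": {"item": "B", "secs": 300}},
]
tokens = build_input_tokens(steps)
```

<p>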
This strategy effectively leverages the vast scale of unlabeled user interaction data. The adoption of this objective in recommendation systems has shown multiple successes [1\u20133]. However, given the distinct differences between language tasks and recommendation tasks, we have made several critical modifications to the objective.<\/p>\n<p id=\"91bb\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">First, during the pretraining phase of typical LLMs, such as GPT, every target token is generally treated with equal weight. In contrast, in our model, not all user interactions are of equal importance. For instance, a 5-minute trailer play should not carry the same weight as a 2-hour full movie watch. A greater challenge arises when trying to align long-term user satisfaction with specific interactions and recommendations. To address this, we can adopt a multi-token prediction objective during training, where the model predicts the next <em class=\"qz\">n<\/em> tokens at each step instead of a single token [4]. This approach encourages the model to capture longer-term dependencies and avoid myopic predictions focused solely on immediate next events.<\/p>\n<p id=\"a9d5\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">Second, we can use multiple fields in our input data as auxiliary prediction targets in addition to predicting the next item ID, which remains the primary target. For example, we can derive genres from the items in the original sequence and use this genre sequence as an auxiliary target.
This approach serves multiple purposes: it acts as a regularizer to reduce overfitting on noisy item ID predictions, provides additional insights into user intentions or long-term genre preferences, and, when structured hierarchically, can improve the accuracy of predicting the target item ID. By first predicting auxiliary targets, such as genre or original language, the model effectively narrows down the candidate list, simplifying subsequent item ID prediction.<\/p>\n<p id=\"0a8b\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">In addition to the infrastructure challenges posed by training bigger models on substantial amounts of user interaction data, which are common when building foundation models, there are several unique hurdles specific to recommendations. One of these unique challenges is entity cold-starting.<\/p>\n<p id=\"a38b\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">At Netflix, our mission is to entertain the world. New titles are added to the catalog frequently, so our recommendation foundation models require a cold-start capability: they need to estimate members\u2019 preferences for newly launched titles before anyone has engaged with them. To enable this, our foundation model training framework is built with two capabilities: incremental training and inference with unseen entities.<\/p>\n<ol class=\"\">\n<li id=\"d462\" class=\"od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox pz qa qb bk\"><strong class=\"of ip\">Incremental training<\/strong>: Foundation models are trained on extensive datasets, including every member\u2019s history of plays and actions, making frequent retraining impractical.
However, our catalog and member preferences continually evolve. Unlike large language models, which can be incrementally trained with stable token vocabularies, our recommendation models require new embeddings for new titles, necessitating expanded embedding layers and output components. To address this, we warm-start new models by reusing parameters from previous models and initializing new parameters for new titles. For example, new title embeddings can be initialized by adding slight random noise to existing average embeddings or by using a weighted combination of similar titles\u2019 embeddings based on metadata. This approach allows new titles to start with relevant embeddings, facilitating faster fine-tuning. In practice, the initialization method becomes less critical when more member interaction data is used for fine-tuning.<\/li>\n<li id=\"09a9\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\"><strong class=\"of ip\">Dealing with unseen entities<\/strong>: Even with incremental training, the model is not always guaranteed to learn well on new entities (e.g., newly launched titles). Some new entities may also not be included or seen in the training data, even when we fine-tune foundation models frequently. Therefore, it is also important to let foundation models use metadata information about entities and inputs, not just member interaction data. Thus, our foundation model combines both learnable item ID embeddings and learnable embeddings from metadata.
The following diagram demonstrates this idea.<\/li>\n<\/ol>\n<figure class=\"qk ql qm qn qo qp qh qi paragraph-image\"><img alt=\"Metadata-based title embeddings\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*7qnfUGWgXtVUjhP9\" width=\"700\" height=\"389\" loading=\"lazy\"\/><figcaption class=\"qv fh qw qh qi qx qy bf b bg z dw\"><strong class=\"bf pa\">Figure 2.<\/strong> Titles are associated with various metadata, such as genres, storylines, and tones. Each type of metadata can be represented by averaging its respective embeddings, which are then concatenated to form the overall metadata-based embedding for the title.<\/figcaption><\/figure>\n<p id=\"c1b6\" class=\"pw-post-body-paragraph od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox hp bk\">To create the final title embedding, we combine this metadata-based embedding with a fully learnable ID-based embedding using a mixing layer. Instead of simply summing these embeddings, we use an attention mechanism based on the \u201cage\u201d of the entity. This approach allows new titles with limited interaction data to rely more on metadata, while established titles can rely more on ID-based embeddings. Since titles with similar metadata can have different user engagement, their embeddings should reflect those differences. Introducing some randomness during training also encourages the model to learn from metadata rather than relying solely on ID embeddings.
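<\/p>
<p>A minimal NumPy sketch of this age-based mixing follows; the gating formula, the 30-day constant, and the noise scale are assumptions for illustration, not the actual attention layer:<\/p>

```python
import numpy as np

def mix_title_embedding(id_emb, meta_emb, age_days, train=False, rng=None):
    """Blend ID-based and metadata-based title embeddings with an age gate.

    New titles (small age) lean on metadata; established titles lean on the
    learned ID embedding. During training, a little noise on the ID side
    nudges the model to actually use the metadata path.
    """
    id_emb = np.asarray(id_emb, dtype=float)
    meta_emb = np.asarray(meta_emb, dtype=float)
    alpha = age_days / (age_days + 30.0)  # gate in [0, 1): 0 for brand-new titles
    if train:
        rng = rng or np.random.default_rng(0)
        id_emb = id_emb + rng.normal(scale=0.01, size=id_emb.shape)
    return alpha * id_emb + (1.0 - alpha) * meta_emb

new_title = mix_title_embedding([1.0, 0.0], [0.0, 1.0], age_days=0)    # all metadata
old_title = mix_title_embedding([1.0, 0.0], [0.0, 1.0], age_days=3000) # mostly ID
```

<p>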
This method ensures that newly launched or pre-launch titles have reasonable embeddings even with no user interaction data.<\/p>\n<p id=\"06ff\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">Our recommendation foundation model is designed to understand long-term member preferences and can be utilized in various ways by downstream applications:<\/p>\n<ol class=\"\">\n<li id=\"eb13\" class=\"od oe io of b og oh oi oj ok ol om on gp oo op oq gs or os ot gv ou ov ow ox pz qa qb bk\"><strong class=\"of ip\">Direct Use as a Predictive Model<\/strong>: The model is primarily trained to predict the next entity a user will interact with. It includes multiple predictor heads for different tasks, such as forecasting member preferences for various genres. These can be directly applied to meet diverse business needs.<\/li>\n<li id=\"b2fa\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\"><strong class=\"of ip\">Utilizing Embeddings<\/strong>: The model generates valuable embeddings for members and entities like videos, games, and genres. These embeddings are calculated in batch jobs and stored for use in both offline and online applications. They can serve as features in other models or be used for candidate generation, such as retrieving appealing titles for a user. High-quality title embeddings also support title-to-title recommendations. However, one important consideration is that the embedding space has arbitrary, uninterpretable dimensions and is incompatible across different model training runs. This poses challenges for downstream consumers, who must adapt to each retraining and redeployment, risking bugs due to invalidated assumptions about the embedding structure.
To address this, we apply an orthogonal low-rank transformation to stabilize the user\/item embedding space, ensuring consistent meaning of embedding dimensions, even as the base foundation model is retrained and redeployed.<\/li>\n<li id=\"fb28\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\"><strong class=\"of ip\">Fine-Tuning with Specific Data<\/strong>: The model\u2019s adaptability allows for fine-tuning with application-specific data. Users can integrate the full model or subgraphs into their own models, fine-tuning them with less data and computational power. This approach achieves performance comparable to previous models, despite the initial foundation model requiring significant resources.<\/li>\n<\/ol>\n<p id=\"e598\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">In scaling up our foundation model for Netflix recommendations, we draw inspiration from the success of large language models (LLMs). Just as LLMs have demonstrated the power of scaling in improving performance, we find that scaling is crucial for enhancing generative recommendation tasks. Successful scaling demands robust evaluation, efficient training algorithms, and substantial computing resources. Evaluation must effectively differentiate model performance and identify areas for improvement. Scaling involves data, model, and context scaling, incorporating user engagement, external reviews, multimedia assets, and high-quality embeddings.
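<\/p>
<p>The embedding-space stabilization mentioned in the list above can be illustrated with a full-rank orthogonal Procrustes alignment, a simpler cousin of the low-rank transformation the text refers to; the use of anchor items shared across two training runs is an assumption of this sketch:<\/p>

```python
import numpy as np

def orthogonal_alignment(new_anchors, old_anchors):
    """Solve the orthogonal Procrustes problem: find the rotation R minimizing
    ||new_anchors @ R - old_anchors||_F, fitted on items embedded by both
    model versions. Applying R to all new embeddings keeps dimension meanings
    stable for downstream consumers across retrains.
    """
    u, _, vt = np.linalg.svd(new_anchors.T @ old_anchors)
    return u @ vt

rng = np.random.default_rng(0)
old = rng.normal(size=(50, 8))   # anchor embeddings from the previous run
theta = np.pi / 3                # pretend the new run learned a rotated basis
rot = np.eye(8)
rot[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
new = old @ rot                  # same geometry, different coordinates
R = orthogonal_alignment(new, old)
aligned = new @ R                # back in the old coordinate system
```

<p>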
Our experiments confirm that the scaling law also applies to our foundation model, with consistent improvements observed as we increase data and model size.<\/p>\n<figure class=\"qk ql qm qn qo qp qh qi paragraph-image\">\n<div role=\"button\" tabindex=\"0\" class=\"qq qr fm qs bh qt\">\n<div class=\"qh qi rb\"><img alt=\"Relative performance improvement versus model parameter size\" class=\"bh fx qu c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*dEypYqp643q6GcVzn3IIww.png\" width=\"700\" height=\"456\" loading=\"lazy\"\/><\/div>\n<\/div><figcaption class=\"qv fh qw qh qi qx qy bf b bg z dw\"><strong class=\"bf pa\">Figure 3. <\/strong>The relationship between model parameter size and relative performance improvement. The plot demonstrates the scaling law in recommendation modeling, showing a trend of increased performance with larger model sizes. The x-axis is logarithmically scaled to highlight growth across different magnitudes.<\/figcaption><\/figure>\n<p id=\"baf8\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">In conclusion, our Foundation Model for Personalized Recommendation represents a significant step towards creating a unified, data-centric system that leverages large-scale data to improve the quality of recommendations for our members. This approach borrows insights from large language models (LLMs), particularly the principles of semi-supervised learning and end-to-end training, aiming to harness the vast scale of unlabeled user interaction data.
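<\/p>\n<p>The trend in Figure 3, a roughly constant quality gain per decade of parameters, can be sketched as a log-linear fit. The numbers below are invented for illustration; the measurements behind the figure are not published:<\/p>

```python
import numpy as np

# Illustrative numbers only (the actual values behind Figure 3 are not public):
# relative quality gain measured at several model sizes.
params = np.array([1e6, 1e7, 1e8, 1e9])   # model parameter counts
gain   = np.array([0.0, 2.1, 4.3, 6.2])   # relative improvement, %

# A scaling-law trend of this shape is linear in log(parameters):
#   gain ~= slope * log10(params) + intercept
slope, intercept = np.polyfit(np.log10(params), gain, deg=1)

# Extrapolate to a 10x larger model under the fitted trend
pred_10b = slope * np.log10(1e10) + intercept
print(f"gain per decade of parameters: {slope:.2f}%, "
      f"predicted gain at 1e10 params: {pred_10b:.2f}%")
```

<p>Under such a fit, each 10x increase in parameters buys a roughly constant quality increment, which is the qualitative pattern the figure reports; whether the trend holds at the extrapolated size is exactly what the evaluation discussed above must establish.<\/p>\n<p>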
Addressing unique challenges, like cold start and presentation bias, the model also acknowledges the distinct differences between language tasks and recommendation. The Foundation Model enables various downstream applications, from direct use as a predictive model to generating user and entity embeddings for other applications, and can be fine-tuned for specific canvases. We see promising results from downstream integrations. This move from multiple specialized models to a more comprehensive system marks an exciting development in the field of personalized recommendation systems.<\/p>\n<p id=\"8a46\" class=\"pw-post-body-paragraph od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox hp bk\">Contributors to this work (names in alphabetical order): <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/aileisun\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Ai-Lei Sun<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/aishafenton\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Aish Fenton<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/annecocos\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Anne Cocos<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/foranuj\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Anuj Shah<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/arashaghevli\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Arash Aghevli<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/baolin-li-659426115\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Baolin Li<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/bowei-yan-0080a326\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Bowei Yan<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/danielzheng256\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Dan Zheng<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/dwliang\/\"
rel=\"noopener ugc nofollow\" target=\"_blank\">Dawen Liang<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/ding-tong-2812785a\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Ding Tong<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/divya-gadde-3ba01551\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Divya Gadde<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/emma-yanyang-kong-6904b457\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Emma Kong<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/gary-y-62175170\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Gary Yeh<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/inbar-naor-6b973a50\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Inbar Naor<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/jinwangw\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Jin Wang<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/jbasilico\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Justin Basilico<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/kabir-nagrecha\/overlay\/about-this-profile\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Kabir Nagrecha<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/kzielnicki\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Kevin Zielnicki<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/linasbaltrunas\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Linas Baltrunas<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/lingyi-liu-4b866016\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Lingyi Liu<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/lequn-luke-wang-9226b2129\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Luke Wang<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/matan-appelbaum-39472b96\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Matan Appelbaum<\/a> <a class=\"ag 
hc\" href=\"https:\/\/www.linkedin.com\/in\/tuzhucheng\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Michael Tu<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/moumitab\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Moumita Bhattacharya<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/pabloadelgado\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Pablo Delgado<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/qiuling-xu-a445b815a\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Qiuling Xu<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/rakeshkomuravelli\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Rakesh Komuravelli<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/raveeshbhalla\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Raveesh Bhalla<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/rob-story-b21a4912\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Rob Story<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/rogermenezes\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Roger Menezes<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/sejoon-oh\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Sejoon Oh<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/shahrzad-naseri-1b988760\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Shahrzad Naseri<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/swanandjoshi7\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Swanand Joshi<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/trungnguyen324\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Trung Nguyen<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/vito-ostuni-0b576027\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Vito Ostuni <\/a><a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/thomasweiwang\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Wei 
Wang<\/a> <a class=\"ag hc\" href=\"https:\/\/www.linkedin.com\/in\/zhezhangncsu\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Zhe Zhang<\/a><\/p>\n<ol class=\"\">\n<li id=\"d518\" class=\"od oe io of b og pu oi oj ok pv om on gp pw op oq gs px os ot gv py ov ow ox pz qa qb bk\">W.-C. Kang and J. McAuley, \u201cSelf-Attentive Sequential Recommendation,\u201d <em class=\"qz\">2018 IEEE International Conference on Data Mining (ICDM)<\/em>, Singapore, 2018, pp. 197\u2013206, doi: 10.1109\/ICDM.2018.00035.<\/li>\n<li id=\"3a76\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\">F. Sun et al., \u201cBERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer,\u201d <em class=\"qz\">Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM \u201819)<\/em>, Beijing, China, 2019, pp. 1441\u20131450, doi: 10.1145\/3357384.3357895.<\/li>\n<li id=\"6b8c\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\">J. Zhai et al., \u201cActions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations,\u201d <em class=\"qz\">arXiv preprint arXiv:2402.17152<\/em>, 2024.<\/li>\n<li id=\"9071\" class=\"od oe io of b og qc oi oj ok qd om on gp qe op oq gs qf os ot gv qg ov ow ox pz qa qb bk\">F. Gloeckle, B. Youbi Idrissi, B. Rozi\u00e8re, D. Lopez-Paz, and G. Synnaeve, \u201cBetter &amp; Faster Large Language Models via Multi-token Prediction,\u201d <em class=\"qz\">arXiv preprint arXiv:2404.19737<\/em>, Apr.
2024.<\/li>\n<\/ol>\n<\/div>\n","protected":false}}