{"id":133934,"date":"2024-12-10T17:28:17","date_gmt":"2024-12-10T17:28:17","guid":{"rendered":"https:\/\/showbizztoday.com\/index.php\/2024\/12\/10\/investigation-of-a-workbench-ui-latency-issue-by-netflix-technology-blog-oct-2024\/"},"modified":"2024-12-10T17:28:18","modified_gmt":"2024-12-10T17:28:18","slug":"investigation-of-a-workbench-ui-latency-issue-by-netflix-technology-blog-oct-2024","status":"publish","type":"post","link":"https:\/\/showbizztoday.com\/index.php\/2024\/12\/10\/investigation-of-a-workbench-ui-latency-issue-by-netflix-technology-blog-oct-2024\/","title":{"rendered":"Investigation of a Workbench UI Latency Issue | by Netflix Technology Blog | Oct, 2024"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div>\n<div>\n<div>\n<div class=\"speechify-ignore ab cp\">\n<div class=\"speechify-ignore bh l\">\n<div class=\"hv hw hx hy hz ab\">\n<div>\n<div class=\"ab ia\">\n<div>\n<div class=\"bm\" aria-hidden=\"false\"><a href=\"https:\/\/netflixtechblog.medium.com\/?source=post_page---byline--faa017b4653d--------------------------------\" rel=\"noopener follow\" target=\"_blank\"><\/p>\n<div class=\"l ib ic by id ie\">\n<div class=\"l fj\"><img decoding=\"async\" alt=\"Netflix Technology Blog\" class=\"l fd by dd de cx\" src=\"https:\/\/miro.medium.com\/v2\/resize:fill:88:88\/1*BJWRqfSMf9Da9vsXG9EBRQ.jpeg\" width=\"44\" height=\"44\" loading=\"lazy\" data-testid=\"authorPhoto\"\/><\/div>\n<\/div>\n<p><\/a><\/div>\n<\/div>\n<div class=\"ih ab fj\">\n<div>\n<div class=\"bm\" aria-hidden=\"false\"><a href=\"https:\/\/netflixtechblog.com\/?source=post_page---byline--faa017b4653d--------------------------------\" rel=\"noopener  ugc nofollow\" target=\"_blank\"><\/p>\n<div class=\"l ii ij by id ik\">\n<div class=\"l fj\"><img decoding=\"async\" alt=\"Netflix TechBlog\" class=\"l fd by br il cx\" src=\"https:\/\/miro.medium.com\/v2\/resize:fill:48:48\/1*ty4NvNrGg4ReETxqU2N3Og.png\" width=\"24\" height=\"24\" loading=\"lazy\" 
data-testid=\"publicationPhoto\"\/><\/div>\n<\/div>\n<p><\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"bn bh l\">\n<div class=\"l ix\"><span class=\"bf b bg z du\"><\/p>\n<div class=\"ab cn iy iz ja\"><span class=\"bf b bg z du\"><\/p>\n<div class=\"ab ae\"><span data-testid=\"storyReadTime\">12 min learn<\/span><\/p>\n<p><span class=\"l\" aria-hidden=\"true\"><span class=\"bf b bg z du\">\u00b7<\/span><\/span><\/p>\n<p><span data-testid=\"storyPublishDate\">Oct 14, 2024<\/span><\/div>\n<p><\/span><\/div>\n<p><\/span><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p id=\"c66a\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">By: <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/hechaoli\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Hechao Li<\/a> and <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/mayworm\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Marcelo Mayworm<\/a><\/p>\n<p id=\"2b76\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">With particular due to our gorgeous colleagues <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/amer-ather-9071181\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Amer Ather<\/a>, <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/itaydafna\" rel=\"noopener ugc nofollow\" target=\"_blank\">Itay Dafna<\/a>, <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/lucaepozzi\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Luca Pozzi<\/a>, <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/matheusdeoleao\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Matheus Le\u00e3o<\/a>, and <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/yeji682\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Ye Ji<\/a>.<\/p>\n<p id=\"072a\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk 
nl nm ox no np nq oy ns nt nu gn bk\">At Netflix, the Analytics and Developer Experience organization, part of the Data Platform, offers a product called Workbench. Workbench is a remote development workspace based on<a class=\"af nv\" rel=\"noopener ugc nofollow\" target=\"_blank\" href=\"https:\/\/netflixtechblog.com\/titus-the-netflix-container-management-platform-is-now-open-source-f868c9fb5436\"> Titus<\/a> that allows data practitioners to work with big data and machine learning use cases at scale. A common use case for Workbench is running<a class=\"af nv\" href=\"https:\/\/jupyterlab.readthedocs.io\/en\/latest\/\" rel=\"noopener ugc nofollow\" target=\"_blank\"> JupyterLab<\/a> Notebooks.<\/p>\n<p id=\"d5f7\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Recently, several users reported that their JupyterLab UI becomes slow and unresponsive when running certain notebooks. 
This post details the intriguing process of debugging this issue, all the way from the UI down to the Linux kernel.<\/p>\n<p id=\"6f03\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">Machine Learning engineer <a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/lucaepozzi\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Luca Pozzi<\/a> reported to our Data Platform team that their <strong class=\"mz gv\">JupyterLab UI on their workbench becomes slow and unresponsive when running some of their Notebooks.<\/strong> Restarting the <em class=\"oz\">ipykernel<\/em> process, which runs the Notebook, could temporarily alleviate the problem, but the frustration persists as more notebooks are run.<\/p>\n<p id=\"35ea\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">While we observed the issue firsthand, the term \u201cUI being slow\u201d is subjective and difficult to measure. To investigate this issue, <strong class=\"mz gv\">we needed a quantitative analysis of the slowness<\/strong>.<\/p>\n<p id=\"2efa\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\"><a class=\"af nv\" href=\"https:\/\/www.linkedin.com\/in\/itaydafna\" rel=\"noopener ugc nofollow\" target=\"_blank\">Itay Dafna<\/a> devised an effective and simple method to quantify the UI slowness. Specifically, we opened a terminal via JupyterLab and held down a key (e.g., \u201cj\u201d) for 15 seconds while running the user\u2019s notebook. The input to stdin is sent to the backend (i.e., JupyterLab) via a WebSocket, and the output to stdout is sent back from the backend and displayed on the UI. 
We then exported the <em class=\"oz\">.har <\/em>file recording all communications from the browser and loaded it into a Notebook for analysis.<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*ltV3CYtNjLCzolXD\" width=\"700\" height=\"252\" loading=\"lazy\" role=\"presentation\"\/><\/figure>\n<p id=\"e91b\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Using this approach, we observed latencies ranging from 1 to 10 seconds, averaging 7.4 seconds.<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*H7KW62J0jZKPTjQH\" width=\"700\" height=\"176\" loading=\"lazy\" role=\"presentation\"\/><\/figure>\n<p id=\"ef5b\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">Now that we have an objective metric for the slowness, let\u2019s formally start our investigation. 
If you have read the symptom carefully, you must have noticed that the slowness only occurs when the user runs <strong class=\"mz gv\">certain<\/strong> notebooks but not others.<\/p>\n<p id=\"c042\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Therefore, the first step is scrutinizing the specific Notebook experiencing the issue. Why does the UI always slow down after running this particular Notebook? Naturally, you&#8217;d think that there must be something wrong with the code running in it.<\/p>\n<p id=\"cf81\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Upon closely inspecting the user\u2019s Notebook, we noticed that a library called <em class=\"oz\">pystan<\/em>, which provides Python bindings to a native C++ library called stan, looked suspicious. Specifically, <em class=\"oz\">pystan<\/em> uses <em class=\"oz\">asyncio<\/em>. 
However, <strong class=\"mz gv\">because there is already an existing <em class=\"oz\">asyncio<\/em> event loop running in the Notebook process and <em class=\"oz\">asyncio<\/em> cannot be nested by design, in order for <em class=\"oz\">pystan<\/em> to work, the authors of <em class=\"oz\">pystan<\/em> <\/strong><a class=\"af nv\" href=\"https:\/\/pystan.readthedocs.io\/en\/latest\/faq.html#how-can-i-use-pystan-with-jupyter-notebook-or-jupyterlab\" rel=\"noopener ugc nofollow\" target=\"_blank\"><strong class=\"mz gv\">recommend<\/strong><\/a><strong class=\"mz gv\"> injecting <em class=\"oz\">pystan<\/em> into the existing event loop by using a package called <\/strong><a class=\"af nv\" href=\"https:\/\/pypi.org\/project\/nest-asyncio\/\" rel=\"noopener ugc nofollow\" target=\"_blank\"><strong class=\"mz gv\"><em class=\"oz\">nest_asyncio<\/em><\/strong><\/a>, a library that became unmaintained because <a class=\"af nv\" href=\"https:\/\/github.com\/erdewit\/ib_insync\/commit\/ef5ea29e44e0c40bbadbc16c2281b3ac58aa4a40\" rel=\"noopener ugc nofollow\" target=\"_blank\">the author sadly passed away<\/a>.<\/p>\n<p id=\"de21\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Given this seemingly hacky usage, we naturally suspected that the events injected by <em class=\"oz\">pystan<\/em> into the event loop were blocking the handling of the WebSocket messages used to communicate with the JupyterLab UI. This reasoning sounds very plausible. 
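The \u201ccannot be nested\u201d constraint is easy to demonstrate with the standard library alone. This minimal sketch (our illustration, not code from the investigation) shows why pystan needs nest_asyncio inside a Jupyter notebook, which always has a loop already running:

```python
import asyncio

async def compute():
    return 42

async def main():
    # Blocking helpers like run_until_complete() refuse to run while a
    # loop is already running in this thread, which is exactly the
    # situation inside a Jupyter notebook cell.
    try:
        coro = compute()
        asyncio.get_event_loop().run_until_complete(coro)
    except RuntimeError as err:
        coro.close()  # suppress the "never awaited" warning
        return str(err)

print(asyncio.run(main()))  # "This event loop is already running"
```

nest_asyncio works around this by patching the loop so that re-entrant `run_until_complete()` calls are allowed.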
However, <strong class=\"mz gv\">the user claimed that there were cases when a Notebook not using <em class=\"oz\">pystan<\/em> was run and the UI also became slow<\/strong>.<\/p>\n<p id=\"ca77\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Moreover, after several rounds of discussion with ChatGPT, we learned more about the architecture and realized that, in theory, <strong class=\"mz gv\">the usage of <em class=\"oz\">pystan<\/em> and <em class=\"oz\">nest_asyncio<\/em> should not cause the slowness in handling the UI WebSocket<\/strong> for the following reasons:<\/p>\n<p id=\"17ba\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Even though <em class=\"oz\">pystan<\/em> uses <em class=\"oz\">nest_asyncio<\/em> to inject itself into the main event loop, <strong class=\"mz gv\">the Notebook runs in a child process (i.e.,<\/strong><strong class=\"mz gv\"> the <em class=\"oz\">ipykernel<\/em> process) of the <em class=\"oz\">jupyter-lab<\/em> server process<\/strong>, which means the main event loop being injected by <em class=\"oz\">pystan<\/em> is that of the <em class=\"oz\">ipykernel<\/em> process, not the <em class=\"oz\">jupyter-server<\/em> process. Therefore, even if <em class=\"oz\">pystan<\/em> blocks the event loop, it shouldn\u2019t impact the <em class=\"oz\">jupyter-lab<\/em> main event loop that is used for UI WebSocket communication. 
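This process isolation is easy to check empirically. The sketch below (an illustration under our own names, not code from the post) fully occupies an event loop in a child process and shows that the parent's own loop stays responsive:

```python
import asyncio
import time
from multiprocessing import Process

def child_blocks_its_loop():
    # The child process runs (and fully occupies) its own event loop;
    # nothing here can touch the parent's loop.
    asyncio.run(asyncio.sleep(2))

def parent_latency():
    p = Process(target=child_blocks_its_loop)
    p.start()
    # Meanwhile the parent's event loop services its own work promptly.
    t0 = time.monotonic()
    asyncio.run(asyncio.sleep(0.1))
    elapsed = time.monotonic() - t0
    p.join()
    return elapsed

if __name__ == "__main__":
    # Expect roughly 0.1s, not the 2s the child is blocked for.
    print(f"parent loop delay: {parent_latency():.2f}s")
```

Each process has its own interpreter and therefore its own event loop, which is why blocking loop B should not, in theory, stall loop A.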
See the diagram below:<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*DsQuZV5qnRXp5mVw\" width=\"700\" height=\"591\" loading=\"lazy\" role=\"presentation\"\/><\/figure>\n<p id=\"c601\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">In other words, <strong class=\"mz gv\"><em class=\"oz\">pystan<\/em> events are injected into event loop B in this diagram instead of event loop A<\/strong>. So, it shouldn\u2019t block the UI WebSocket events.<\/p>\n<p id=\"1d8c\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">You might also think that because event loop A handles both the WebSocket events from the UI and the ZeroMQ socket events from the <em class=\"oz\">ipykernel<\/em> process, a high volume of ZeroMQ events generated by the notebook could block the WebSocket. 
However, <strong class=\"mz gv\">when we captured packets on the ZeroMQ socket while reproducing the issue, we didn\u2019t observe heavy traffic on this socket that could cause such blocking<\/strong>.<\/p>\n<p id=\"f5d9\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">A stronger piece of evidence to rule out <em class=\"oz\">pystan<\/em> was that we were eventually able to reproduce the issue even without it, which I\u2019ll dive into later.<\/p>\n<p id=\"87ad\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">The Workbench instance runs as a <a class=\"af nv\" rel=\"noopener ugc nofollow\" target=\"_blank\" href=\"https:\/\/netflixtechblog.com\/titus-the-netflix-container-management-platform-is-now-open-source-f868c9fb5436\">Titus container<\/a>. To efficiently utilize our compute resources, <strong class=\"mz gv\">Titus employs a CPU oversubscription feature<\/strong>, meaning the combined virtual CPUs allocated to containers exceed the number of available physical CPUs on a Titus agent. <strong class=\"mz gv\">If a container is unfortunate enough to be scheduled alongside other \u201cnoisy\u201d containers (those that consume a lot of CPU resources), it could suffer from CPU starvation.<\/strong><\/p>\n<p id=\"d99f\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">However, after inspecting the CPU utilization of neighboring containers on the same Titus agent as the Workbench instance, as well as the overall CPU utilization of the Titus agent, we quickly ruled out this hypothesis. Using the top command on the Workbench, we observed that when running the Notebook, <strong class=\"mz gv\">the Workbench instance uses only 4 out of the 64 CPUs allocated to it<\/strong>. 
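The kind of utilization check that top performs can be approximated from two samples of the aggregate \u201ccpu\u201d line in /proc/stat; a minimal sketch (our illustration, with hypothetical sample values):

```python
def cpu_utilization(prev, curr):
    """Overall CPU utilization between two /proc/stat 'cpu' samples.

    prev/curr are the jiffy counters from the aggregate 'cpu' line:
    (user, nice, system, idle, iowait, irq, softirq, steal, ...).
    """
    d_total = sum(curr) - sum(prev)
    # Idle time is the idle + iowait columns.
    d_idle = (curr[3] + curr[4]) - (prev[3] + prev[4])
    return 1.0 - d_idle / d_total if d_total else 0.0

# Hypothetical samples: 6400 jiffies elapsed, 6000 of them idle.
prev = (0, 0, 0, 0, 0, 0, 0, 0)
curr = (400, 0, 0, 6000, 0, 0, 0, 0)
print(cpu_utilization(prev, curr))  # 0.0625, i.e. about 4 of 64 CPUs busy
```

A reading like this, sustained while the notebook runs, is what justifies the \u201cnot CPU-bound\u201d conclusion.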
Simply put, <strong class=\"mz gv\">this workload is not CPU-bound.<\/strong><\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*YXsntKLiontnkNhf\" width=\"700\" height=\"252\" loading=\"lazy\" role=\"presentation\"\/><\/figure>\n<p id=\"8e12\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">The next theory was that the network between the web browser UI (on the laptop) and the JupyterLab server was slow. To investigate, we <strong class=\"mz gv\">captured all the packets between the laptop and the server<\/strong> while running the Notebook and repeatedly pressing \u2018j\u2019 in the terminal.<\/p>\n<p id=\"0018\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">When the UI experienced delays, we observed a 5-second pause in packet transmission from server port 8888 to the laptop. Meanwhile,<strong class=\"mz gv\"> traffic from other ports, such as port 22 for SSH, remained unaffected<\/strong>. 
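Spotting such a pause in a capture boils down to scanning per-port packet timestamps for outlier gaps; a minimal sketch with hypothetical capture times (not the actual analysis code):

```python
def largest_gap(timestamps):
    """Largest silence (seconds) between consecutive packets of one flow."""
    ts = sorted(timestamps)
    return max((b - a for a, b in zip(ts, ts[1:])), default=0.0)

# Packets from port 8888 show a 5-second stall...
port_8888 = [0.00, 0.12, 0.25, 5.25, 5.31]
# ...while port 22 keeps flowing steadily.
port_22 = [0.00, 0.50, 1.00, 1.50, 2.00]

print(largest_gap(port_8888))  # 5.0
print(largest_gap(port_22))    # 0.5
```

Comparing flows this way is what isolates the stall to one application rather than the network path, since a network problem would stall both ports.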
This led us to conclude that the pause was caused by the application running on port 8888 (i.e., the JupyterLab process) rather than the network.<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*c660xBwF4XuCA8KN\" width=\"700\" height=\"115\" loading=\"lazy\" role=\"presentation\"\/><\/figure>\n<p id=\"b5d7\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">As previously mentioned, another strong piece of evidence proving the innocence of pystan was that we could reproduce the issue without it. By gradually stripping down the \u201cbad\u201d Notebook, we eventually arrived at a minimal snippet of code that reproduces the issue without any third-party dependencies or complex logic:<\/p>\n<pre class=\"pd pe pf pg ph ps pt pu bp pv bb bk\"><span id=\"d392\" class=\"pw nx gu pt b bg px py l pz qa\">import time\nimport os\nfrom multiprocessing import Process\n\nN = os.cpu_count()\n\ndef launch_worker(worker_id):\n    time.sleep(60)\n\nif __name__ == '__main__':\n    with open('\/root\/2GB_file', 'r') as file:\n        data = file.read()\n    processes = []\n    for i in range(N):\n        p = Process(target=launch_worker, args=(i,))\n        processes.append(p)\n        p.start()\n\n    for p in processes:\n        p.join()<\/span><\/pre>\n<p id=\"04dc\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">The code does only two things:<\/p>\n<ol class=\"\">\n<li id=\"2d46\" class=\"mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu qb qc qd 
bk\">Read a 2GB file into memory (the Workbench instance has 480GB of memory in total, so this memory usage is almost negligible).<\/li>\n<li id=\"9c35\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qb qc qd bk\">Start N processes, where N is the number of CPUs. The N processes do nothing but sleep.<\/li>\n<\/ol>\n<p id=\"9ea8\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">There is no question that this is the silliest piece of code I\u2019ve ever written. It is neither CPU bound nor memory bound. Yet <strong class=\"mz gv\">it can cause the JupyterLab UI to stall for as long as 10 seconds!<\/strong><\/p>\n<p id=\"24d9\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">There are a couple of interesting observations that raise several questions:<\/p>\n<ul class=\"\">\n<li id=\"bba4\" class=\"mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu qj qc qd bk\">We noticed that <strong class=\"mz gv\">both steps are required in order to reproduce the issue<\/strong>. If you don\u2019t read the 2GB file (which isn&#8217;t even used!), the issue is not reproducible. <strong class=\"mz gv\">Why could using 2GB out of 480GB of memory impact performance?<\/strong><\/li>\n<li id=\"3e91\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qj qc qd bk\"><strong class=\"mz gv\">When the UI delay occurs, the <em class=\"oz\">jupyter-lab<\/em> process CPU utilization spikes to 100%<\/strong>, hinting at contention on the single-threaded event loop in this process (event loop A in the diagram before). 
<strong class=\"mz gv\">What does the <em class=\"oz\">jupyter-lab<\/em> process need the CPU for, given that it isn&#8217;t the process that runs the Notebook?<\/strong><\/li>\n<li id=\"72c2\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qj qc qd bk\">The code runs in a Notebook, which means it runs in the <em class=\"oz\">ipykernel<\/em> process, which is a child process of the <em class=\"oz\">jupyter-lab<\/em> process. <strong class=\"mz gv\">How can anything that happens in a child process cause the parent process to have CPU contention?<\/strong><\/li>\n<li id=\"101d\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qj qc qd bk\">The workbench has 64 CPUs. But when we printed <em class=\"oz\">os.cpu_count()<\/em>, the output was 96. That means <strong class=\"mz gv\">the code starts more processes than the number of CPUs<\/strong>. <strong class=\"mz gv\">Why is that?<\/strong><\/li>\n<\/ul>\n<p id=\"9d59\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Let\u2019s answer the last question first. In fact, if you run the <em class=\"oz\">lscpu<\/em> and <em class=\"oz\">nproc<\/em> commands inside a Titus container, you will also see different results: the former gives you 96, which is the number of physical CPUs on the Titus agent, while the latter gives you 64, which is the number of virtual CPUs allocated to the container. This discrepancy is due to the lack of a \u201cCPU namespace\u201d in the Linux kernel, causing the number of physical CPUs to be leaked to the container when certain functions are called to get the CPU count. 
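The lscpu/nproc discrepancy is visible from Python too. A minimal sketch of a container-aware count, assuming (as on Linux) that the scheduler affinity mask reflects the container's CPU allocation, which is what nproc consults:

```python
import os

def effective_cpu_count():
    """CPU count that respects the container's allocation, not the host's."""
    # os.cpu_count() can leak the host's physical CPU count (96 here).
    host = os.cpu_count()
    try:
        # The affinity mask is what nproc reports (64 in the container).
        usable = len(os.sched_getaffinity(0))
    except AttributeError:
        # sched_getaffinity is not available everywhere (e.g. macOS);
        # fall back to the host count.
        usable = host
    return usable

print(effective_cpu_count())
```

Until a container-aware call is available, a fallback like this keeps `Process`-per-CPU fan-out from overshooting the container's actual allocation.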
The assumption here is that Python\u2019s <strong class=\"mz gv\"><em class=\"oz\">os.cpu_count()<\/em> uses the same function as the <em class=\"oz\">lscpu<\/em> command, causing it to get the CPU count of the host instead of the container<\/strong>. Python 3.13 has <a class=\"af nv\" href=\"https:\/\/docs.python.org\/3.13\/library\/os.html#os.process_cpu_count\" rel=\"noopener ugc nofollow\" target=\"_blank\">a new call that can be used to get the accurate CPU count<\/a>, but it\u2019s not GA\u2019ed yet.<\/p>\n<p id=\"ea87\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">It will be confirmed later that this inaccurate number of CPUs can be a contributing factor to the slowness.<\/p>\n<p id=\"b480\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">Next, we used <em class=\"oz\">py-spy<\/em> to profile the <em class=\"oz\">jupyter-lab<\/em> process. Note that we profiled the parent <em class=\"oz\">jupyter-lab <\/em>process, <strong class=\"mz gv\">not<\/strong> the <em class=\"oz\">ipykernel<\/em> child process that runs the repro code. 
The profiling result is as follows:<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\">\n<div role=\"button\" tabindex=\"0\" class=\"pj pk fj pl bh pm\">\n<div class=\"pa pb pr\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*ho2C4015Disa8aFv\" width=\"700\" height=\"433\" loading=\"lazy\" role=\"presentation\"\/><\/div>\n<\/div>\n<\/figure>\n<p id=\"55b0\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">As one can see, <strong class=\"mz gv\">a lot of CPU time (89%!) is spent in a function called <em class=\"oz\">__parse_smaps_rollup<\/em><\/strong>. In comparison, the terminal handler used only 0.47% of CPU time. From the stack trace, we see that <strong class=\"mz gv\">this function runs inside event loop A<\/strong>,<strong class=\"mz gv\"> so it can definitely cause the UI WebSocket events to be delayed<\/strong>.<\/p>\n<p id=\"fd28\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">The stack trace also shows that this function is ultimately called by a function used by a JupyterLab extension called <em class=\"oz\">jupyter_resource_usage<\/em>. <strong class=\"mz gv\">We then disabled this extension and restarted the <em class=\"oz\">jupyter-lab<\/em> process. As you may have guessed, we could no longer reproduce the slowness!<\/strong><\/p>\n<p id=\"5e8f\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">But our puzzle is not solved yet. Why does this extension cause the UI to slow down? 
Let\u2019s keep digging.<\/p>\n<p id=\"694f\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">From the name of the extension and the names of the other functions it calls, we can infer that this extension is used to get resource information such as CPU and memory usage. Examining the code, we see that this function call stack is triggered when the API endpoint <em class=\"oz\">\/metrics\/v1<\/em> is called from the UI. <strong class=\"mz gv\">The UI apparently calls this endpoint periodically<\/strong>, according to the network traffic tab in Chrome\u2019s Developer Tools.<\/p>\n<p id=\"5465\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Now let\u2019s look at the implementation, starting from the call <em class=\"oz\">get(jupyter_resource_usage\/api.py:42)<\/em>. The full code is <a class=\"af nv\" href=\"https:\/\/github.com\/jupyter-server\/jupyter-resource-usage\/blob\/6f15ef91d5c7e50853516b90b5e53b3913d2ed34\/jupyter_resource_usage\/api.py#L28\" rel=\"noopener ugc nofollow\" target=\"_blank\">here<\/a> and the key lines are shown below:<\/p>\n<pre class=\"pd pe pf pg ph ps pt pu bp pv bb bk\"><span id=\"e4f2\" class=\"pw nx gu pt b bg px py l pz qa\">cur_process = psutil.Process()<br\/>all_processes = [cur_process] + cur_process.children(recursive=True)<br\/><br\/>for p in all_processes:<br\/>    info = p.memory_full_info()<\/span><\/pre>\n<p id=\"1f1a\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Basically, it gets all child processes of the <em class=\"oz\">jupyter-lab<\/em> process recursively, including both the <em class=\"oz\">ipykernel<\/em> Notebook process and all processes created by the Notebook. 
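To make the cost structure concrete, here is a stdlib-only sketch of what such a recursive enumeration has to do under the hood (assuming a Linux \/proc filesystem; `descendant_pids` is a hypothetical helper, not the extension's actual code). The extension then reads per-process memory stats for every pid returned, so its work grows with the number of descendants.

```python
import os

def descendant_pids(root_pid):
    """Return root_pid plus all of its descendants by walking /proc.

    A stdlib-only stand-in for what psutil's Process.children(recursive=True)
    must do under the hood.
    """
    # Build a parent -> children map from every /proc/<pid>/stat file.
    children = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open("/proc/{}/stat".format(entry)) as f:
                # Format: "pid (comm) state ppid ..."; comm may itself contain
                # spaces or parentheses, so split after the last ")".
                fields = f.read().rsplit(")", 1)[1].split()
        except OSError:
            continue  # the process exited while we were scanning
        ppid = int(fields[1])
        children.setdefault(ppid, []).append(int(entry))

    # Breadth-first walk from the root; the caller then reads memory stats
    # for every pid collected here, so cost grows with this list's length.
    pids, frontier = [root_pid], [root_pid]
    while frontier:
        frontier = [c for parent in frontier for c in children.get(parent, [])]
        pids.extend(frontier)
    return pids

pids = descendant_pids(os.getpid())
```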
Obviously, <strong class=\"mz gv\">the cost of this function is linear in the number of all child processes<\/strong>. In the repro code, we create 96 processes. So here we will have at least 96 (sleep processes) + 1 (<em class=\"oz\">ipykernel<\/em> process) + 1 (<em class=\"oz\">jupyter-lab<\/em> process) = 98 processes, when it should really be 64 (allocated CPUs) + 1 (<em class=\"oz\">ipykernel<\/em> process) + 1 (<em class=\"oz\">jupyter-lab<\/em> process) = 66 processes, because the number of CPUs allocated to the container is, in fact, 64.<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\">\n<div role=\"button\" tabindex=\"0\" class=\"pj pk fj pl bh pm\">\n<div class=\"pa pb qk\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*sHTjycVMUk1yVAsk\" width=\"700\" height=\"326\" loading=\"lazy\" role=\"presentation\"\/><\/div>\n<\/div>\n<\/figure>\n<p id=\"6210\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">This is truly ironic. <strong class=\"mz gv\">The more CPUs we have, the slower we are!<\/strong><\/p>\n<p id=\"98c2\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">At this point, we have answered one question: <strong class=\"mz gv\">Why does starting many grandchild processes in the child process cause the parent process to be slow? <\/strong>Because the parent process runs a function whose cost is linear in the number of all child processes, counted recursively.<\/p>\n<p id=\"1f1f\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">However, this solves only half of the puzzle. 
If you remember the earlier analysis, <strong class=\"mz gv\">starting many child processes ALONE doesn\u2019t reproduce the issue<\/strong>. If we don\u2019t read the 2GB file, even if we create 2x more processes, we can\u2019t reproduce the slowness.<\/p>\n<p id=\"147b\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">So now we must answer the next question: <strong class=\"mz gv\">Why does reading a 2GB file in the child process affect the parent process\u2019s performance, <\/strong>especially when the workbench has as much as 480GB of memory in total?<\/p>\n<p id=\"49ac\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">To answer this question, let\u2019s look closely at the function <em class=\"oz\">__parse_smaps_rollup<\/em>. As the name implies, <a class=\"af nv\" href=\"https:\/\/github.com\/giampaolo\/psutil\/blob\/c034e6692cf736b5e87d14418a8153bb03f6cf42\/psutil\/_pslinux.py#L1978\" rel=\"noopener ugc nofollow\" target=\"_blank\">this function<\/a> parses the file <em class=\"oz\">\/proc\/&lt;pid&gt;\/smaps_rollup<\/em>.<\/p>\n<pre class=\"pd pe pf pg ph ps pt pu bp pv bb bk\"><span id=\"67a3\" class=\"pw nx gu pt b bg px py l pz qa\">def _parse_smaps_rollup(self):<br\/>    uss = pss = swap = 0<br\/>    with open_binary(\"{}\/{}\/smaps_rollup\".format(self._procfs_path, self.pid)) as f:<br\/>        for line in f:<br\/>            if line.startswith(b\"Private_\"):<br\/>                # Private_Clean, Private_Dirty, Private_Hugetlb<br\/>                uss += int(line.split()[1]) * 1024<br\/>            elif line.startswith(b\"Pss:\"):<br\/>                pss = int(line.split()[1]) * 1024<br\/>            elif line.startswith(b\"Swap:\"):<br\/>                swap = int(line.split()[1]) * 1024<br\/>    return (uss, pss, swap)<\/span><\/pre>\n<p id=\"6952\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Naturally, you might assume that when memory usage increases, this file becomes larger in size, causing the function to take longer to parse. Unfortunately, this isn&#8217;t the answer, because:<\/p>\n<ul class=\"\">\n<li id=\"2f67\" class=\"mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu qj qc qd bk\">First, <a class=\"af nv\" href=\"https:\/\/www.kernel.org\/doc\/Documentation\/ABI\/testing\/procfs-smaps_rollup\" rel=\"noopener ugc nofollow\" target=\"_blank\"><strong class=\"mz gv\">the number of lines in this file is constant<\/strong><\/a><strong class=\"mz gv\"> for all processes<\/strong>.<\/li>\n<li id=\"173a\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qj qc qd bk\">Second, <strong class=\"mz gv\">this is a special file in the \/proc filesystem, which should be seen as a kernel interface<\/strong> instead of a regular file on disk. In other words, <strong class=\"mz gv\">I\/O operations on this file are handled by the kernel rather than the disk<\/strong>.<\/li>\n<\/ul>\n<p id=\"5700\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">This file was introduced in <a class=\"af nv\" href=\"https:\/\/github.com\/torvalds\/linux\/commit\/493b0e9d945fa9dfe96be93ae41b4ca4b6fdb317#diff-cb79e2d6ea6f9627ff68d1342a219f800e04ff6c6fa7b90c7e66bb391b2dd3ee\" rel=\"noopener ugc nofollow\" target=\"_blank\">this commit<\/a> in 2017, with the purpose of improving the performance of user programs that determine aggregate memory statistics. 
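The first point is easy to check directly (a quick sketch, assuming Linux): the first line of the file is a header describing the combined address range, and the remaining lines are a fixed set of aggregated fields. Mapping more memory changes the values, not the number of lines.

```python
# Peek at /proc/self/smaps_rollup: one header line with the combined address
# range, followed by a fixed set of "Field: value kB" lines whose count does
# not depend on how much memory the process has mapped.
with open("/proc/self/smaps_rollup") as f:
    lines = f.read().splitlines()

header, stats = lines[0], lines[1:]
fields = [line.split(":")[0] for line in stats]
print(header)
print(fields)
```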
Let\u2019s first focus on <a class=\"af nv\" href=\"https:\/\/elixir.bootlin.com\/linux\/v6.5.13\/source\/fs\/proc\/task_mmu.c#L1025\" rel=\"noopener ugc nofollow\" target=\"_blank\">the handler of the <em class=\"oz\">open<\/em> syscall<\/a> for this <em class=\"oz\">\/proc\/&lt;pid&gt;\/smaps_rollup<\/em> file.<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\">\n<div role=\"button\" tabindex=\"0\" class=\"pj pk fj pl bh pm\">\n<div class=\"pa pb ql\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/0*vGOD79Tleii7X22B\" width=\"700\" height=\"579\" loading=\"lazy\" role=\"presentation\"\/><\/div>\n<\/div>\n<\/figure>\n<p id=\"52cf\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Following through the <em class=\"oz\">single_open<\/em> <a class=\"af nv\" href=\"https:\/\/elixir.bootlin.com\/linux\/v6.5.13\/source\/fs\/seq_file.c#L582\" rel=\"noopener ugc nofollow\" target=\"_blank\">function<\/a>, we&#8217;ll find that it uses the function <em class=\"oz\">show_smaps_rollup<\/em> for the show operation, which translates to the <em class=\"oz\">read<\/em> system call on the file. Next, let&#8217;s look at the <em class=\"oz\">show_smaps_rollup<\/em> <a class=\"af nv\" href=\"https:\/\/elixir.bootlin.com\/linux\/v6.5.13\/source\/fs\/proc\/task_mmu.c#L916\" rel=\"noopener ugc nofollow\" target=\"_blank\">implementation<\/a>. 
You will find <strong class=\"mz gv\">a do-while loop that iterates over every virtual memory area (VMA)<\/strong>.<\/p>\n<pre class=\"pd pe pf pg ph ps pt pu bp pv bb bk\"><span id=\"0e83\" class=\"pw nx gu pt b bg px py l pz qa\">static int show_smaps_rollup(struct seq_file *m, void *v) {<br\/>    \u2026<br\/>    vma_start = vma-&gt;vm_start;<br\/>    do {<br\/>        smap_gather_stats(vma, &amp;mss, 0);<br\/>        last_vma_end = vma-&gt;vm_end;<br\/>        \u2026<br\/>    } for_each_vma(vmi, vma);<br\/>    \u2026<br\/>}<\/span><\/pre>\n<p id=\"976c\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">This perfectly <strong class=\"mz gv\">explains why the function gets slower when a 2GB file is read into memory<\/strong>: <strong class=\"mz gv\">the handler for reading the <em class=\"oz\">smaps_rollup<\/em> file now takes longer to run the loop<\/strong>. Basically, although <strong class=\"mz gv\"><em class=\"oz\">smaps_rollup<\/em><\/strong> already improved the performance of getting memory information compared to the old method of parsing the <em class=\"oz\">\/proc\/&lt;pid&gt;\/smaps<\/em> file, <strong class=\"mz gv\">it is still linear in the virtual memory used<\/strong>.<\/p>\n<p id=\"3a6e\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">Even though at this point the puzzle is solved, let\u2019s conduct a more quantitative analysis. How big is the time difference when reading the <em class=\"oz\">smaps_rollup<\/em> file with small versus large virtual memory usage? 
Let\u2019s write some simple benchmark code like the one below:<\/p>\n<pre class=\"pd pe pf pg ph ps pt pu bp pv bb bk\"><span id=\"964f\" class=\"pw nx gu pt b bg px py l pz qa\">import os<br\/><br\/>def read_smaps_rollup(pid):<br\/>    with open(\"\/proc\/{}\/smaps_rollup\".format(pid), \"rb\") as f:<br\/>        for line in f:<br\/>            pass<br\/><br\/>if __name__ == \"__main__\":<br\/>    pid = os.getpid()<br\/><br\/>    read_smaps_rollup(pid)<br\/><br\/>    with open(\"\/root\/2G_file\", \"rb\") as f:<br\/>        data = f.read()<br\/><br\/>    read_smaps_rollup(pid)<\/span><\/pre>\n<p id=\"56c3\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">This program performs the following steps:<\/p>\n<ol class=\"\">\n<li id=\"d3b3\" class=\"mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu qb qc qd bk\">Reads the <em class=\"oz\">smaps_rollup<\/em> file of the current process.<\/li>\n<li id=\"2032\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qb qc qd bk\">Reads a 2GB file into memory.<\/li>\n<li id=\"7966\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qb qc qd bk\">Repeats step 1.<\/li>\n<\/ol>\n<p id=\"12da\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">We then use <em class=\"oz\">strace<\/em> to find the precise time spent reading the <em class=\"oz\">smaps_rollup<\/em> file.<\/p>\n<pre class=\"pd pe pf pg ph ps pt pu bp pv bb bk\"><span id=\"a221\" class=\"pw nx gu pt b bg px py l pz qa\">$ sudo strace -T -e trace=openat,read python3 benchmark.py 2&gt;&amp;1 | grep \"smaps_rollup\" -A 1<p>openat(AT_FDCWD, \"\/proc\/3107492\/smaps_rollup\", O_RDONLY|O_CLOEXEC) = 3 &lt;0.000023&gt;<br\/>read(3, \"560b42ed4000-7ffdadcef000 ---p 0\"..., 1024) = 670 &lt;0.000259&gt;<br\/>...<br\/>openat(AT_FDCWD, 
\"\/proc\/3107492\/smaps_rollup\", O_RDONLY|O_CLOEXEC) = 3 &lt;0.000029&gt;<br\/>read(3, \"560b42ed4000-7ffdadcef000 ---p 0\"..., 1024) = 670 &lt;0.027698&gt;<\/p><\/span><\/pre>\n<p id=\"2e29\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">As you can see, both times the <em class=\"oz\">read<\/em> syscall returned 670, meaning the file size remained the same at 670 bytes. However, <strong class=\"mz gv\">the second read took 0.027698 seconds, about 100x the 0.000259 seconds of the first!<\/strong> This means that if there are 98 processes, the time spent reading this file alone will be 98 * 0.027698 = 2.7 seconds! Such a delay can significantly affect the UI experience.<\/p>\n<p id=\"9ac7\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">This extension is used to display the CPU and memory usage of the notebook process in the bar at the bottom of the Notebook:<\/p>\n<figure class=\"pd pe pf pg ph pi pa pb paragraph-image\">\n<div class=\"pa pb qm\"><img alt=\"\" class=\"bh me pn c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1048\/0*bNYMYTc5QQAxLyya\" width=\"524\" height=\"33\" loading=\"lazy\" role=\"presentation\"\/><\/div>\n<\/figure>\n<p id=\"0389\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">We confirmed with the user that disabling the <em class=\"oz\">jupyter-resource-usage<\/em> extension meets their requirements for UI responsiveness, and that this extension is not essential to their use case. 
Therefore, we provided a way for them to disable the extension.<\/p>\n<p id=\"5cb4\" class=\"pw-post-body-paragraph mx my gu mz b na ou nc nd ne ov ng nh ni ow nk nl nm ox no np nq oy ns nt nu gn bk\">This was a challenging issue that required debugging from the UI all the way down to the Linux kernel. It is fascinating that the problem is linear in both the number of CPUs and the virtual memory size, two dimensions that are typically considered separately.<\/p>\n<p id=\"dde1\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">Overall, we hope you enjoyed the irony of:<\/p>\n<ol class=\"\">\n<li id=\"b7b8\" class=\"mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu qb qc qd bk\">The extension used to monitor CPU usage causing CPU contention.<\/li>\n<li id=\"93e2\" class=\"mx my gu mz b na qe nc nd ne qf ng nh ni qg nk nl nm qh no np nq qi ns nt nu qb qc qd bk\">An interesting case where the more CPUs you have, the slower you get!<\/li>\n<\/ol>\n<p id=\"dfe6\" class=\"pw-post-body-paragraph mx my gu mz b na nb nc nd ne nf ng nh ni nj nk nl nm nn no np nq nr ns nt nu gn bk\">If you are interested in tackling such technical challenges, solving complex problems, and driving innovation, consider joining our <a class=\"af nv\" href=\"https:\/\/explore.jobs.netflix.net\/careers?query=Data+Platform&amp;pid=790298020581&amp;domain=netflix.com&amp;sort_by=relevance\" rel=\"noopener ugc nofollow\" target=\"_blank\">Data Platform team<\/a>. Be a part of shaping the future of Data Security and Infrastructure, Data Developer Experience, Analytics Infrastructure and Enablement, and more. 
Explore the impact you can make with us!<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>12 min read \u00b7 Oct 14, 2024 By: Hechao Li and Marcelo Mayworm With special thanks to our stunning colleagues Amer Ather, Itay Dafna, Luca Pozzi, Matheus Le\u00e3o, and Ye Ji. At Netflix, the Analytics and Developer Experience group, part of the Data Platform, offers a product called Workbench. Workbench is a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":133936,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[955,2502,5952,6594,115,6595,4337,6593],"class_list":{"0":"post-133934","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-netflix","8":"tag-blog","9":"tag-investigation","10":"tag-issue","11":"tag-latency","12":"tag-netflix","13":"tag-oct","14":"tag-technology","15":"tag-workbench"},"_links":{"self":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/posts\/133934","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/comments?post=133934"}],"version-history":[{"count":0,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/posts\/133934\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/media\/133936"}],"wp:attachment":[{"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/media?parent=133934"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-
json\/wp\/v2\/categories?post=133934"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/showbizztoday.com\/index.php\/wp-json\/wp\/v2\/tags?post=133934"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}