{"id":132928,"date":"2024-08-04T10:53:44","date_gmt":"2024-08-04T10:53:44","guid":{"rendered":"https:\/\/showbizztoday.com\/index.php\/2024\/08\/04\/java-21-virtual-threads-dude-wheres-my-lock-by-netflix-technology-blog-jul-2024\/"},"modified":"2024-08-04T10:53:45","modified_gmt":"2024-08-04T10:53:45","slug":"java-21-virtual-threads-dude-wheres-my-lock-by-netflix-technology-blog-jul-2024","status":"publish","type":"post","link":"https:\/\/showbizztoday.com\/index.php\/2024\/08\/04\/java-21-virtual-threads-dude-wheres-my-lock-by-netflix-technology-blog-jul-2024\/","title":{"rendered":"Java 21 Virtual Threads &#8211; Dude, Where\u2019s My Lock? | by Netflix Technology Blog | Jul, 2024"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div>\n<div>\n<h2 id=\"6f13\" class=\"pw-subtitle-paragraph hq gs gt be b hr hs ht hu hv hw hx hy hz ia ib ic id ie if cp dt\">Getting actual with digital threads<\/h2>\n<div>\n<div class=\"speechify-ignore ab co\">\n<div class=\"speechify-ignore bg l\">\n<div class=\"ig ih ii ij ik ab\">\n<div>\n<div class=\"ab il\"><a href=\"https:\/\/netflixtechblog.medium.com\/?source=post_page-----3052540e231d--------------------------------\" rel=\"noopener follow\" target=\"_blank\"><\/p>\n<div>\n<div class=\"bl\" aria-hidden=\"false\">\n<div class=\"l im in bx io ip\">\n<div class=\"l fi\"><img decoding=\"async\" alt=\"Netflix Technology Blog\" class=\"l fc bx dc dd cw\" src=\"https:\/\/miro.medium.com\/v2\/resize:fill:88:88\/1*BJWRqfSMf9Da9vsXG9EBRQ.jpeg\" width=\"44\" height=\"44\" loading=\"lazy\" data-testid=\"authorPhoto\"\/><\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><\/a><a href=\"https:\/\/netflixtechblog.com\/?source=post_page-----3052540e231d--------------------------------\" rel=\"noopener  ugc nofollow\" target=\"_blank\"><\/p>\n<div class=\"is ab fi\">\n<div>\n<div class=\"bl\" aria-hidden=\"false\">\n<div class=\"l it iu bx io iv\">\n<div class=\"l fi\"><img decoding=\"async\" alt=\"Netflix TechBlog\" class=\"l fc bx bq iw cw\" 
src=\"https:\/\/miro.medium.com\/v2\/resize:fill:48:48\/1*ty4NvNrGg4ReETxqU2N3Og.png\" width=\"24\" height=\"24\" loading=\"lazy\" data-testid=\"publicationPhoto\"\/><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p id=\"5713\" class=\"pw-post-body-paragraph ni nj gt nk b hr nl nm nn hu no np nq nr ns nt nu nv nw nx ny nz oa ob oc od gm bj\">By <a class=\"af oe\" href=\"https:\/\/www.linkedin.com\/in\/vfilanovsky\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Vadim Filanovsky<\/a>, <a class=\"af oe\" href=\"https:\/\/www.linkedin.com\/in\/mike-huang-a552781\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Mike Huang<\/a>, <a class=\"af oe\" href=\"https:\/\/www.linkedin.com\/in\/danny-thomas-a623413\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Danny Thomas<\/a> and <a class=\"af oe\" href=\"https:\/\/www.linkedin.com\/in\/martinchalupa\/\" rel=\"noopener ugc nofollow\" target=\"_blank\">Martin Chalupa<\/a><\/p>\n<p id=\"2e2e\" class=\"pw-post-body-paragraph ni nj gt nk b hr pb nm nn hu pc np nq nr pd nt nu nv pe nx ny nz pf ob oc od gm bj\">Netflix has an intensive historical past of utilizing Java as our major programming language throughout our huge fleet of microservices. As we decide up newer variations of Java, our JVM Ecosystem workforce seeks out new language options that may enhance the ergonomics and efficiency of our techniques. In a <a class=\"af oe\" rel=\"noopener ugc nofollow\" target=\"_blank\" href=\"https:\/\/netflixtechblog.com\/bending-pause-times-to-your-will-with-generational-zgc-256629c9386b\">current article<\/a>, we detailed how our workloads benefited from switching to generational ZGC as our default rubbish collector once we migrated to Java 21. 
Virtual threads are another feature we are excited to adopt as part of this migration.</p>
<p>For those new to virtual threads, <a href="https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html">they are described</a> as “lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications.” Their power comes from their ability to be suspended and resumed automatically via continuations when blocking operations occur, thus freeing the underlying operating system threads to be reused for other operations. Leveraging virtual threads can unlock higher performance when applied in the appropriate context.</p>
<p>In this article we discuss one of the peculiar cases that we encountered along our path to deploying virtual threads on Java 21.</p>
<p>Netflix engineers raised several independent reports of intermittent timeouts and hung instances to the Performance Engineering and JVM Ecosystem teams. Upon closer examination, we noticed a set of common traits and symptoms. In all cases, the affected apps ran on Java 21 with SpringBoot 3 and embedded Tomcat serving traffic on REST endpoints. The instances that experienced the issue simply stopped serving traffic even though the JVM on those instances remained up and running. 
One clear symptom characterizing the onset of this issue is a persistent increase in the number of sockets in <code>closeWait</code> state, as illustrated by the graph below:</p>
<figure><img alt="Graph showing a persistent increase in the number of sockets in closeWait state" src="https://miro.medium.com/v2/resize:fit:1400/1*b5oZiN2Ew96GEeZ9oIIhPA.png" width="700" height="365" loading="lazy" /></figure>
<p>Sockets remaining in <code>closeWait</code> state indicate that the remote peer closed the socket, but it was never closed on the local instance, presumably because the application failed to do so. This can often indicate that the application is hanging in an abnormal state, in which case application thread dumps may reveal additional insight.</p>
<p>In order to troubleshoot this issue, we first leveraged our <a href="https://netflixtechblog.com/improved-alerting-with-atlas-streaming-eval-e691c60dc61e">alerts system</a> to catch an instance in this state. 
Since we periodically collect and persist thread dumps for all JVM workloads, we can often retroactively piece together the behavior by examining these thread dumps from an instance. However, we were surprised to find that all our thread dumps showed a perfectly idle JVM with no clear activity. Reviewing recent changes revealed that these impacted services had enabled virtual threads, and we knew that virtual thread call stacks do not show up in <code>jstack</code>-generated thread dumps. To obtain a more complete thread dump containing the state of the virtual threads, we used the “<code>jcmd Thread.dump_to_file</code>” command instead. As a last-ditch effort to introspect the state of the JVM, we also collected a heap dump from the instance.</p>
<p>Thread dumps revealed thousands of “blank” virtual threads:</p>
<pre>#119821 "" virtual
#119820 "" virtual
#119823 "" virtual
#120847 "" virtual
#119822 "" virtual
...</pre>
<p>These are the VTs (virtual threads) for which a thread object is created, but which have not started running, and as such, have no stack trace. In fact, there were roughly the same number of blank VTs as the number of sockets in <code>closeWait</code> state. 
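</p>
<p>Such “blank” entries correspond to <code>Thread</code> objects that exist but have never been scheduled to run. A minimal sketch of that state (hypothetical class name; requires Java 21):</p>

```java
// A virtual Thread object that was created but never started has no stack
// trace, which is exactly how a "blank" VT appears in a thread dump.
public class BlankVirtualThread {
    public static void main(String[] args) {
        Thread blank = Thread.ofVirtual().name("").unstarted(() -> {});
        // An unstarted thread reports an empty stack trace and state NEW.
        System.out.println("stack frames: " + blank.getStackTrace().length); // 0
        System.out.println("state: " + blank.getState()); // NEW
    }
}
```

<p>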
To make sense of what we were seeing, we first need to understand how VTs operate.</p>
<p>A virtual thread is not mapped 1:1 to a dedicated OS-level thread. Rather, we can think of it as a task that is scheduled to a fork-join thread pool. When a virtual thread enters a blocking call, like waiting for a <code>Future</code>, it relinquishes the OS thread it occupies and simply remains in memory until it is ready to resume. In the meantime, the OS thread can be reassigned to execute other VTs within the same fork-join pool. This allows us to multiplex a lot of VTs onto just a handful of underlying OS threads. In JVM terminology, the underlying OS thread is referred to as the “carrier thread,” to which a virtual thread can be “mounted” while it executes and “unmounted” while it waits. A great in-depth description of virtual threads is available in <a href="https://openjdk.org/jeps/444">JEP 444</a>.</p>
<p>In our environment, we utilize a blocking model for Tomcat, which in effect holds a worker thread for the lifespan of a request. By enabling virtual threads, Tomcat switches to virtual execution. Each incoming request creates a new virtual thread that is simply scheduled as a task on a <a href="https://github.com/apache/tomcat/blob/10.1.24/java/org/apache/tomcat/util/threads/VirtualThreadExecutor.java">Virtual Thread Executor</a>. 
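</p>
<p>This per-request scheduling can be sketched with the JDK’s built-in virtual-thread-per-task executor, a simplified stand-in for Tomcat’s executor (hypothetical class name; requires Java 21):</p>

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified stand-in for the per-request model described above: each
// submitted task runs on its own virtual thread, multiplexed over a shared
// fork-join pool of carrier threads.
public class PerRequestSketch {
    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 3; i++) {
                final int requestId = i;
                executor.submit(() -> {
                    // Each "request" gets a fresh virtual thread.
                    System.out.println("request " + requestId + " handled by " + Thread.currentThread());
                });
            }
        } // close() implicitly waits for submitted tasks to finish
    }
}
```

<p>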
We can see Tomcat creates a <code>VirtualThreadExecutor</code> <a href="https://github.com/apache/tomcat/blob/10.1.24/java/org/apache/tomcat/util/net/AbstractEndpoint.java#L1070-L1071">here</a>.</p>
<p>Tying this information back to our problem, the symptoms correspond to a state where Tomcat keeps creating a new web worker VT for each incoming request, but there are no available OS threads to mount them onto.</p>
<p>What happened to our OS threads, and what are they busy with? As <a href="https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html#GUID-04C03FFC-066D-4857-85B9-E5A27A875AF9">described here</a>, a VT will be pinned to the underlying OS thread if it performs a blocking operation while inside a <code>synchronized</code> block or method. This is exactly what is happening here. 
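</p>
<p>The pinning behavior is easy to reproduce in isolation. The following sketch (hypothetical class name) blocks inside a <code>synchronized</code> block on a virtual thread; running it on Java 21 with <code>-Djdk.tracePinnedThreads=full</code> prints the stack that caused the pinning:</p>

```java
// Minimal repro of pinning: a virtual thread that blocks while holding a
// monitor stays mounted on its carrier OS thread until it exits the
// synchronized block.
public class PinningSketch {
    private static final Object MONITOR = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (MONITOR) {
                try {
                    // Blocking inside synchronized => the VT cannot unmount,
                    // so its carrier thread is pinned for the whole sleep.
                    Thread.sleep(1_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
    }
}
```

<p>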
Here is a relevant snippet from a thread dump obtained from the stuck instance:</p>
<pre>#119515 "" virtual
java.base/jdk.internal.misc.Unsafe.park(Native Method)
java.base/java.lang.VirtualThread.parkOnCarrierThread(VirtualThread.java:661)
java.base/java.lang.VirtualThread.park(VirtualThread.java:593)
java.base/java.lang.System$2.parkVirtualThread(System.java:2643)
java.base/jdk.internal.misc.VirtualThreads.park(VirtualThreads.java:54)
java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:219)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:754)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:990)
java.base/java.util.concurrent.locks.ReentrantLock$Sync.lock(ReentrantLock.java:153)
java.base/java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:322)
zipkin2.reporter.internal.CountBoundedQueue.offer(CountBoundedQueue.java:54)
zipkin2.reporter.internal.AsyncReporter$BoundedAsyncReporter.report(AsyncReporter.java:230)
zipkin2.reporter.brave.AsyncZipkinSpanHandler.end(AsyncZipkinSpanHandler.java:214)
brave.internal.handler.NoopAwareSpanHandler$CompositeSpanHandler.end(NoopAwareSpanHandler.java:98)
brave.internal.handler.NoopAwareSpanHandler.end(NoopAwareSpanHandler.java:48)
brave.internal.recorder.PendingSpans.finish(PendingSpans.java:116)
brave.RealSpan.finish(RealSpan.java:134)
brave.RealSpan.finish(RealSpan.java:129)
io.micrometer.tracing.brave.bridge.BraveSpan.end(BraveSpan.java:117)
io.micrometer.tracing.annotation.AbstractMethodInvocationProcessor.after(AbstractMethodInvocationProcessor.java:67)
io.micrometer.tracing.annotation.ImperativeMethodInvocationProcessor.proceedUnderSynchronousSpan(ImperativeMethodInvocationProcessor.java:98)
io.micrometer.tracing.annotation.ImperativeMethodInvocationProcessor.process(ImperativeMethodInvocationProcessor.java:73)
io.micrometer.tracing.annotation.SpanAspect.newSpanMethod(SpanAspect.java:59)
java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
java.base/java.lang.reflect.Method.invoke(Method.java:580)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:637)
...</pre>
<p>In this stack trace, we enter the synchronization in <code>brave.RealSpan.finish(<a href="https://github.com/openzipkin/brave/blob/6.0.3/brave/src/main/java/brave/RealSpan.java#L134">RealSpan.java:134</a>)</code>. This virtual thread is effectively pinned: it is mounted to an actual OS thread even while it waits to acquire a reentrant lock. There are 3 VTs in this exact state and another VT identified as “<code>&lt;redacted&gt; @DefaultExecutor - 46542</code>” that also follows the same code path. These 4 virtual threads are pinned while waiting to acquire a lock. Because the app is deployed on an instance with 4 vCPUs, <a href="https://github.com/openjdk/jdk21u/blob/jdk-21.0.3-ga/src/java.base/share/classes/java/lang/VirtualThread.java#L1102-L1134">the fork-join pool that underpins VT execution</a> also contains 4 OS threads. Now that we have exhausted all of them, no other virtual thread can make any progress. 
This explains why Tomcat stopped processing requests and why the number of sockets in <code>closeWait</code> state keeps climbing. Indeed, Tomcat accepts a connection on a socket, creates a request along with a virtual thread, and passes this request/thread to the executor for processing. However, the newly created VT cannot be scheduled because all of the OS threads in the fork-join pool are pinned and never released. So these newly created VTs are stuck in the queue, while still holding the socket.</p>
<p>Now that we know VTs are waiting to acquire a lock, the next question is: Who holds the lock? Answering this question is key to understanding what triggered this condition in the first place. Usually a thread dump indicates who holds the lock with either “<code>- locked &lt;0x…&gt; (at …)</code>” or “<code>Locked ownable synchronizers</code>,” but neither of these appear in our thread dumps. As a matter of fact, no locking/parking/waiting information is included in <code>jcmd</code>-generated thread dumps. This is a limitation in Java 21 and will be addressed in future releases. Carefully combing through the thread dump reveals that there are a total of 6 threads contending for the same <code>ReentrantLock</code> and associated <code>Condition</code>. Four of these six threads are detailed in the previous section. 
Here is another thread:</p>
<pre>#119516 "" virtual
java.base/java.lang.VirtualThread.park(VirtualThread.java:582)
java.base/java.lang.System$2.parkVirtualThread(System.java:2643)
java.base/jdk.internal.misc.VirtualThreads.park(VirtualThreads.java:54)
java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:219)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:754)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:990)
java.base/java.util.concurrent.locks.ReentrantLock$Sync.lock(ReentrantLock.java:153)
java.base/java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:322)
zipkin2.reporter.internal.CountBoundedQueue.offer(CountBoundedQueue.java:54)
zipkin2.reporter.internal.AsyncReporter$BoundedAsyncReporter.report(AsyncReporter.java:230)
zipkin2.reporter.brave.AsyncZipkinSpanHandler.end(AsyncZipkinSpanHandler.java:214)
brave.internal.handler.NoopAwareSpanHandler$CompositeSpanHandler.end(NoopAwareSpanHandler.java:98)
brave.internal.handler.NoopAwareSpanHandler.end(NoopAwareSpanHandler.java:48)
brave.internal.recorder.PendingSpans.finish(PendingSpans.java:116)
brave.RealScopedSpan.finish(RealScopedSpan.java:64)
...</pre>
<p>Note that while this thread seemingly goes through the same code path for finishing a span, it does not go through a <code>synchronized</code> block. 
Finally, here is the sixth thread:</p>
<pre>#107 "AsyncReporter &lt;redacted&gt;"
java.base/jdk.internal.misc.Unsafe.park(Native Method)
java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:221)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:754)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1761)
zipkin2.reporter.internal.CountBoundedQueue.drainTo(CountBoundedQueue.java:81)
zipkin2.reporter.internal.AsyncReporter$BoundedAsyncReporter.flush(AsyncReporter.java:241)
zipkin2.reporter.internal.AsyncReporter$Flusher.run(AsyncReporter.java:352)
java.base/java.lang.Thread.run(Thread.java:1583)</pre>
<p>This is actually a regular platform thread, not a virtual thread. Paying particular attention to the line numbers in this stack trace, it is peculiar that the thread seems to be blocked within the internal <code>acquire()</code> method <em>after</em> <a href="https://github.com/openjdk/jdk21u/blob/jdk-21.0.3-ga/src/java.base/share/classes/java/util/concurrent/locks/AbstractQueuedSynchronizer.java#L1761">completing the wait</a>. In other words, this calling thread owned the lock upon entering <code>awaitNanos()</code>. 
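</p>
<p>This is standard <code>Condition</code> semantics: <code>awaitNanos()</code> atomically releases the lock for the duration of the wait and must reacquire it before returning, even when the wait itself times out. A minimal sketch of that contract (hypothetical class name, not the zipkin-reporter code):</p>

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the Condition contract: the lock is released while waiting in
// awaitNanos() and is guaranteed to be held again when the call returns,
// which means a timed-out waiter can still block behind other contenders.
public class AwaitReacquireSketch {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition hasWork = lock.newCondition();

        lock.lock();
        try {
            // Wait up to 50 ms; the lock is free for other threads meanwhile.
            hasWork.awaitNanos(TimeUnit.MILLISECONDS.toNanos(50));
            // On return, the lock has been reacquired by this thread.
            System.out.println("held after awaitNanos: " + lock.isHeldByCurrentThread());
        } finally {
            lock.unlock();
        }
    }
}
```

<p>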
We know the lock was explicitly acquired <a href="https://github.com/openzipkin/zipkin-reporter-java/blob/3.4.0/core/src/main/java/zipkin2/reporter/internal/CountBoundedQueue.java#L76">here</a>. However, by the time the wait completed, it could not reacquire the lock. Summarizing our thread dump analysis:</p>
<p>There are 5 virtual threads and 1 regular thread waiting for the lock. Of those 5 VTs, 4 are pinned to the OS threads in the fork-join pool. There is still no information on who owns the lock. As there is nothing more we can glean from the thread dump, our next logical step is to peek into the heap dump and introspect the state of the lock.</p>
<p>Finding the lock in the heap dump was relatively straightforward. Using the excellent <a href="https://eclipse.dev/mat/">Eclipse MAT</a> tool, we examined the objects on the stack of the <code>AsyncReporter</code> non-virtual thread to identify the lock object. Reasoning about the current state of the lock was perhaps the trickiest part of our investigation. Most of the relevant code can be found in <a href="https://github.com/openjdk/jdk21u/blob/jdk-21.0.3-ga/src/java.base/share/classes/java/util/concurrent/locks/AbstractQueuedSynchronizer.java">AbstractQueuedSynchronizer.java</a>. 
While we don’t claim to fully understand its inner workings, we reverse-engineered enough of it to match against what we see in the heap dump. This diagram illustrates our findings:</p>