Debugging a FUSE deadlock in the Linux kernel | by Netflix Technology Blog | May, 2023



Tycho Andersen

The Compute team at Netflix is charged with managing all AWS and containerized workloads at Netflix, including autoscaling, deployment of containers, issue remediation, etc. As part of this team, I work on fixing strange things that users report.

This particular issue involved a custom internal FUSE filesystem: ndrive. It had been festering for a while, but needed someone to sit down and look at it in anger. This blog post describes how I poked at /proc to get a sense of what was going on, before posting the issue to the kernel mailing list and getting schooled on how the kernel's wait code actually works!

We had a stuck docker API call:

goroutine 146 [select, 8817 minutes]:
net/http.(*persistConn).roundTrip(0xc000658fc0, 0xc0003fc080, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2610 +0x765
net/http.(*Transport).roundTrip(0xc000420140, 0xc000966200, 0x30, 0x1366f20, 0x162)
/usr/local/go/src/net/http/transport.go:592 +0xacb
net/http.(*Transport).RoundTrip(0xc000420140, 0xc000966200, 0xc000420140, 0x0, 0x0)
/usr/local/go/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc000966200, 0x161eba0, 0xc000420140, 0x0, 0x0, 0x0, 0xc00000e050, 0x3, 0x1, 0x0)
/usr/local/go/src/net/http/client.go:251 +0x454
net/http.(*Client).send(0xc000438480, 0xc000966200, 0x0, 0x0, 0x0, 0xc00000e050, 0x0, 0x1, 0x10000168e)
/usr/local/go/src/net/http/client.go:175 +0xff
net/http.(*Client).do(0xc000438480, 0xc000966200, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/client.go:717 +0x45f
net/http.(*Client).Do(...)
/usr/local/go/src/net/http/client.go:585
golang.org/x/net/context/ctxhttp.Do(0x163bd48, 0xc000044090, 0xc000438480, 0xc000966100, 0x0, 0x0, 0x0)
/go/pkg/mod/golang.org/x/net@v0.0.0-20211209124913-491a49abca63/context/ctxhttp/ctxhttp.go:27 +0x10f
github.com/docker/docker/client.(*Client).doRequest(0xc0001a8200, 0x163bd48, 0xc000044090, 0xc000966100, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/moby/moby@v0.0.0-20190408150954-50ebe4562dfc/client/request.go:132 +0xbe
github.com/docker/docker/client.(*Client).sendRequest(0xc0001a8200, 0x163bd48, 0xc000044090, 0x13d8643, 0x3, 0xc00079a720, 0x51, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/moby/moby@v0.0.0-20190408150954-50ebe4562dfc/client/request.go:122 +0x156
github.com/docker/docker/client.(*Client).get(...)
/go/pkg/mod/github.com/moby/moby@v0.0.0-20190408150954-50ebe4562dfc/client/request.go:37
github.com/docker/docker/client.(*Client).ContainerInspect(0xc0001a8200, 0x163bd48, 0xc000044090, 0xc0006a01c0, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/moby/moby@v0.0.0-20190408150954-50ebe4562dfc/client/container_inspect.go:18 +0x128
github.com/Netflix/titus-executor/executor/runtime/docker.(*DockerRuntime).Kill(0xc000215180, 0x163bdb8, 0xc000938600, 0x1, 0x0, 0x0)
/var/lib/buildkite-agent/builds/ip-192-168-1-90-1/netflix/titus-executor/executor/runtime/docker/docker.go:2835 +0x310
github.com/Netflix/titus-executor/executor/runner.(*Runner).doShutdown(0xc000432dc0, 0x163bd10, 0xc000938390, 0x1, 0xc000b821e0, 0x1d, 0xc0005e4710)
/var/lib/buildkite-agent/builds/ip-192-168-1-90-1/netflix/titus-executor/executor/runner/runner.go:326 +0x4f4
github.com/Netflix/titus-executor/executor/runner.(*Runner).startRunner(0xc000432dc0, 0x163bdb8, 0xc00071e0c0, 0xc0a502e28c08b488, 0x24572b8, 0x1df5980)
/var/lib/buildkite-agent/builds/ip-192-168-1-90-1/netflix/titus-executor/executor/runner/runner.go:122 +0x391
created by github.com/Netflix/titus-executor/executor/runner.StartTaskWithRuntime
/var/lib/buildkite-agent/builds/ip-192-168-1-90-1/netflix/titus-executor/executor/runner/runner.go:81 +0x411

Here, our management engine has made an HTTP call to the Docker API's unix socket asking it to kill a container. Our containers are configured to be killed via SIGKILL. But this is strange. kill(SIGKILL) should be relatively fatal, so what is the container doing?

$ docker exec -it 6643cd073492 bash
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: process_linux.go:130: executing setns process caused: exit status 1: unknown

Hmm. Seems like it's alive, but setns(2) fails. Why would that be? If we look at the process tree via ps awwfux, we see:

_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/6643cd073492ba9166100ed30dbe389ff1caef0dc3d35
| _ [docker-init]
| _ [ndrive] <defunct>

Ok, so the container's init process is still alive, but it has one zombie child. What could the container's init process possibly be doing?

# cat /proc/1528591/stack
[<0>] do_wait+0x156/0x2f0
[<0>] kernel_wait4+0x8d/0x140
[<0>] zap_pid_ns_processes+0x104/0x180
[<0>] do_exit+0xa41/0xb80
[<0>] do_group_exit+0x3a/0xa0
[<0>] __x64_sys_exit_group+0x14/0x20
[<0>] do_syscall_64+0x37/0xb0
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae

It is in the process of exiting, but it seems stuck. The only child is the ndrive process in Z (i.e. "zombie") state, though. Zombies are processes that have successfully exited, and are waiting to be reaped by a corresponding wait() syscall from their parents. So how could the kernel be stuck waiting on a zombie?
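As an aside, a tiny userspace sketch (my own illustration, not from this investigation) shows that zombie/reap lifecycle: the child exits immediately, but stays in Z state until the parent finally calls waitpid() on it.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child < 0) {
		perror("fork");
		return 1;
	}
	if (child == 0)
		_exit(0); /* the child exits right away... */

	/* ...but shows up with state Z in `ps -o pid,stat,comm -p <child>`
	 * for the next 30 seconds, because nobody has reaped it yet. */
	printf("child %d is now a zombie\n", child);
	sleep(30);

	int status;
	waitpid(child, &status, 0); /* the wait() reaps the zombie */
	return 0;
}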

# ls /proc/1544450/task
1544450 1544574

Ah ha, there are two threads in the thread group. One of them is a zombie, maybe the other one isn't:

# cat /proc/1544574/stack
[<0>] request_wait_answer+0x12f/0x210
[<0>] fuse_simple_request+0x109/0x2c0
[<0>] fuse_flush+0x16f/0x1b0
[<0>] filp_close+0x27/0x70
[<0>] put_files_struct+0x6b/0xc0
[<0>] do_exit+0x360/0xb80
[<0>] do_group_exit+0x3a/0xa0
[<0>] get_signal+0x140/0x870
[<0>] arch_do_signal_or_restart+0xae/0x7c0
[<0>] exit_to_user_mode_prepare+0x10f/0x1c0
[<0>] syscall_exit_to_user_mode+0x26/0x40
[<0>] do_syscall_64+0x46/0xb0
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae

Indeed it is not a zombie. It is trying to become one as hard as it can, but it is blocking inside FUSE for some reason. To find out why, let's look at some kernel code. If we look at zap_pid_ns_processes(), it does:

/*
 * Reap the EXIT_ZOMBIE children we had before we ignored SIGCHLD.
 * kernel_wait4() will also block until our children traced from the
 * parent namespace are detached and become EXIT_DEAD.
 */
do {
	clear_thread_flag(TIF_SIGPENDING);
	rc = kernel_wait4(-1, NULL, __WALL, NULL);
} while (rc != -ECHILD);

which is where we are stuck, but before that, it has done:

/* Don't allow any more processes into the pid namespace */
disable_pid_allocation(pid_ns);

which is why docker can't setns(): the namespace is a zombie. Ok, so we can't setns(2), but why are we stuck in kernel_wait4()? To understand why, let's look at what the other thread was doing in FUSE's request_wait_answer():

/*
 * Either request is already in userspace, or it was forced.
 * Wait it out.
 */
wait_event(req->waitq, test_bit(FR_FINISHED, &req->flags));

Ok, so we are waiting for an event (in this case, that userspace has replied to the FUSE flush request). But zap_pid_ns_processes() sent a SIGKILL! SIGKILL should be very fatal to a process. If we look at the process, we can indeed see that there is a pending SIGKILL:

# grep Pnd /proc/1544574/status
SigPnd: 0000000000000000
ShdPnd: 0000000000000100

Viewing process status this way, you can see 0x100 (i.e. the 9th bit is set) under ShdPnd, which is the signal number corresponding to SIGKILL. Pending signals are signals that have been generated by the kernel, but have not yet been delivered to userspace. Signals are only delivered at certain times, for example when entering or leaving a syscall, or when waiting on events. If the kernel is currently doing something on behalf of the task, the signal may be pending. Signals can also be blocked by a task, so that they are never delivered. Blocked signals will show up in their respective pending sets as well. However, man 7 signal says: "The signals SIGKILL and SIGSTOP cannot be caught, blocked, or ignored." But here the kernel is telling us that we have a pending SIGKILL, aka that it is being ignored even while the task is waiting!
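If you want to decode these masks programmatically rather than by eyeballing hex, here is a small sketch of my own (not from the original investigation) that reads ShdPnd out of /proc/<pid>/status and tests the SIGKILL bit; signal number N corresponds to bit N-1 of the mask.

#include <signal.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	unsigned long long shdpnd = 0;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);

	FILE *f = fopen(path, "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* ShdPnd is the mask of signals pending on the whole thread group */
		if (sscanf(line, "ShdPnd: %llx", &shdpnd) == 1)
			break;
	}
	fclose(f);

	/* signal number N corresponds to bit N-1, so SIGKILL (9) is 0x100 */
	if (shdpnd & (1ULL << (SIGKILL - 1)))
		printf("SIGKILL pending on the thread group (ShdPnd=%llx)\n", shdpnd);
	else
		printf("no pending SIGKILL (ShdPnd=%llx)\n", shdpnd);
	return 0;
}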

Well that's weird. The wait code (i.e. include/linux/wait.h) is used everywhere in the kernel: semaphores, wait queues, completions, etc. Surely it knows to look for SIGKILLs. So what does wait_event() actually do? Digging through the macro expansions and wrappers, the meat of it is:

#define ___wait_event(wq_head, condition, state, exclusive, ret, cmd)	\
({									\
	__label__ __out;						\
	struct wait_queue_entry __wq_entry;				\
	long __ret = ret;	/* explicit shadow */			\
									\
	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0);\
	for (;;) {							\
		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
									\
		if (condition)						\
			break;						\
									\
		if (___wait_is_interruptible(state) && __int) {		\
			__ret = __int;					\
			goto __out;					\
		}							\
									\
		cmd;							\
	}								\
	finish_wait(&wq_head, &__wq_entry);				\
__out:	__ret;								\
})

So it loops forever, doing prepare_to_wait_event(), checking the condition, then checking to see if we need to interrupt. Then it does cmd, which in this case is schedule(), i.e. "do something else for a while". prepare_to_wait_event() looks like:

long prepare_to_wait_event(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state)
{
	unsigned long flags;
	long ret = 0;

	spin_lock_irqsave(&wq_head->lock, flags);
	if (signal_pending_state(state, current)) {
		/*
		 * Exclusive waiter must not fail if it was selected by wakeup,
		 * it should "consume" the condition we were waiting for.
		 *
		 * The caller will recheck the condition and return success if
		 * we were already woken up, we can not miss the event because
		 * wakeup locks/unlocks the same wq_head->lock.
		 *
		 * But we need to ensure that set-condition + wakeup after that
		 * can't see us, it should wake up another exclusive waiter if
		 * we fail.
		 */
		list_del_init(&wq_entry->entry);
		ret = -ERESTARTSYS;
	} else {
		if (list_empty(&wq_entry->entry)) {
			if (wq_entry->flags & WQ_FLAG_EXCLUSIVE)
				__add_wait_queue_entry_tail(wq_head, wq_entry);
			else
				__add_wait_queue(wq_head, wq_entry);
		}
		set_current_state(state);
	}
	spin_unlock_irqrestore(&wq_head->lock, flags);

	return ret;
}
EXPORT_SYMBOL(prepare_to_wait_event);
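As a reminder of where that state value comes from, the plain wait_event() wrapper (paraphrased here from include/linux/wait.h, so treat it as a sketch rather than a verbatim quote) hard-codes TASK_UNINTERRUPTIBLE, a non-exclusive wait, and schedule() as the per-iteration cmd:

#define wait_event(wq_head, condition)					\
do {									\
	might_sleep();							\
	if (condition)							\
		break;							\
	__wait_event(wq_head, condition);				\
} while (0)

#define __wait_event(wq_head, condition)				\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
			    schedule())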

It looks like the only way we can break out of this with a non-zero exit code is if signal_pending_state() is true. Since our call site was just wait_event(), we know that state here is TASK_UNINTERRUPTIBLE; the definition of signal_pending_state() looks like:

static inline int signal_pending_state(unsigned int state, struct task_struct *p)
{
	if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
		return 0;
	if (!signal_pending(p))
		return 0;

	return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
}

Our task is not interruptible, so the first if fails. Our task should have a signal pending, though, right?

static inline int signal_pending(struct task_struct *p)
{
	/*
	 * TIF_NOTIFY_SIGNAL isn't really a signal, but it requires the same
	 * behavior in terms of ensuring that we break out of wait loops
	 * so that notify signal callbacks can be processed.
	 */
	if (unlikely(test_tsk_thread_flag(p, TIF_NOTIFY_SIGNAL)))
		return 1;
	return task_sigpending(p);
}

As the comment notes, TIF_NOTIFY_SIGNAL isn't relevant here, in spite of its name, but let's look at task_sigpending():

static inline int task_sigpending(struct task_struct *p)
{
	return unlikely(test_tsk_thread_flag(p, TIF_SIGPENDING));
}

Hmm. Looks like we should have that flag set, right? To figure that out, let's look at how signal delivery works. When we are shutting down the pid namespace in zap_pid_ns_processes(), it does:

group_send_sig_info(SIGKILL, SEND_SIG_PRIV, task, PIDTYPE_MAX);

which eventually gets to __send_signal_locked(), which has:

pending = (type != PIDTYPE_PID) ? &t->signal->shared_pending : &t->pending;
...
sigaddset(&pending->signal, sig);
...
complete_signal(sig, t, type);

Using PIDTYPE_MAX here as the type is a little weird, but it roughly indicates "this is very privileged kernel stuff sending this signal, you should definitely deliver it". There is a bit of unintended consequence here, though, in that __send_signal_locked() ends up sending the SIGKILL to the shared set, instead of the individual task's set. If we look at the __fatal_signal_pending() code, we see:

static inline int __fatal_signal_pending(struct task_struct *p)
{
	return unlikely(sigismember(&p->pending.signal, SIGKILL));
}

But it turns out this is a bit of a red herring (although it took a while for me to understand that).

To understand what is really going on here, we need to look at complete_signal(), since it unconditionally adds a SIGKILL to the task's pending set:

sigaddset(&t->pending.signal, SIGKILL);

but why doesn't it work? At the top of the function we have:

/*
 * Now find a thread we can wake up to take the signal off the queue.
 *
 * If the main thread wants the signal, it gets first crack.
 * Probably the least surprising to the average bear.
 */
if (wants_signal(sig, p))
	t = p;
else if ((type == PIDTYPE_PID) || thread_group_empty(p))
	/*
	 * There is just one thread and it does not need to be woken.
	 * It will dequeue unblocked signals before it runs again.
	 */
	return;

but as Eric Biederman described, basically every thread can handle a SIGKILL at any time. Here's wants_signal():

static inline bool wants_signal(int sig, struct task_struct *p)
{
	if (sigismember(&p->blocked, sig))
		return false;

	if (p->flags & PF_EXITING)
		return false;

	if (sig == SIGKILL)
		return true;

	if (task_is_stopped_or_traced(p))
		return false;

	return task_curr(p) || !task_sigpending(p);
}

So… if a thread is already exiting (i.e. it has PF_EXITING), it doesn't want a signal. Consider the following sequence of events:

1. a task opens a FUSE file, and doesn't close it, then exits (see the sketch after this list). During that exit, the kernel dutifully calls do_exit(), which does the following:

exit_signals(tsk); /* sets PF_EXITING */

2. do_exit() continues on to exit_files(tsk);, which flushes all files that are still open, resulting in the stack trace above.

3. the pid namespace exits, and enters zap_pid_ns_processes(), sends a SIGKILL to everyone (that it expects to be fatal), and then waits for everyone to exit.

4. this kills the FUSE daemon in the pid ns so it can never respond.

5. complete_signal() for the FUSE task that was already exiting ignores the signal, since it has PF_EXITING.

6. Deadlock. Without manually aborting the FUSE connection, things will hang forever.
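For concreteness, step 1 on its own is nothing exotic. Here is a minimal userspace sketch of mine (the /fusemnt/file path is hypothetical) that triggers the final flush from the exit path; the hang only materializes when something like this runs in a pid namespace whose FUSE daemon lives in that same namespace.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* open a file on a FUSE mount (path is hypothetical)... */
	int fd = open("/fusemnt/file", O_RDWR | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* ...and deliberately never close it: the kernel flushes it for us
	 * from do_exit() -> exit_files(), which is the fuse_flush() call
	 * visible in the stack trace above. */
	exit(0);
}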

It doesn't really make sense to wait for flushes in this case: the task is dying, so there is nobody to tell the return code of flush() to. It also turns out that this bug can happen with several filesystems (anything that calls the kernel's wait code in flush(), i.e. basically anything that talks to something outside the local kernel).

Individual filesystems will need to be patched in the meantime; for example, the fix for FUSE is here, which was released on April 23 in Linux 6.3.

While this blog post addresses FUSE deadlocks, there are definitely issues in the nfs code and elsewhere, which we have not hit in production yet, but almost certainly will. You can also see it as a symptom of other filesystem bugs. Something to look out for if you have a pid namespace that won't exit.

This is just a small taste of the variety of strange issues we encounter running containers at scale at Netflix. Our team is hiring, so please reach out if you also love red herrings and kernel deadlocks!
