c8a76dbd90
Allowing an unlimited number of clients to any network service is a recipe for a rudimentary denial of service attack: the client merely needs to open lots of sockets without closing them, until qemu no longer has any fds available to allocate.

For qemu-nbd, we default to allowing only 1 connection unless more are explicitly requested (-e or --shared); this was historically picked as a nice default (without an explicit -t, a non-persistent qemu-nbd goes away after a client disconnects, without needing any additional follow-up commands), and we are not going to change that interface now (besides, someday we want to point people towards qemu-storage-daemon instead of qemu-nbd).

But for qemu proper, and the newer qemu-storage-daemon, the QMP nbd-server-start command has historically defaulted to an unlimited number of connections, in part because, unlike qemu-nbd, it is inherently persistent until nbd-server-stop. Allowing multiple client sockets is particularly useful for clients that can take advantage of MULTI_CONN (creating parallel sockets to increase throughput), although the known clients that do so (such as libnbd's nbdcopy) typically use only 8 or 16 connections (the benefits of scaling diminish once more sockets are competing for kernel attention). Picking a number large enough for typical use cases, but not unlimited, makes it slightly harder for a malicious client to perform a denial of service merely by opening lots of connections without progressing through the handshake.

This change does not eliminate CVE-2024-7409 on its own, but it reduces the chance of fd exhaustion or unlimited memory usage as an attack surface. On the other hand, by itself, it makes it more obvious that with a finite limit we have a new problem: an unauthenticated client holding 100 fds open can block a legitimate client from being able to connect; thus, later patches will further add timeouts to reject clients that are not making progress.

This is an INTENTIONAL change in behavior, and will break any client of nbd-server-start that was not passing an explicit max-connections parameter yet expects more than 100 simultaneous connections. We are not aware of any such client (as stated above, most clients aware of MULTI_CONN get by just fine on 8 or 16 connections, and probably cope with later connections failing by relying on the earlier ones; libvirt has not yet been passing max-connections, but generally creates NBD servers with the intent of a single client for the sake of live storage migration; meanwhile, the KubeSAN project anticipates a large cluster sharing multiple clients [up to 8 per node, and up to 100 nodes in a cluster], but it currently uses qemu-nbd with an explicit --shared=0 rather than qemu-storage-daemon with nbd-server-start).

We considered using a deprecation period (declare that omitting max-connections is deprecated, and make it mandatory in 3 releases - then we would not need to pick an arbitrary default); that has zero risk of breaking any app that accidentally depended on more than 100 connections, where such breakage might not be noticed under unit testing but only under the larger loads of production usage. But it does not close the denial-of-service hole until far into the future, and it requires all apps to change to add the parameter even if 100 is good enough.
It also has the drawback that any app (such as libvirt) that is accidentally relying on an unlimited default should be seriously considering its own CVE now, at which point it is going to change to pass an explicit max-connections sooner than waiting for 3 qemu releases. Finally, if our changed default breaks an app, that app can always pass an explicit max-connections with a larger value.

It is also intentional that the HMP interface to nbd-server-start is not changed to expose max-connections (any client needing to fine-tune things should be using QMP).

Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20240807174943.771624-12-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
[ericb: Expand commit message to summarize Dan's argument for why we break corner-case back-compat behavior without a deprecation period]
Signed-off-by: Eric Blake <eblake@redhat.com>
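For illustration, an application that needs a limit other than the new default of 100 can pin one explicitly when starting the server. A minimal QMP sketch follows; the listening address and the value 1000 are placeholders, not values taken from this patch:

  -> { "execute": "nbd-server-start",
       "arguments": { "addr": { "type": "inet",
                                "data": { "host": "::", "port": "10809" } },
                      "max-connections": 1000 } }
  <- { "return": {} }

The corresponding qemu-nbd knob remains -e/--shared, as described above, and is not touched by this patch.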