qemu/nbd/trace-events

# See docs/devel/tracing.rst for syntax documentation.
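# Each line below declares one trace point as <name>(<C argument list>) "<printf
# format string>".  The build's tracetool generates a matching trace_<name>()
# helper that the source file named in the section comment calls.  As an
# illustrative sketch only (the exact call site and helper names live in
# client.c), nbd_send_option_request is expected to be emitted roughly as:
#   trace_nbd_send_option_request(opt, nbd_opt_lookup(opt), len);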
# client.c
nbd_send_option_request(uint32_t opt, const char *name, uint32_t len) "Sending option request %" PRIu32" (%s), len %" PRIu32
nbd_receive_option_reply(uint32_t option, const char *optname, uint32_t type, const char *typename, uint32_t length) "Received option reply %" PRIu32" (%s), type %" PRIu32" (%s), len %" PRIu32
nbd_server_error_msg(uint32_t err, const char *type, const char *msg) "server reported error 0x%" PRIx32 " (%s) with additional message: %s"
nbd_reply_err_ignored(uint32_t option, const char *name, uint32_t reply, const char *reply_name) "server failed request %" PRIu32 " (%s) with error 0x%" PRIx32 " (%s), attempting fallback"
nbd_receive_list(const char *name, const char *desc) "export list includes '%s', description '%s'"
nbd_opt_info_go_start(const char *opt, const char *name) "Attempting %s for export '%s'"
nbd_opt_info_go_success(const char *opt) "Export is ready after %s request"
nbd_opt_info_unknown(int info, const char *name) "Ignoring unknown info %d (%s)"
nbd_opt_info_block_size(uint32_t minimum, uint32_t preferred, uint32_t maximum) "Block sizes are 0x%" PRIx32 ", 0x%" PRIx32 ", 0x%" PRIx32
nbd_receive_query_exports_start(const char *wantname) "Querying export list for '%s'"
nbd_receive_query_exports_success(const char *wantname) "Found desired export name '%s'"
nbd_receive_starttls_new_client(void) "Setting up TLS"
nbd_receive_starttls_tls_handshake(void) "Starting TLS handshake"
nbd_opt_meta_request(const char *optname, const char *context, const char *export) "Requesting %s %s for export %s"
nbd_opt_meta_reply(const char *optname, const char *context, uint32_t id) "Received %s mapping of %s to id %" PRIu32
nbd_start_negotiate(void *tlscreds, const char *hostname) "Receiving negotiation tlscreds=%p hostname=%s"
nbd_receive_negotiate_magic(uint64_t magic) "Magic is 0x%" PRIx64
nbd_receive_negotiate_server_flags(uint32_t globalflags) "Global flags are 0x%" PRIx32
nbd_receive_negotiate_name(const char *name) "Requesting NBD export name '%s'"
nbd_receive_negotiate_size_flags(uint64_t size, uint16_t flags) "Size is %" PRIu64 ", export flags 0x%" PRIx16
nbd_init_set_socket(void) "Setting NBD socket"
nbd_init_set_block_size(unsigned long block_size) "Setting block size to %lu"
nbd_init_set_size(unsigned long sectors) "Setting size to %lu block(s)"
nbd_init_trailing_bytes(int ignored_bytes) "Ignoring trailing %d bytes of export"
nbd_init_set_readonly(void) "Setting readonly attribute"
nbd_init_finish(void) "Negotiation ended"
nbd_client_loop(void) "Doing NBD loop"
nbd_client_loop_ret(int ret, const char *error) "NBD loop returned %d: %s"
nbd_client_clear_queue(void) "Clearing NBD queue"
nbd_client_clear_socket(void) "Clearing NBD socket"
nbd_send_request(uint64_t from, uint64_t len, uint64_t cookie, uint16_t flags, uint16_t type, const char *name) "Sending request to server: { .from = %" PRIu64", .len = %" PRIu64 ", .cookie = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) }"
nbd_receive_simple_reply(int32_t error, const char *errname, uint64_t cookie) "Got simple reply: { .error = %" PRId32 " (%s), cookie = %" PRIu64" }"
nbd_receive_reply_chunk_header(uint16_t flags, uint16_t type, const char *name, uint64_t cookie, uint32_t length) "Got reply chunk header: { flags = 0x%" PRIx16 ", type = %" PRIu16 " (%s), cookie = %" PRIu64 ", length = %" PRIu32 " }"
nbd_receive_wrong_header(uint32_t magic, const char *mode) "Server sent unexpected magic 0x%" PRIx32 " for negotiated mode %s"
# common.c
nbd_unknown_error(int err) "Squashing unexpected error %d to EINVAL"
# server.c
nbd_negotiate_send_rep_len(uint32_t opt, const char *optname, uint32_t type, const char *typename, uint32_t len) "Reply opt=%" PRIu32 " (%s), type=%" PRIu32 " (%s), len=%" PRIu32
nbd_negotiate_send_rep_err(const char *msg) "sending error message \"%s\""
nbd_negotiate_send_rep_list(const char *name, const char *desc) "Advertising export name '%s' description '%s'"
nbd_negotiate_handle_export_name(void) "Checking length"
nbd_negotiate_handle_export_name_request(const char *name) "Client requested export '%s'"
nbd_negotiate_send_info(int info, const char *name, uint32_t length) "Sending NBD_REP_INFO type %d (%s) with remaining length %" PRIu32
nbd_negotiate_handle_info_requests(int requests) "Client requested %d items of info"
nbd_negotiate_handle_info_request(int request, const char *name) "Client requested info %d (%s)"
nbd_negotiate_handle_info_block_size(uint32_t minimum, uint32_t preferred, uint32_t maximum) "advertising minimum 0x%" PRIx32 ", preferred 0x%" PRIx32 ", maximum 0x%" PRIx32
nbd_negotiate_handle_starttls(void) "Setting up TLS"
nbd_negotiate_handle_starttls_handshake(void) "Starting TLS handshake"
nbd_negotiate_meta_context(const char *optname, const char *export, uint32_t queries) "Client requested %s for export %s, with %" PRIu32 " queries"
nbd_negotiate_meta_query_skip(const char *reason) "Skipping meta query: %s"
nbd_negotiate_meta_query_parse(const char *query) "Parsed meta query '%s'"
nbd_negotiate_meta_query_reply(const char *context, uint32_t id) "Replying with meta context '%s' id %" PRIu32
nbd_negotiate_options_flags(uint32_t flags) "Received client flags 0x%" PRIx32
nbd_negotiate_options_check_magic(uint64_t magic) "Checking opts magic 0x%" PRIx64
nbd_negotiate_options_check_option(uint32_t option, const char *name) "Checking option %" PRIu32 " (%s)"
nbd_negotiate_begin(void) "Beginning negotiation"
nbd_negotiate_new_style_size_flags(uint64_t size, unsigned flags) "advertising size %" PRIu64 " and flags 0x%x"
nbd_negotiate_success(void) "Negotiation succeeded"
nbd_receive_request(uint32_t magic, uint16_t flags, uint16_t type, uint64_t from, uint64_t len) "Got request: { magic = 0x%" PRIx32 ", .flags = 0x%" PRIx16 ", .type = 0x%" PRIx16 ", from = %" PRIu64 ", len = %" PRIu64 " }"
nbd_blk_aio_attached(const char *name, void *ctx) "Export %s: Attaching clients to AIO context %p"
nbd_blk_aio_detach(const char *name, void *ctx) "Export %s: Detaching clients from AIO context %p"
nbd_co_send_simple_reply(uint64_t cookie, uint32_t error, const char *errname, uint64_t len) "Send simple reply: cookie = %" PRIu64 ", error = %" PRIu32 " (%s), len = %" PRIu64
nbd_co_send_chunk_done(uint64_t cookie) "Send structured reply done: cookie = %" PRIu64
nbd_co_send_chunk_read(uint64_t cookie, uint64_t offset, void *data, uint64_t size) "Send structured read data reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", data = %p, len = %" PRIu64
nbd_co_send_chunk_read_hole(uint64_t cookie, uint64_t offset, uint64_t size) "Send structured read hole reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", len = %" PRIu64
nbd_co_send_extents(uint64_t cookie, unsigned int extents, uint32_t id, uint64_t length, int last) "Send block status reply: cookie = %" PRIu64 ", extents = %u, context = %d (extents cover %" PRIu64 " bytes, last chunk = %d)"
nbd_co_send_chunk_error(uint64_t cookie, int err, const char *errname, const char *msg) "Send structured error reply: cookie = %" PRIu64 ", error = %d (%s), msg = '%s'"
nbd_co_receive_block_status_payload_compliance(uint64_t from, uint64_t len) "client sent unusable block status payload: from=0x%" PRIx64 ", len=0x%" PRIx64
nbd_co_receive_request_decode_type(uint64_t cookie, uint16_t type, const char *name) "Decoding type: cookie = %" PRIu64 ", type = %" PRIu16 " (%s)"
nbd_co_receive_request_payload_received(uint64_t cookie, uint64_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu64
nbd_co_receive_ext_payload_compliance(uint64_t from, uint64_t len) "client sent non-compliant write without payload flag: from=0x%" PRIx64 ", len=0x%" PRIx64
nbd_co_receive_align_compliance(const char *op, uint64_t from, uint64_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx64 ", align=0x%" PRIx32
nbd_trip(void) "Reading request"
nbd_handshake_timer_cb(void) "client took too long to negotiate"
# client-connection.c
nbd_connect_thread_sleep(uint64_t timeout) "timeout %" PRIu64
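# Usage sketch (assuming a runtime-configurable trace backend such as 'log' is
# compiled in): these events can be enabled by glob pattern, for example
#   qemu-nbd --trace "nbd_negotiate_*" ...
# or -trace "nbd_*" on other tools; see docs/devel/tracing.rst for the backends
# and options actually available in a given build.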