The Linux nbd driver recently increased the maximum supported request
size up to 32 MB:
    commit 078be02b80359a541928c899c2631f39628f56df
    Author: Michal Belczyk <belczyk@bsd.krakow.pl>
    Date:   Tue Apr 30 15:28:28 2013 -0700

        nbd: increase default and max request sizes

        Raise the default max request size for nbd to 128KB (from 127KB) to get it
        4KB aligned. This patch also allows the max request size to be increased
        (via /sys/block/nbd<x>/queue/max_sectors_kb) to 32MB.
QEMU's 1 MB buffers are too small to handle these requests.
This patch allocates data buffers dynamically and allows up to 32 MB per
request.
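As an illustration only (not the actual QEMU code), the per-request buffer can be sized from the request header and capped at the 32 MB limit; the real patch uses qemu_blockalign(), and plain posix_memalign() stands in for it here. The macro name and the 512-byte alignment are assumptions made for this sketch:

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define NBD_MAX_BUFFER_SIZE (32 * 1024 * 1024)   /* assumed 32 MB cap */

    /* Allocate a per-request data buffer sized to the request length.
     * Returns NULL if the client asks for more than the supported maximum. */
    static void *alloc_request_buffer(uint32_t len)
    {
        void *buf;

        if (len > NBD_MAX_BUFFER_SIZE) {
            errno = EINVAL;
            return NULL;
        }
        /* 512-byte alignment stands in for the block driver's requirement */
        if (posix_memalign(&buf, 512, len) != 0) {
            return NULL;
        }
        return buf;
    }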
Reported-by: Nick Thomas <nick@bytemark.co.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use GLib's efficient slice allocator instead of open-coding the request
freelist. This patch simplifies the NBDRequest code.
Now we qemu_blockalign() the req->data buffer each time, but the next
patch switches from a fixed-size buffer to a dynamically sized one anyway.
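A minimal sketch of the idea: g_slice_new0()/g_slice_free() are the real GLib slice-allocator calls, while the simplified NBDRequest layout and helper names here are only illustrative:

    #include <glib.h>
    #include <stdint.h>

    typedef struct NBDRequest {      /* simplified, illustrative layout */
        uint64_t handle;
        uint8_t *data;
    } NBDRequest;

    static NBDRequest *nbd_request_get(void)
    {
        /* GLib's slice allocator replaces the hand-rolled freelist */
        return g_slice_new0(NBDRequest);
    }

    static void nbd_request_put(NBDRequest *req)
    {
        g_slice_free(NBDRequest, req);
    }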
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The fcntl(fd, F_SETFL, O_NONBLOCK) flag is not specific to sockets.
Rename to qemu_set_nonblock() just like qemu_set_cloexec().
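The underlying operation is plain fcntl() and works on any file descriptor; a sketch of what such a helper typically does (not necessarily QEMU's exact implementation):

    #include <fcntl.h>

    /* Set O_NONBLOCK on any file descriptor, not just sockets */
    static void set_nonblock(int fd)
    {
        int flags = fcntl(fd, F_GETFL);
        if (flags != -1) {
            fcntl(fd, F_SETFL, flags | O_NONBLOCK);
        }
    }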
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
The NBD block driver supports a URL syntax, for which a URL parser returns
separate hostname and port fields. It also supports the traditional qemu
syntax encoded in a filename. Until now, after parsing the URL to get
each piece of information, a new string was built to be fed to the socket
functions.
Instead of building a string in the URL case that is immediately parsed
again, parse the string in both cases and use the QemuOpts interface to
qemu-sockets.c.
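As a rough illustration of the parsing that now happens in both cases (the option names "host" and "port" match qemu-sockets.c; everything else in this sketch is simplified and ignores details such as bracketed IPv6 addresses):

    #include <string.h>

    /* Split a legacy "host:port" spec into the two fields that are then
     * stored as QemuOpts ("host", "port") instead of being re-joined into
     * a string for the socket functions. */
    static int split_host_port(const char *spec, char *host, size_t hlen,
                               char *port, size_t plen)
    {
        const char *colon = strrchr(spec, ':');

        if (!colon || (size_t)(colon - spec) >= hlen ||
            strlen(colon + 1) >= plen) {
            return -1;
        }
        memcpy(host, spec, colon - spec);
        host[colon - spec] = '\0';
        strcpy(port, colon + 1);
        return 0;
    }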
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We do not need BLKROSET if the kernel supports setting flags.
Also, always do BLKROSET even for a read-write export, otherwise
the read-only state remains "sticky" after the invocation of
"qemu-nbd -r".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Before:
$ qemu-system-x86_64 nbd:localhost:12345
inet_connect_opts: connect(ipv4,yakj.usersys.redhat.com,127.0.0.1,12345): Connection refused
qemu-system-x86_64: could not open disk image nbd:localhost:12345: Connection refused
After:
$ x86_64-softmmu/qemu-system-x86_64 nbd:localhost:12345
qemu-system-x86_64: Failed to connect to socket: Connection refused
qemu-system-x86_64: could not open disk image nbd:localhost:12345: Connection refused
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This lets me adjust the clients to do proper error propagation first,
thus avoiding temporary regressions in the quality of the error messages.
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no need to add non-blocking parameters to the blocking inet_connect();
add a block parameter to inet_connect_opts() instead of using the QemuOpt "block".
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Allow negotiation to receive the name of the requested export from
the client. Passing a NULL export to nbd_client_new will cause
the server to send the extended negotiation header. The exp field
is then filled during negotiation.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to exit cleanly from qemu-nbd, add a callback that triggers
when an NBDExport is closed. In the case of qemu-nbd it will exit the
main loop.
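A sketch of the idea under assumed names (the callback name and the state flag are illustrative, not the exact qemu-nbd code): the callback registered with the export simply tells the main loop to stop.

    #include <stdbool.h>

    static bool running = true;          /* polled by the qemu-nbd main loop */

    /* Illustrative close callback: invoked when the NBDExport is closed,
     * so qemu-nbd can leave its main loop and exit cleanly. */
    static void export_closed(void *opaque)
    {
        (void)opaque;
        running = false;
    }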
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We will use a similar two-phase destruction for NBDExport, so we need
each NBDClient to add a reference to NBDExport.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because nbd_client_close removes the I/O handlers for the client
socket, there is no way that any suspended coroutines are restarted.
This will be a problem with the QEMU embedded NBD server, because
we will have a QMP command to forcibly close all connections with
the clients.
Instead, we can exploit the reference counting of NBDClients: shut down the
client socket, which will make it readable and writable, and also call the
close callback, which will release the user's reference. The coroutines
will then fail and exit cleanly, releasing all remaining references,
until dropping the last one finally triggers the closure of the client.
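The "forcible close" relies on the standard shutdown(2) call; a sketch under assumed names, with the actual freeing left to whoever drops the last reference:

    #include <sys/socket.h>

    /* Forcibly disconnect a client: shutting down both directions makes the
     * socket readable and writable, so any suspended coroutine wakes up,
     * fails its I/O, and releases its reference on the way out. */
    static void client_force_close(int sock)
    {
        shutdown(sock, SHUT_RDWR);
        /* the close callback and the final reference drop happen elsewhere */
    }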
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It's used to indicate the special case where a valid file descriptor
is returned (i.e. success) but the connection can't be completed
without blocking.
This is needed because QERR_SOCKET_CONNECT_IN_PROGRESS is not
treated like an error and a future commit will drop it.
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Add a new argument to inet_listen()/inet_listen_opts()
to pass back the listen error.
Change nbd, qemu-char, and vnc to use the new interface.
Signed-off-by: Amos Kong <akong@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add a bool argument to inet_connect() to choose whether the socket is set
to blocking or non-blocking mode, and delete the original 'socktype'
argument, which is unused.
Add a new argument to inet_connect()/inet_connect_opts()
to pass back the connect error by error class.
Retry the connect when -EINTR is returned. For a non-blocking socket the
connect is considered successful when one of the following errors is
returned; the user should then wait for the connection to complete with select():
-EINPROGRESS
-EWOULDBLOCK (win32)
-WSAEALREADY (win32)
Change nbd and vnc to use the new interface.
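A self-contained sketch of that retry and in-progress logic with plain POSIX calls (QEMU's inet_connect_opts() additionally covers the win32 error codes listed above; the function name and return convention here are illustrative):

    #include <errno.h>
    #include <sys/socket.h>

    /* Try to connect; returns 0 on success, 1 if the connect is still in
     * progress on a non-blocking socket, -errno on real failure. */
    static int do_connect(int sock, const struct sockaddr *addr, socklen_t len,
                          int nonblock)
    {
        int rc;

        do {
            rc = connect(sock, addr, len);
        } while (rc < 0 && errno == EINTR);      /* retry on -EINTR */

        if (rc < 0) {
            if (nonblock && errno == EINPROGRESS) {
                return 1;   /* caller waits for completion with select() */
            }
            return -errno;
        }
        return 0;
    }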
Signed-off-by: Amos Kong <akong@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Right now, nbd_wr_sync will hang if no data at all is available on the
socket and the other side is not going to provide any. Relax this by
making it loop only for writes or partial reads. This fixes a race
where one thread is executing qemu_aio_wait() and another is executing
main_loop_wait(): the select() call in main_loop_wait() can act on stale
information and invoke the "readable" callback even though there is no data
left in the socket.
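A sketch of the relaxed loop (simplified, ignoring the coroutine yields of the real nbd_wr_sync; the function name and return convention are illustrative): a read that has not transferred anything yet may bail out on EAGAIN instead of spinning.

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Transfer exactly 'size' bytes, except that a read which has seen no
     * data at all may return 0 when the socket would block. */
    static ssize_t wr_sync(int fd, void *buf, size_t size, int do_read)
    {
        size_t offset = 0;

        while (offset < size) {
            ssize_t len = do_read
                ? read(fd, (char *)buf + offset, size - offset)
                : write(fd, (char *)buf + offset, size - offset);
            if (len < 0) {
                if (errno == EINTR) {
                    continue;
                }
                /* only keep looping for writes or partial reads */
                if (errno == EAGAIN && do_read && offset == 0) {
                    return 0;
                }
                if (errno == EAGAIN) {
                    continue;   /* simplified: the real code yields here */
                }
                return -errno;
            }
            if (len == 0) {
                break;          /* EOF */
            }
            offset += len;
        }
        return offset;
    }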
Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In the next patch we need to look at the return code of nbd_wr_sync.
To avoid percolating the socket_error() ugliness all around, let's
handle errors by returning negative errno values.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
GCC (pedantically, but correctly) considers that a negative ssize_t may
become positive when cast to int. This may cause uninitialized-variable
warnings when a function returns such a negative ssize_t and is inlined.
Propagate ssize_t return types to avoid this.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Limiting the number of in-flight requests is implemented very simply
with a can_read callback. It does not require a semaphore, unlike the
client side in block/nbd.c, because we can directly throttle the creation
of coroutines. The client side can have a coroutine created whenever an
I/O request is made.
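A sketch of the throttling callback, assuming a MAX_NBD_REQUESTS limit and an nb_requests counter on the client (the names follow the spirit of the patch, not necessarily its exact code):

    #define MAX_NBD_REQUESTS 16    /* assumed in-flight limit */

    typedef struct Client {
        int nb_requests;           /* requests currently being processed */
    } Client;

    /* can_read-style callback: returning 0 stops the event loop from reading
     * more requests, so no new coroutines are created until one completes. */
    static int client_can_read(void *opaque)
    {
        Client *client = opaque;
        return client->nb_requests < MAX_NBD_REQUESTS;
    }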
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using coroutines enables asynchronous operation on both the network and
the block side. The network can be owned by two coroutines at the same time,
one writing and one reading. On the send side, mutual exclusion is
guaranteed by a CoMutex. On the receive side, mutual exclusion is
guaranteed because new coroutines immediately start receiving data,
and no new coroutines are created as long as the previous one is receiving.
Between receive and send, qemu-nbd can have an arbitrary number of
in-flight block transfers. Throttling is implemented by the next
patch.
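A sketch of the send-side exclusion in QEMU coroutine terms: qemu_co_mutex_lock()/qemu_co_mutex_unlock() are the real primitives, while the surrounding names (client, send_lock) are assumptions for this fragment.

    /* Illustrative fragment (not compilable standalone): the send side of
     * the connection is serialized with a CoMutex, so replies of concurrent
     * in-flight requests never interleave on the shared socket. */
    qemu_co_mutex_lock(&client->send_lock);
    /* ... write the reply header and payload ... */
    qemu_co_mutex_unlock(&client->send_lock);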
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
By attaching a client to an NBDRequest, we can avoid passing around the
socket descriptor and data buffer.
Also, we can now manage the reference count for the client in
nbd_request_get/put instead of having to do it ourselves in
nbd_read. This simplifies things when coroutines are used.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch sets up the fd handler in nbd.c instead of qemu-nbd.c. It
introduces NBDClient, which wraps the arguments to nbd_trip in a single
structure, so that we can add a notifier to it. This way, qemu-nbd can
know about disconnections.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the buffer from NBDExport to a new structure, so that it will be
possible to have multiple in-flight requests for the same export
(and for the same client too---we get that for free).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Group the sending of a reply and the associated data into a new function.
Without corking, the caller would be forced to leave 12 free bytes at the
beginning of the data pointer. Not too ugly, but still ugly. :)
Using nbd_do_send_reply everywhere will help once the routine sets up
the write handler that re-enters the send coroutine.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use TCP_CORK to remove a violation of encapsulation that would later
require nbd_trip to know too much about an NBD reply.
We could also switch to sendmsg (qemu_co_sendv) later; it is even
easier once coroutines are in.
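The corking itself is the standard Linux TCP_CORK socket option; a sketch of the send path around it (function name illustrative, error handling and the reply layout omitted for brevity):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Cork the socket, send the reply header and then the payload, and
     * uncork so both go out together without exposing the header layout
     * to the caller. */
    static void send_reply_corked(int sock, const void *hdr, size_t hdr_len,
                                  const void *data, size_t data_len)
    {
        int on = 1, off = 0;

        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
        send(sock, hdr, hdr_len, 0);
        if (data_len) {
            send(sock, data, data_len, 0);
        }
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    }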
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Update the ioctls in nbd_init() to detect a busy device early.
The current nbd_init() issues NBD_CLEAR_SOCK before NBD_SET_SOCK, so if
"qemu-nbd -c /dev/nbd0 disk.img" is issued twice, the second invocation does
not detect EBUSY in nbd_init(); instead, nbd_client reports EBUSY and clears
the socket, which also breaks the first invocation because its socket is gone.
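A sketch of the kind of reordering the commit describes (simplified, with error handling reduced to the EBUSY case that matters here); NBD_SET_SOCK and NBD_CLEAR_SOCK are the kernel's ioctl names from <linux/nbd.h>, the helper name is illustrative:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/nbd.h>

    /* Attach the socket first: if another qemu-nbd already owns the device,
     * NBD_SET_SOCK fails with EBUSY right away, and we never issue
     * NBD_CLEAR_SOCK against someone else's connection. */
    static int nbd_attach(int dev_fd, int sock)
    {
        if (ioctl(dev_fd, NBD_SET_SOCK, sock) < 0) {
            if (errno == EBUSY) {
                fprintf(stderr, "device is busy\n");
            }
            return -errno;
        }
        return 0;
    }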
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>