> xm list
Domain-Unnamed 1 467 1 ---s-- 46.0
The root cause is a discrepancy in the error code *values*:
BSD uses the AT&T Unix Version 6 error codes, while Xen uses
Unix System V error codes (or rather what Linux/i386 has inherited from them).
After shutting down (or rebooting) a domU, the guest container gets destroyed.
This implies freeing resources used by the guest (RAM, internal management structures, etc.).
The destroy process is asynchronous in order not to block the Dom0 (and other DomUs).
The destroy process works this way:
The XEN_DOMCTL_destroydomain is invoked from the xentools (python, libxc code).
XEN_DOMCTL_destroydomain hypercall calls domain_kill().
domain_kill() calls domain_relinquish_resources().
domain_relinquish_resources() calls relinquish_memory().
relinquish_memory() calls hypercall_preempt_check().
hypercall_preempt_check() makes all this asynchronous.
It fails if there is another hypercall pending.
In that case relinquish_memory() returns EAGAIN, which means: just retry to continue the destroy process (see the sketch below).
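A rough sketch of that pattern, assuming a simplified relinquish_memory(); next_page_to_free() is an invented placeholder, and the real Xen code walks the domain's page lists and does considerably more:

    #include <stddef.h>

    struct page_info;                            /* opaque stand-in for Xen's page structure */

    /* Declared here only so the sketch is self-contained. */
    int hypercall_preempt_check(void);
    struct page_info *next_page_to_free(void);   /* invented helper, not real Xen code */
    void free_domheap_page(struct page_info *);

    #define EAGAIN 11                            /* Xen uses the Linux value */

    /* Simplified shape: free pages until another hypercall is pending,
     * then return -EAGAIN so the caller retries the destroy later. */
    static int
    relinquish_memory_sketch(void)
    {
            struct page_info *page;

            while ((page = next_page_to_free()) != NULL) {
                    free_domheap_page(page);
                    if (hypercall_preempt_check())
                            return -EAGAIN;
            }
            return 0;
    }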
EAGAIN is passed through the return path back into the python code
(= userspace). The python code checks for EAGAIN and *should*
retry, but it didn't.
In Unix System V / Linux, EAGAIN has the error code value 11.
In BSD, EAGAIN has the error code value 35 and EDEADLK has the error code value 11.
So when Xen returns EAGAIN, the python code sees EDEADLK.
This led to the confusing error message 'domain destroy failed due to Resource Deadlock avoided'.
We now convert the error code from the Xen hypercall to its BSD value before passing it upstream.
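The mapping itself is small; a minimal sketch of such a translation in the return path, covering only the two values discussed above (illustrative, not the actual NetBSD code):

    #include <sys/errno.h>

    /* Translate Linux/Xen errno values to their NetBSD equivalents
     * before handing a hypercall result back to userland. */
    static int
    xen_to_bsd_errno(int xen_error)
    {
            switch (xen_error) {
            case 11:                /* Linux EAGAIN */
                    return EAGAIN;  /* 35 on BSD */
            case 35:                /* Linux EDEADLK */
                    return EDEADLK; /* 11 on BSD */
            default:
                    return xen_error;
            }
    }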
Since we have to treat 0x0 as a valid page, this check got broken in rev. 1.26.
Introduce INVALID_PAGE as a magic value and restore the check.
This unbreaks IOCTL_PRIVCMD_MMAPBATCH while still allowing HVM guests to be launched.
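The idea, sketched with illustrative names (the actual privcmd.c code differs): use a value that can never be a valid machine frame as the 'no page' marker, so 0x0 stays usable.

    /* All-ones can never be a real frame number, unlike 0x0,
     * which HVM guests genuinely map. */
    #define INVALID_PAGE    (~0UL)

    static int
    map_one_frame(unsigned long mfn)
    {
            if (mfn == INVALID_PAGE)
                    return -1;      /* nothing to map in this slot */
            /* ... map the frame as before; 0x0 is now handled normally ... */
            return 0;
    }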
synchronously, but insert a (varying) delay. Before, we were only
decoupled from the peer via network latency - now we introduce some
explicit delay. This, at least, creates better serialized debug output.
However, if we have to reconnect because of an authentication failure,
the peer may have just been unable to access its RADIUS server. (I have
a setup where this seems to happen every now and then, depending on the time
of day.) Back off the reconnect considerably longer in these cases - this is better
than hitting the max-auth-failure limit within a few seconds.
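Roughly, the delay selection described above could look like this; names and intervals are made up for illustration and are not the actual pppoe(4) timers:

    #include <stdlib.h>

    #define RECONNECT_BASE_SECS     1       /* normal case: short, varying delay */
    #define RECONNECT_AUTH_SECS     60      /* auth failure: back off much longer */
    #define RECONNECT_JITTER_SECS   5       /* spread retries apart a little */

    /* Pick the delay before the next reconnect attempt. */
    static int
    reconnect_delay(int auth_failed)
    {
            int base = auth_failed ? RECONNECT_AUTH_SECS : RECONNECT_BASE_SECS;

            /* The varying part decouples the two ends and serializes debug output. */
            return base + (rand() % RECONNECT_JITTER_SECS);
    }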
- Protect the RX queue free list with a mutex, as has been done in so many
network drivers now that it calls for common code, as dyoung@ points
out.
However, for now it should improve iwn(4)'s stability a bit.
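The pattern is the usual one; a minimal sketch using the mutex(9) API, with illustrative structure and field names rather than the actual iwn(4) code:

    #include <sys/param.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct rx_data {
            SLIST_ENTRY(rx_data) next;
            /* ... buffer, DMA map, ... */
    };

    struct rx_ring {
            kmutex_t freelist_mtx;          /* protects freelist */
            SLIST_HEAD(, rx_data) freelist;
    };

    /* Take a buffer off the free list; the mutex keeps the list consistent
     * when the interrupt path and the reset path race for it. */
    static struct rx_data *
    rx_get_free(struct rx_ring *ring)
    {
            struct rx_data *data;

            mutex_enter(&ring->freelist_mtx);
            data = SLIST_FIRST(&ring->freelist);
            if (data != NULL)
                    SLIST_REMOVE_HEAD(&ring->freelist, next);
            mutex_exit(&ring->freelist_mtx);
            return data;
    }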
configures an audio device correctly for a device which is already
plugged in.
* usb_subr.c
Add locators parameter to usbd_attachinterfaces()
Add usbd_reattach_device()
* usbdivar.h
Export usbd_reattach_device()
hvm: Avoid need for ugly setcpucontext() in HVM domain builder by
pre-setting the vcpu0 to runnable inside Xen, and have the builder
insert a JMP instruction to reach the hvmloader entry point from
address 0x0.
So we have to treat guest physical address 0x0 like every other page,
or we end up in a page fault loop when launching an HVM guest.
XXX Keep this for Xen2 as this change hasn't been tested there.
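For illustration, the JMP the builder places at guest address 0x0 is just a five-byte x86 relative jump; a sketch of assembling it (buffer and entry-point parameters are made up, not the actual domain-builder code):

    #include <stdint.h>
    #include <string.h>

    /* Write "jmp rel32" (opcode 0xE9) at guest physical address 0x0 so that
     * vcpu0, which starts executing at 0x0, jumps to the hvmloader entry point. */
    static void
    emit_jmp_to_entry(uint8_t *guest_page0, uint32_t entry_point)
    {
            uint32_t rel32 = entry_point - 5;   /* target - (0x0 + instruction length) */

            guest_page0[0] = 0xE9;
            memcpy(&guest_page0[1], &rel32, sizeof(rel32));
    }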