0ac3367f2a
disks to other domains) from Jed Davis, <jld@panix.com>:

* Issue multiple requests when necessary rather than assuming that arbitrary requests can be mapped into single contiguous virtual address ranges.
* Don't assume that all data for a request is consecutive in memory. With some client OSes, it's not. Together with the previous change, this fixes data-corruption issues seen with Linux clients using certain filesystem block sizes.
* Gracefully handle memory or pool allocation failures after beginning to handle a request from the ring.
* Merge contiguous requests to avoid the "64K turns into 44K + 20K and doubles the transactions per second at the disk" problem, which stems from the 11-page limit imposed by the structure of Xen ring entries. This costs a very slight performance decrease (about 1%) for sequential 64K I/O when the disk is not already saturated with requests, but halves -- or better -- the transactions per second we hit the disk with. It even compensates for bizarre Linux behaviour such as breaking long requests up into 5.5K pieces.
* Probably some stuff I forgot to mention.

Disk throughput (though not latency) is now much, much closer to the "raw hardware" case than it was before.
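The merging step above can be sketched roughly as follows. This is a hypothetical illustration, not the actual xbdback code: the `struct req`, `req_merge`, and `coalesce` names are invented here, and only the two rules from the commit message are modeled (requests merge only if contiguous on disk and the combined request stays within the 11-page ring-entry limit).

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SEGS_PER_REQ 11   /* page limit per Xen ring entry */

/* Simplified stand-in for a block request pulled off the ring. */
struct req {
    unsigned long sector;     /* starting sector on disk */
    unsigned long nsectors;   /* length in sectors */
    int nsegs;                /* memory segments backing the data */
};

/*
 * Try to fold `b` into `a`.  Succeeds only if `b` starts exactly where
 * `a` ends on disk and the combined request still fits the segment cap.
 * Returns 1 on success, 0 if the requests must stay separate.
 */
static int
req_merge(struct req *a, const struct req *b)
{
    if (a->sector + a->nsectors != b->sector)
        return 0;                         /* not contiguous on disk */
    if (a->nsegs + b->nsegs > MAX_SEGS_PER_REQ)
        return 0;                         /* would exceed the ring limit */
    a->nsectors += b->nsectors;
    a->nsegs += b->nsegs;
    return 1;
}

/*
 * Coalesce a batch of requests in place; returns the new count.
 * A 64K transfer that arrived as 44K + 20K becomes one disk I/O again.
 */
static size_t
coalesce(struct req *rq, size_t n)
{
    size_t out = 0;
    for (size_t i = 1; i < n; i++) {
        if (!req_merge(&rq[out], &rq[i]))
            rq[++out] = rq[i];            /* can't merge; start new I/O */
    }
    return n ? out + 1 : 0;
}
```

For example, a 44K request at sector 0 followed by a 20K request at sector 88 (512-byte sectors) coalesces into a single 128-sector (64K) request, so the disk sees one transaction instead of two.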
xen-public/
Makefile
bus.h
bus_private.h
cpu.h
cpufunc.h
ctrl_if.h
evtchn.h
frameasm.h
hypervisor.h
if_xennetvar.h
intr.h
intrdefs.h
isa_machdep.h
kernfs_machdep.h
pci_machdep.h
pic.h
pmap.h
segments.h
xbdvar.h
xen.h
xen_shm.h
xenfunc.h
xenio.h
xenpmap.h