Commit Graph

5 Commits

Kevin Wolf
93913dfd8a qcow2: Use Qcow2Cache in writeback mode during loadvm/savevm
In snapshotting there is no guest involved, so we can safely use writeback
mode and do the flushes in the right place (i.e. at the very end). This
reduces the time it takes to create or restore an internal snapshot with an
image in writethrough mode.
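
A minimal, self-contained sketch of the effect (not QEMU code; all names are
invented): with no guest issuing I/O during savevm/loadvm, N metadata updates
need only a single flush at the very end instead of one flush per update.

    #include <stdbool.h>
    #include <stdio.h>

    static int flushes;                     /* simulated disk flushes */

    static void update_table(bool writethrough)
    {
        /* ...modify a cached L2/refcount table in memory... */
        if (writethrough) {
            flushes++;                      /* write-through: flush per update */
        }
    }

    static void snapshot(bool writethrough, int n_tables)
    {
        for (int i = 0; i < n_tables; i++) {
            update_table(writethrough);
        }
        if (!writethrough) {
            flushes++;                      /* writeback: one flush at the end */
        }
    }

    int main(void)
    {
        snapshot(true, 100);
        printf("writethrough: %d flushes\n", flushes);  /* prints 100 */
        flushes = 0;
        snapshot(false, 100);
        printf("writeback:    %d flushes\n", flushes);  /* prints 1 */
        return 0;
    }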

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2011-07-19 15:39:22 +02:00
Jes Sorensen
bf595021c7 Reorganize struct Qcow2Cache for better struct packing
Move size after the two pointers in struct Qcow2Cache to get better
packing of struct elements on 64 bit architectures.
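
A self-contained illustration of the win on a 64-bit (LP64) target; the field
set below approximates struct Qcow2Cache of that era, but the exact members
are an assumption:

    #include <stdbool.h>
    #include <stdio.h>

    struct cache_before {
        int size;                /* 4 bytes + 4 bytes padding before pointer */
        void *entries;
        struct cache_before *depends;
        bool depends_on_flush;
        bool writethrough;       /* + 6 bytes tail padding: 32 bytes total */
    };

    struct cache_after {
        void *entries;
        struct cache_after *depends;
        int size;                /* now packs next to the bools */
        bool depends_on_flush;
        bool writethrough;       /* + 2 bytes tail padding: 24 bytes total */
    };

    int main(void)
    {
        printf("before: %zu bytes\n", sizeof(struct cache_before));  /* 32 */
        printf("after:  %zu bytes\n", sizeof(struct cache_after));   /* 24 */
        return 0;
    }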

Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2011-01-31 10:03:00 +01:00
Kevin Wolf
3de0a2944b qcow2: Batch flushes for COW
qcow2 calls bdrv_flush() after performing COW in order to ensure that the
L2 table change is never written before the copy is safe on disk. Now that the
L2 table is cached, we can defer the flush until we write out the next L2
table.
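
A self-contained sketch of the deferred-flush idea (invented names; the real
mechanism lives in qcow2-cache.c): after COW, only record that the L2 cache
owes a flush, and issue it lazily right before the next L2 writeback:

    #include <stdbool.h>
    #include <stdio.h>

    struct cache {
        bool depends_on_flush;      /* data flush owed before next writeback */
    };

    static void flush_disk(void)
    {
        printf("bdrv_flush()\n");   /* stands in for the real flush */
    }

    static void cow_done(struct cache *l2)
    {
        /* Old code: flush_disk() here, once per COW request.
         * New code: just remember the dependency. */
        l2->depends_on_flush = true;
    }

    static void write_back_l2(struct cache *l2)
    {
        if (l2->depends_on_flush) {
            flush_disk();                   /* one flush covers many COWs */
            l2->depends_on_flush = false;
        }
        printf("write L2 table to disk\n");
    }

    int main(void)
    {
        struct cache l2 = { false };
        cow_done(&l2);
        cow_done(&l2);          /* several COWs, still no flush issued */
        write_back_l2(&l2);     /* exactly one flush, then the L2 write */
        return 0;
    }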

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2011-01-24 16:41:49 +01:00
Kevin Wolf
29c1a7301a qcow2: Use QcowCache
Use the new functions of qcow2-cache.c for everything that works on refcount
block and L2 tables.
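
A toy sketch of the access discipline these functions impose (invented names,
not the actual qcow2-cache.c API): get a table, modify it in memory, mark it
dirty, put it back so the entry becomes evictable again:

    #include <stdint.h>
    #include <stdio.h>

    struct entry {
        uint64_t offset;        /* disk offset of the cached table */
        int ref;                /* references held by in-flight requests */
        int dirty;              /* must be written back before eviction */
        uint64_t table[512];
    };

    static struct entry *cache_get(struct entry *e, uint64_t offset)
    {
        e->offset = offset;     /* toy: pretend we found/loaded this table */
        e->ref++;
        return e;
    }

    static void cache_put(struct entry *e)
    {
        e->ref--;               /* entry is evictable again at ref == 0 */
    }

    int main(void)
    {
        struct entry slot = {0};
        struct entry *l2 = cache_get(&slot, 0x10000);
        l2->table[42] = 0x50000;    /* update one L2 entry in memory */
        l2->dirty = 1;              /* mark dirty instead of writing now */
        cache_put(l2);
        printf("ref=%d dirty=%d\n", slot.ref, slot.dirty);
        return 0;
    }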

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2011-01-24 16:41:49 +01:00
Kevin Wolf
493810940b qcow2: Add QcowCache
This adds some new cache functions to qcow2 which can be used for caching
refcount blocks and L2 tables. When used with cache=writethrough they work
like the old caching code which is spread all over qcow2, so in this case the
change is merely a cleanup.

The interesting case is with writeback caching (this includes cache=none) where
data isn't written to disk immediately but only kept in cache initially. This
leads to some form of metadata write batching which avoids the current "write
to refcount block, flush, write to L2 table" pattern for each single request
when a lot of cluster allocations happen. Instead, cache entries are only
written out if it's required to maintain the right order. In the pure cluster
allocation case this means that all metadata updates for requests are done in
memory initially and on sync, first the refcount blocks are written to disk,
then fsync, then L2 tables.

This improves performance of scenarios with lots of cluster allocations
noticeably (e.g. installation or after taking a snapshot).
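
A self-contained sketch of that ordering rule (invented names; qcow2 expresses
it as a dependency between the two caches): flushing the L2 cache first
flushes the refcount cache it depends on, fsyncs, and only then writes the L2
tables:

    #include <stdio.h>

    struct cache {
        const char *name;
        struct cache *depends;      /* must be flushed and fsynced first */
    };

    static void cache_flush(struct cache *c)
    {
        if (c->depends) {
            cache_flush(c->depends);    /* write refcount blocks... */
            printf("fsync\n");          /* ...make them stable on disk... */
            c->depends = NULL;
        }
        printf("write %s tables\n", c->name);  /* ...then our own tables */
    }

    int main(void)
    {
        struct cache refcounts = { "refcount", NULL };
        struct cache l2 = { "L2", &refcounts };
        cache_flush(&l2);   /* refcount blocks, fsync, then L2 tables */
        return 0;
    }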

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2011-01-24 11:08:51 +01:00