4ace32e227
356 Commits
Author | SHA1 | Message | Date
---|---|---|---
Dr. David Alan Gilbert
|
ea0c6d6239 |
test: Postcopy
This is a postcopy test (x86 only) that actually runs the guest and checks the memory contents.

The test runs from an x86 boot block with the hex embedded in the test; the source for this is:

...........
.code16
.org 0x7c00
        .file   "fill.s"
        .text
        .globl  start
        .type   start, @function
start:                  # at 0x7c00 ?
        cli
        lgdt gdtdesc
        mov $1,%eax
        mov %eax,%cr0   # Protected mode enable
        data32 ljmp $8,$0x7c20

.org 0x7c20
.code32
        # A20 enable - not sure I actually need this
        inb $0x92,%al
        or  $2,%al
        outb %al, $0x92

        # set up DS for the whole of RAM (needed on KVM)
        mov $16,%eax
        mov %eax,%ds

        mov $65,%ax
        mov $0x3f8,%dx
        outb %al,%dx

        # bl keeps a counter so we limit the output speed
        mov $0, %bl
mainloop:
        # Start from 1MB
        mov $(1024*1024),%eax
innerloop:
        incb (%eax)
        add $4096,%eax
        cmp $(100*1024*1024),%eax
        jl innerloop

        inc %bl
        jnz mainloop

        mov $66,%ax
        mov $0x3f8,%dx
        outb %al,%dx
        jmp mainloop

        # GDT magic from old (GPLv2) Grub startup.S
        .p2align 2              /* force 4-byte alignment */
gdt:
        .word 0, 0
        .byte 0, 0, 0, 0

        /* -- code segment --
         * base = 0x00000000, limit = 0xFFFFF (4 KiB Granularity), present
         * type = 32bit code execute/read, DPL = 0
         */
        .word 0xFFFF, 0
        .byte 0, 0x9A, 0xCF, 0

        /* -- data segment --
         * base = 0x00000000, limit 0xFFFFF (4 KiB Granularity), present
         * type = 32 bit data read/write, DPL = 0
         */
        .word 0xFFFF, 0
        .byte 0, 0x92, 0xCF, 0

gdtdesc:
        .word 0x27              /* limit */
        .long gdt               /* addr */

        /* I'm a bootable disk */
.org 0x7dfe
        .byte 0x55
        .byte 0xAA
...........

and that can be assembled by the following magic:

    as --32 -march=i486 fill.s -o fill.o
    objcopy -O binary fill.o fill.boot
    dd if=fill.boot of=bootsect bs=256 count=2 skip=124
    xxd -i bootsect

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Message-id: 1465816605-29488-5-git-send-email-dgilbert@redhat.com
Message-Id: <1465816605-29488-5-git-send-email-dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com> |
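For readers who would rather not decode the boot block, the following is a rough C sketch of the dirtying loop it implements. It is only an illustration, not part of the test: a heap buffer stands in for the guest RAM that the real code touches bare-metal, and stdout stands in for the serial port at 0x3f8.

    /* Illustrative C rendering of the boot block's main loop: dirty one
     * byte in every 4 KiB page between 1 MiB and 100 MiB, 256 times over,
     * printing 'A' before and 'B' after, the way the guest writes those
     * bytes to the serial port. Not part of the actual test. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define START_ADDR (1024 * 1024)        /* 1 MiB   */
    #define END_ADDR   (100 * 1024 * 1024)  /* 100 MiB */
    #define PAGE_SIZE  4096

    int main(void)
    {
        uint8_t *ram = calloc(END_ADDR, 1); /* stands in for guest RAM */
        if (!ram) {
            return EXIT_FAILURE;
        }

        putchar('A');                       /* guest: outb to port 0x3f8  */
        fflush(stdout);

        for (int pass = 0; pass < 256; pass++) {    /* until %bl wraps    */
            for (size_t addr = START_ADDR; addr < END_ADDR; addr += PAGE_SIZE) {
                ram[addr]++;                /* incb (%eax)                */
            }
        }

        putchar('B');                       /* the real guest then keeps  */
        putchar('\n');                      /* dirtying pages forever     */
        free(ram);
        return EXIT_SUCCESS;
    }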
||
Emilio G. Cota
|
896a9ee967 |
qht: add test-qht-par to invoke qht-bench from 'check' target
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-14-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net> |
||
Emilio G. Cota
|
515864a0d7 |
qht: add qht-bench, a performance benchmark
This serves as a performance benchmark as well as a stress test for QHT. We can tweak quite a number of things, including the number of resize threads and how frequently resizes are triggered.

A performance comparison of QHT vs CLHT[1] and ck_hs[2] using this same benchmark program can be found here: http://imgur.com/a/0Bms4

The tests are run on a 64-core AMD Opteron 6376, pinning threads to cores favoring same-socket cores. For each run, qht-bench is invoked with:

    $ tests/qht-bench -d $duration -n $n -u $u -g $range

where $duration is in seconds, $n is the number of threads, $u is the update rate (0.0 to 100.0), and $range is the number of keys.

Note that ck_hs's performance drops significantly as writes go up, since it requires an external lock (I used a ck_spinlock) around every write.

Also note that CLHT, instead of using a seqlock, relies on an allocator that never returns the same address during the same read-critical section. This gives it a slight performance advantage over QHT on read-heavy workloads, since the seqlock writes aren't there.

[1] CLHT: https://github.com/LPD-EPFL/CLHT
    https://infoscience.epfl.ch/record/207109/files/ascy_asplos15.pdf
[2] ck_hs: http://concurrencykit.org/
    http://backtrace.io/blog/blog/2015/03/13/workload-specialization/

A few of those plots are shown in text here, since that site might not be online forever. Throughput is on Mops/s on the Y axis.

[ASCII plots: throughput in Mops/s (Y axis) vs. number of threads, 1 to 64 (X axis), for qht, clht, and ck at 200K keys with 0%, 1%, 20%, and 100% updates.]

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-13-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net> |
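The seqlock cost mentioned above, which QHT pays on read-heavy workloads and CLHT avoids, comes from the read-side retry pattern sketched below. This is a generic illustration in the spirit of that description, not QHT's actual seqlock code; the names and layout here are made up for the example.

    /* Generic seqlock read path: readers re-run their critical section
     * whenever a writer bumped the sequence counter mid-read. */
    #include <stdatomic.h>
    #include <stdio.h>

    struct seqlock {
        atomic_uint seq;    /* odd while a writer is inside its critical section */
    };

    static unsigned read_begin(struct seqlock *sl)
    {
        unsigned seq;
        do {
            seq = atomic_load_explicit(&sl->seq, memory_order_acquire);
        } while (seq & 1);  /* writer active: wait for an even value */
        return seq;
    }

    static int read_retry(struct seqlock *sl, unsigned seq)
    {
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&sl->seq, memory_order_relaxed) != seq;
    }

    int main(void)
    {
        struct seqlock sl = { .seq = 0 };
        int a = 1, b = 2;   /* data notionally protected by the seqlock */
        int x, y;
        unsigned seq;

        do {                /* read-side retry loop */
            seq = read_begin(&sl);
            x = a;
            y = b;
        } while (read_retry(&sl, seq));

        printf("%d %d\n", x, y);
        return 0;
    }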
||
Emilio G. Cota
|
1a95404fbd |
qht: add test program
Acked-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-12-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net> |
||
Emilio G. Cota
|
ff9249b733 |
qdist: add test program
Acked-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-10-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net> |
||
Fam Zheng
|
46e7b70699 |
tests: Rename tests/Makefile to tests/Makefile.include
The file is only included from the top Makefile. Rename it to reflect this more obviously.

Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1464747811-26917-1-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |