qemu/tests/qemu-iotests/281
#!/usr/bin/env python3
# group: rw
#
# Test cases for blockdev + IOThread interactions
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
import time
import iotests
from iotests import qemu_img, QemuStorageDaemon

image_len = 64 * 1024 * 1024


# Test for RHBZ#1782175
class TestDirtyBitmapIOThread(iotests.QMPTestCase):
drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
images = { 'drive0': drive0_img }

    def setUp(self):
for name in self.images:
qemu_img('create', '-f', iotests.imgfmt,
self.images[name], str(image_len))
self.vm = iotests.VM()
self.vm.add_object('iothread,id=iothread0')
for name in self.images:
self.vm.add_blockdev('driver=file,filename=%s,node-name=file_%s'
% (self.images[name], name))
self.vm.add_blockdev('driver=qcow2,file=file_%s,node-name=%s'
% (name, name))
self.vm.launch()
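        # Move drive0 into the I/O thread. (x-blockdev-set-iothread is an
        # experimental command, hence the x- prefix; force=True allows the
        # move even while the node may be in use.)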
self.vm.qmp('x-blockdev-set-iothread',
node_name='drive0', iothread='iothread0',
force=True)

    def tearDown(self):
self.vm.shutdown()
for name in self.images:
os.remove(self.images[name])

    def test_add_dirty_bitmap(self):
result = self.vm.qmp(
'block-dirty-bitmap-add',
node='drive0',
name='bitmap1',
persistent=True,
)
self.assert_qmp(result, 'return', {})


# Test for RHBZ#1746217 & RHBZ#1773517
class TestNBDMirrorIOThread(iotests.QMPTestCase):
nbd_sock = os.path.join(iotests.sock_dir, 'nbd.sock')
drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
mirror_img = os.path.join(iotests.test_dir, 'mirror.img')
images = { 'drive0': drive0_img, 'mirror': mirror_img }

    def setUp(self):
for name in self.images:
qemu_img('create', '-f', iotests.imgfmt,
self.images[name], str(image_len))
self.vm_src = iotests.VM(path_suffix='src')
self.vm_src.add_object('iothread,id=iothread0')
self.vm_src.add_blockdev('driver=file,filename=%s,node-name=file0'
% (self.drive0_img))
self.vm_src.add_blockdev('driver=qcow2,file=file0,node-name=drive0')
self.vm_src.launch()
self.vm_src.qmp('x-blockdev-set-iothread',
node_name='drive0', iothread='iothread0',
force=True)

        self.vm_tgt = iotests.VM(path_suffix='tgt')
self.vm_tgt.add_object('iothread,id=iothread0')
self.vm_tgt.add_blockdev('driver=file,filename=%s,node-name=file0'
% (self.mirror_img))
self.vm_tgt.add_blockdev('driver=qcow2,file=file0,node-name=drive0')
self.vm_tgt.launch()
self.vm_tgt.qmp('x-blockdev-set-iothread',
node_name='drive0', iothread='iothread0',
force=True)

    def tearDown(self):
self.vm_src.shutdown()
self.vm_tgt.shutdown()
for name in self.images:
os.remove(self.images[name])

    def test_nbd_mirror(self):
result = self.vm_tgt.qmp(
'nbd-server-start',
addr={
'type': 'unix',
'data': { 'path': self.nbd_sock }
}
)
self.assert_qmp(result, 'return', {})

        result = self.vm_tgt.qmp(
'nbd-server-add',
device='drive0',
writable=True
)
self.assert_qmp(result, 'return', {})

        result = self.vm_src.qmp(
'drive-mirror',
device='drive0',
target='nbd+unix:///drive0?socket=' + self.nbd_sock,
sync='full',
mode='existing',
speed=64*1024*1024,
job_id='j1'
)
self.assert_qmp(result, 'return', {})
self.vm_src.event_wait(name="BLOCK_JOB_READY")
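        # Merely reaching BLOCK_JOB_READY without a crash or hang is the
        # success criterion here: the referenced bugs would surface before
        # the mirror job ever became ready.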


# Test for RHBZ#1779036
class TestExternalSnapshotAbort(iotests.QMPTestCase):
drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
snapshot_img = os.path.join(iotests.test_dir, 'snapshot.img')
images = { 'drive0': drive0_img, 'snapshot': snapshot_img }

    def setUp(self):
for name in self.images:
qemu_img('create', '-f', iotests.imgfmt,
self.images[name], str(image_len))
self.vm = iotests.VM()
self.vm.add_object('iothread,id=iothread0')
self.vm.add_blockdev('driver=file,filename=%s,node-name=file0'
% (self.drive0_img))
self.vm.add_blockdev('driver=qcow2,file=file0,node-name=drive0')
self.vm.launch()
self.vm.qmp('x-blockdev-set-iothread',
node_name='drive0', iothread='iothread0',
force=True)

    def tearDown(self):
self.vm.shutdown()
for name in self.images:
os.remove(self.images[name])

    def test_external_snapshot_abort(self):
        # Use a transaction with two actions, where the second one has
        # bogus values, to trigger an abort of the whole transaction.
result = self.vm.qmp('transaction', actions=[
{
'type': 'blockdev-snapshot-sync',
'data': { 'node-name': 'drive0',
'snapshot-file': self.snapshot_img,
'snapshot-node-name': 'snap1',
'mode': 'absolute-paths',
'format': 'qcow2' }
},
{
'type': 'blockdev-snapshot-sync',
'data': { 'node-name': 'drive0',
'snapshot-file': '/fakesnapshot',
'snapshot-node-name': 'snap2',
'mode': 'absolute-paths',
'format': 'qcow2' }
},
])
        # On failure, QEMU crashes; we expect this graceful error instead.
self.assert_qmp(result, 'error/class', 'GenericError')


# Test for RHBZ#1782111
class TestBlockdevBackupAbort(iotests.QMPTestCase):
drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
drive1_img = os.path.join(iotests.test_dir, 'drive1.img')
snap0_img = os.path.join(iotests.test_dir, 'snap0.img')
snap1_img = os.path.join(iotests.test_dir, 'snap1.img')
images = { 'drive0': drive0_img,
'drive1': drive1_img,
'snap0': snap0_img,
'snap1': snap1_img }

    def setUp(self):
for name in self.images:
qemu_img('create', '-f', iotests.imgfmt,
self.images[name], str(image_len))
self.vm = iotests.VM()
self.vm.add_object('iothread,id=iothread0')
self.vm.add_device('virtio-scsi,iothread=iothread0')
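        # The virtio-scsi controller is bound to iothread0, so its request
        # processing (and that of the scsi-hd devices added below) happens
        # in the I/O thread rather than in the main loop.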
for name in self.images:
self.vm.add_blockdev('driver=file,filename=%s,node-name=file_%s'
% (self.images[name], name))
self.vm.add_blockdev('driver=qcow2,file=file_%s,node-name=%s'
% (name, name))
self.vm.add_device('scsi-hd,drive=drive0')
self.vm.add_device('scsi-hd,drive=drive1')
self.vm.launch()

    def tearDown(self):
self.vm.shutdown()
for name in self.images:
os.remove(self.images[name])

    def test_blockdev_backup_abort(self):
        # Use a transaction with two actions, where the second one has
        # bogus values, to trigger an abort of the whole transaction.
result = self.vm.qmp('transaction', actions=[
{
'type': 'blockdev-backup',
'data': { 'device': 'drive0',
'target': 'snap0',
'sync': 'full',
'job-id': 'j1' }
},
{
'type': 'blockdev-backup',
'data': { 'device': 'drive1',
'target': 'snap1',
'sync': 'full' }
},
])
        # On failure, QEMU hangs; we expect this graceful error instead.
self.assert_qmp(result, 'error/class', 'GenericError')


# Test for RHBZ#2033626
class TestYieldingAndTimers(iotests.QMPTestCase):
sock = os.path.join(iotests.sock_dir, 'nbd.sock')
qsd = None

    def setUp(self):
self.create_nbd_export()

        # Simple VM with an NBD block device connected to the NBD export
# provided by the QSD
self.vm = iotests.VM()
self.vm.add_blockdev('nbd,node-name=nbd,server.type=unix,' +
f'server.path={self.sock},export=exp,' +
'reconnect-delay=1,open-timeout=1')
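        # Note: both timeouts are deliberately short (1 s) so that the 2 s
        # sleep in test_timers_with_blockdev_del() safely outlives any timer
        # that might incorrectly still be armed.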
self.vm.launch()

    def tearDown(self):
self.stop_nbd_export()
self.vm.shutdown()

    def test_timers_with_blockdev_del(self):
# The NBD BDS will have had an active open timer, because setUp() gave
# a positive value for @open-timeout. It should be gone once the BDS
# has been opened.
# (But there used to be a bug where it remained active, which will
# become important below.)

        # Stop and restart the NBD server, and do some I/O on the client to
        # trigger a reconnect and start the reconnect delay timer.
self.stop_nbd_export()
self.create_nbd_export()
result = self.vm.qmp('human-monitor-command',
command_line='qemu-io nbd "write 0 512"')
self.assert_qmp(result, 'return', '')

        # Reconnect is done, so the reconnect delay timer should be gone.
# (This is similar to how the open timer should be gone after open,
# and similarly there used to be a bug where it was not gone.)

        # Delete the BDS to see whether both timers are gone. If they are not,
# they will remain active, fire later, and then access freed data.
# (Or, with "block/nbd: Assert there are no timers when closed"
# applied, the assertions added in that patch will fail.)
result = self.vm.qmp('blockdev-del', node_name='nbd')
self.assert_qmp(result, 'return', {})

        # Give the timers some time to fire (both have a timeout of 1 s).
# (Sleeping in an iotest may ring some alarm bells, but note that if
# the timing is off here, the test will just always pass. If we kill
# the VM too early, then we just kill the timers before they can fire,
# thus not see the error, and so the test will pass.)
time.sleep(2)

    def create_nbd_export(self):
assert self.qsd is None
# Simple NBD export of a null-co BDS
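        # (null-co is a no-op block backend; read-zeroes=true makes its
        # reads deterministically return zeroes.)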
self.qsd = QemuStorageDaemon(
'--blockdev',
'null-co,node-name=null,read-zeroes=true',
'--nbd-server',
f'addr.type=unix,addr.path={self.sock}',
'--export',
'nbd,id=exp,node-name=null,name=exp,writable=true'
)

    def stop_nbd_export(self):
self.qsd.stop()
self.qsd = None


if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'],
unsupported_imgopts=['compat'])
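
# A typical invocation, from the qemu-iotests directory of a configured
# build tree (the exact path depends on your setup):
#
#   ./check -qcow2 281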