The previous commit makes JSON strings containing '%' awkward to
express in templates: you'd have to mask the '%' with a Unicode
escape \u0025. No template currently contains such JSON strings.
Support the printf conversion specification %% in JSON strings as a
convenience anyway, because it's trivially easy to do.
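For illustration (hypothetical template, not taken from the tree), a
literal '%' in a JSON string can now be written as '%%':

    /* '%%' inside the JSON string yields a single literal '%' */
    QObject *obj = qobject_from_jsonf_nofail("{ 'label': '50%% done' }");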
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-58-armbru@redhat.com>
The JSON parser optionally supports interpolation. This is used to
build QObjects by parsing string templates. The templates are C
literals, so parse errors (such as invalid interpolation
specifications) are actually programming errors. Consequently, the
functions providing parsing with interpolation
(qobject_from_jsonf_nofail(), qobject_from_vjsonf_nofail(),
qdict_from_jsonf_nofail(), qdict_from_vjsonf_nofail()) pass
&error_abort to the parser.
However, there's another, more dangerous kind of programming error:
since we use va_arg() to get the value to interpolate, behavior is
undefined when the variable argument isn't consistent with the
interpolation specification.
The same problem exists with printf()-like functions, and the solution
is to have the compiler check consistency. This is what
GCC_FMT_ATTR() is about.
To enable this type checking for interpolation as well, we carefully
chose our interpolation specifications to match printf conversion
specifications, and decorate functions parsing templates with
GCC_FMT_ATTR().
Note that this only protects against undefined behavior due to type
errors. It can't protect against use of invalid interpolation
specifications that happen to be valid printf conversion
specifications.
However, there's still a gaping hole in the type checking: GCC
recognizes '%' as the start of a printf conversion specification anywhere in
the template, but the parser recognizes it only outside JSON strings.
For instance, if someone were to pass a "{ '%s': %d }" template, GCC
would require a char * and an int argument, but the parser would
va_arg() only an int argument, resulting in undefined behavior.
Avoid undefined behavior by catching the programming error at run
time: have the parser recognize and reject '%' in JSON strings.
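For illustration, a hypothetical call that falls into this trap might
look like the sketch below.  GCC's format check is satisfied by the
char * and int arguments, but the parser would va_arg() an int where
the char * sits, which is undefined behavior.  With this commit, the
'%s' inside the JSON string is instead rejected at run time:

    /* hypothetical: '%s' sits inside a JSON string, '%d' outside */
    QObject *obj = qobject_from_jsonf_nofail("{ '%s': %d }", "key", 42);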
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-57-armbru@redhat.com>
test_after_failed_device_add() does this:
    response = qmp("{'execute': 'device_add',"
                   " 'arguments': {"
                   "   'driver': 'virtio-blk-%s',"
                   "   'drive': 'drive0'"
                   "}}", qvirtio_get_dev_type());
Wrong. An interpolation specification must be a JSON token; it
doesn't work within JSON string tokens. The code above doesn't use
the value of qvirtio_get_dev_type(), and sends arguments
{"driver": "virtio-blk-%s", "drive": "drive0"}}
The command fails because there is no driver named "virtio-blk-%s".
Harmless, since the test wants the command to fail. Screwed up in
commit 2f84a92ec6.
Fix the obvious way. The command now fails because the drive is
empty, like it did before commit 2f84a92ec6.
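A plausible shape of the fix (sketch only; the exact code is in the
patch, and it merely assumes qvirtio_get_dev_type() returns the
device-type string) is to build the driver name in C and interpolate
it as a complete JSON value:

    char *driver = g_strdup_printf("virtio-blk-%s", qvirtio_get_dev_type());

    response = qmp("{'execute': 'device_add',"
                   " 'arguments': {'driver': %s, 'drive': 'drive0'}}",
                   driver);
    g_free(driver);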
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-55-armbru@redhat.com>
The JSON parser has three public headers, json-lexer.h, json-parser.h,
json-streamer.h. They all contain stuff that is of no interest
outside qobject/json-*.c.
Collect the public interface in include/qapi/qmp/json-parser.h, and
everything else in qobject/json-parser-int.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-54-armbru@redhat.com>
The last case where qobject_from_json() & friends return null without
setting an error is empty or blank input. Callers:
* block.c's parse_json_protocol() reports "Could not parse the JSON
options". It's marked as a work-around, because it also covered
actual bugs, but they got fixed in the previous few commits.
* qobject_input_visitor_new_str() reports "JSON parse error". Also
marked as work-around. The recent fixes have made this unreachable,
because it currently gets called only for input starting with '{'.
* check-qjson.c's empty_input() and blank_input() demonstrate the
behavior.
* The other callers are not affected since they only pass input with
exactly one JSON value or, in the case of negative tests, one error.
Fail with "Expecting a JSON value" instead of returning null, and
simplify callers.
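For illustration (hypothetical caller), empty input now behaves like
this:

    Error *err = NULL;
    QObject *obj = qobject_from_json("", &err);
    /* obj is null and err is set to "Expecting a JSON value" */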
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-48-armbru@redhat.com>
json_message_process_token() accumulates tokens until it has the
sequence of tokens that make up a single JSON value (it counts curly
braces and square brackets to decide). It feeds those token sequences
to json_parser_parse(). If a non-empty sequence of tokens remains at
the end of the parse, it's silently ignored. check-qjson.c cases
unterminated_array(), unterminated_array_comma(), unterminated_dict(),
unterminated_dict_comma() demonstrate this bug.
Fix as follows. Introduce a JSON_END_OF_INPUT token. When the
streamer receives it, it feeds the accumulated tokens to
json_parser_parse().
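For illustration (hypothetical input), an unterminated value is no
longer dropped silently:

    Error *err = NULL;
    QObject *obj = qobject_from_json("[1, 2", &err);
    /* The accumulated tokens now reach the parser via JSON_END_OF_INPUT,
     * so obj is null and err reports a parse error instead of nothing. */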
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-46-armbru@redhat.com>
qobject_from_json() & friends use the consume_json() callback to
receive either a value or an error from the parser.
When they are fed a string that contains more than one JSON value
or JSON syntax error, consume_json() gets called multiple
times.
When the last call receives a value, qobject_from_json() returns that
value. Any other values are leaked.
When any call receives an error, qobject_from_json() sets the first
error received. Any other errors are thrown away.
When values follow errors, qobject_from_json() returns both a value
and sets an error. That's bad. Impact:
* block.c's parse_json_protocol() ignores and leaks the value. It's
used to parse pseudo-filenames starting with "json:". The
pseudo-filenames can come from the user or from image meta-data such
as a QCOW2 image's backing file name.
* vl.c's parse_display_qapi() ignores and leaks the error. It's used
to parse the argument of command line option -display.
* vl.c's main() case QEMU_OPTION_blockdev ignores the error and leaves
it in @err. main() will then pass a pointer to a non-null Error *
to net_init_clients(), which is forbidden. It can lead to assertion
failure or other misbehavior.
* check-qjson.c's multiple_values() demonstrates the badness.
* The other callers are not affected since they only pass strings with
exactly one JSON value or, in the case of negative tests, one
error.
The impact on the _nofail() functions is relatively harmless. They
abort when any call receives an error. Else they return the last
value, and leak the others, if any.
Fix consume_json() as follows. On the first call, save value and
error as before. On subsequent calls, if any, don't save them. If
the first call saved a value, the next call, if any, replaces the
value by an "Expecting at most one JSON value" error. Take care not
to leak values or errors that aren't saved.
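For illustration (hypothetical input):

    Error *err = NULL;
    QObject *obj = qobject_from_json("1 2", &err);
    /* The second value now turns the result into an "Expecting at most
     * one JSON value" error; obj is null, err is set, nothing leaks. */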
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-44-armbru@redhat.com>
Support for %I64d got added in commit 2c0d4b36e7 "json: fix PRId64 on
Win32". We had to hard-code I64d because we used the lexer's finite
state machine to check interpolations. No more, so clean this up.
Additional conversion specifications would be easy enough to implement
when needed.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-42-armbru@redhat.com>
Both lexer and parser reject invalid interpolation specifications.
The parser's check is useless.
The lexer ends the token right after the first bad character. This
tends to lead to suboptimal error reporting. For instance, input
[ %04d ]
produces the tokens
JSON_LSQUARE [
JSON_ERROR %0
JSON_INTEGER 4
JSON_KEYWORD d
JSON_RSQUARE ]
The parser then yields an error, an object and two more errors:
error: Invalid JSON syntax
object: 4
error: JSON parse error, invalid keyword
error: JSON parse error, expecting value
Dumb down the lexer to accept [A-Za-z0-9]*. The parser's check is now
used. Emit a proper error there.
The lexer now produces
JSON_LSQUARE [
JSON_INTERP %04d
JSON_RSQUARE ]
and the parser reports just
JSON parse error, invalid interpolation '%04d'
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-41-armbru@redhat.com>
The callback to consume JSON values takes QObject *json, Error *err.
If both are null, the callback is supposed to make up an error by
itself. This sucks.
qjson.c's consume_json() neglects to do so, which makes
qobject_from_json() return null without setting an error. I consider
that a bug.
The culprit is json_message_process_token(): it passes two null
pointers when it runs into a lexical error or a limit violation. Fix
it to pass a proper Error object then. Update the callbacks:
* monitor.c's handle_qmp_command(): the code to make up an error is
now dead, drop it.
* qga/main.c's process_event(): lumps the "both null" case together
with the "not a JSON object" case. The former is now gone. The
error message "Invalid JSON syntax" is misleading for the latter.
Improve it to "Input must be a JSON object".
* qobject/qjson.c's consume_json(): no update; check-qjson
demonstrates qobject_from_json() now sets an error on lexical
errors, but still doesn't on some other errors.
* tests/libqtest.c's qmp_response(): the Error object is now reliable,
so use it to improve the error message.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-40-armbru@redhat.com>
The JSON parser optionally supports interpolation. The lexer
recognizes interpolation tokens unconditionally. The parser rejects
them when interpolation is disabled, in parse_interpolation().
However, it neglects to set an error then, which can make
json_parser_parse() fail without setting an error.
Move the check for unwanted interpolation from the parser's
parse_interpolation() into the lexer's finite state machine. When
interpolation is disabled, '%' is now handled like any other
unexpected character.
The next commit will improve how such lexical errors are handled.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-39-armbru@redhat.com>
The classical way to structure parser and lexer is to have the client
call the parser to get an abstract syntax tree, the parser call the
lexer to get the next token, and the lexer call some function to get
input characters.
Another way to structure them would be to have the client feed
characters to the lexer, the lexer feed tokens to the parser, and the
parser feed abstract syntax trees to some callback provided by the
client. This way is more easily integrated into an event loop that
dispatches input characters as they arrive.
Our JSON parser is kind of between the two. The lexer feeds tokens to
a "streamer" instead of a real parser. The streamer accumulates
tokens until it has the sequence of tokens that make up a single JSON
value (it counts curly braces and square brackets to decide). It
feeds those token sequences to a callback provided by the client. The
callback passes each token sequence to the parser, and gets back an
abstract syntax tree.
I figure it was done that way to make a straightforward recursive
descent parser possible. "Get next token" becomes "pop the first
token off the token sequence". Drawback: we need to store a complete
token sequence. Each token eats 13 bytes plus its input characters
plus malloc overhead.
Observations:
1. This is not the only way to use recursive descent. If we replaced
"get next token" by a coroutine yield, we could do without a
streamer.
2. The lexer reports errors by passing a JSON_ERROR token to the
streamer. This communicates the offending input characters and
their location, but no more.
3. The streamer reports errors by passing a null token sequence to the
callback. The (already poor) lexical error information is thrown
away.
4. Having the callback receive a token sequence duplicates the code to
convert token sequence to abstract syntax tree in every callback.
5. Known bug: the streamer silently drops incomplete token sequences.
This commit rectifies 4. by lifting the call of the parser from the
callbacks into the streamer. Later commits will address 3. and 5.
The lifting removes a bug from qjson.c's parse_json(): it passed a
pointer to a non-null Error * in certain cases, as demonstrated by
check-qjson.c.
json_parser_parse() is now unused. It's a stupid wrapper around
json_parser_parse_err(). Drop it, and rename json_parser_parse_err()
to json_parser_parse().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-35-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-31-armbru@redhat.com>
The JSON parser treats each half of a surrogate pair as an unpaired
surrogate. Fix it to recognize surrogate pairs.
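For example (hypothetical input), the pair \uD834\uDD1E encodes
U+1D11E and should now decode to its UTF-8 encoding:

    Error *err = NULL;
    QObject *obj = qobject_from_json("\"\\uD834\\uDD1E\"", &err);
    /* With the fix, the resulting string is "\xF0\x9D\x84\x9E"
     * (U+1D11E) instead of garbage for each surrogate half. */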
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-30-armbru@redhat.com>
The JSON parser translates invalid \uXXXX to garbage instead of
rejecting it, and swallows \u0000.
Fix by using mod_utf8_encode() instead of flawed wchar_to_utf8().
Valid surrogate pairs are now differently broken: they're rejected
instead of translated to garbage. The next commit will fix them.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-29-armbru@redhat.com>
Since the JSON grammar doesn't accept U+0000 anywhere, this merely
exchanges one kind of parse error for another. It's purely for
consistency with qobject_to_json(), which accepts \xC0\x80 (see commit
e2ec3f9768).
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-26-armbru@redhat.com>
We reject bytes that can't occur in valid UTF-8 (\xC0..\xC1,
\xF5..\xFF) in the lexer. That's insufficient; there's plenty of
invalid UTF-8 not containing these bytes, as demonstrated by
check-qjson:
* Malformed sequences
- Unexpected continuation bytes
- Missing continuation bytes after start bytes other than
\xC0..\xC1, \xF5..\xFD.
* Overlong sequences with start bytes other than \xC0..\xC1,
\xF5..\xFD.
* Invalid code points
Fixing this in the lexer would be bothersome. Fixing it in the parser
is straightforward, so do that.
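For illustration (hypothetical input), a start byte with its
continuation byte missing now fails in the parser:

    Error *err = NULL;
    /* \xC2 starts a two-byte sequence, but the closing quote follows */
    QObject *obj = qobject_from_json("\"\xC2\"", &err);
    /* obj is null and err is set */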
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-23-armbru@redhat.com>
The JSON parser rejects some invalid sequences, but accepts others
without correcting the problem.
We should either reject all invalid sequences, or minimize overlong
sequences and replace all other invalid sequences by a suitable
replacement character. A common choice for replacement is U+FFFD.
I'm going to implement the former. Update the comments in
utf8_string() to expect this.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-22-armbru@redhat.com>
Fix the lexer to reject unescaped control characters in JSON strings,
in accordance with RFC 8259 "The JavaScript Object Notation (JSON)
Data Interchange Format".
Bonus: we now recover more nicely from unclosed strings. E.g.
{"one: 1}\n{"two": 2}
now recovers cleanly after the newline, where before the lexer
remained confused until the next unpaired double quote or lexical
error.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-19-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-17-armbru@redhat.com>
RFC 8259 "The JavaScript Object Notation (JSON) Data Interchange
Format" requires control characters in strings to be escaped.
Demonstrate the JSON parser accepts U+0001 .. U+001F unescaped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-16-armbru@redhat.com>
Some of utf8_string()'s test_cases[] contain multiple invalid
sequences. Testing that qobject_from_json() fails only tests that we
reject at least one invalid sequence. That's incomplete.
Additionally test each non-space sequence in isolation.
This demonstrates that the JSON parser accepts invalid sequences
starting with \xC2..\xF4. Add a FIXME comment.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-15-armbru@redhat.com>
The previous commit made utf8_string()'s test_cases[].utf8_in
superfluous: we can use .json_in instead. Except for the case testing
U+0000. \x00 doesn't work in C strings, so it tests \\u0000 instead.
But testing \\uXXXX is escaped_string()'s job. It's covered there.
Test U+0001 here, and drop .utf8_in.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-14-armbru@redhat.com>
utf8_string() tests only double quoted strings. Cover single quoted
strings, too: store the strings to test without quotes, then wrap them
in either kind of quote.
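A sketch of the wrapping (the helper code and the field name .json_in
are assumptions; the real test may differ):

    /* wrap the bare test string in double and in single quotes */
    char *json_dq = g_strdup_printf("\"%s\"", test_cases[i].json_in);
    char *json_sq = g_strdup_printf("'%s'", test_cases[i].json_in);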
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-13-armbru@redhat.com>
simple_string() and single_quote_string() have become redundant with
escaped_string(), except for embedded single and double quotes.
Replace them by a test that covers just that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-12-armbru@redhat.com>
Cover escaped single quote, surrogates, invalid escapes, and
noncharacters. This demonstrates that valid surrogate pairs are
misinterpreted, and invalid surrogates and noncharacters aren't
rejected.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-11-armbru@redhat.com>
Merge a few closely related test strings, and drop a few redundant
ones.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-10-armbru@redhat.com>
escaped_string() first tests double quoted strings, then repeats a few
tests with single quotes. Repeat all of them: store the strings to
test without quotes, and wrap them in either kind of quote for
testing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-9-armbru@redhat.com>
To permit recovering from arbitrary JSON parse errors, the JSON parser
resets itself on lexical errors. We recommend sending a 0xff byte for
that purpose, and test-qga covers this usage since commit 5229564b83.
That commit had to add an ugly hack to qmp_fd_vsend() to make it capable
of sending this byte (it's designed to send only valid JSON).
The previous commit added a way to send arbitrary text. Put that to
use for this purpose, and drop the hack from qmp_fd_vsend().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-8-armbru@redhat.com>
qmp-test neglects to cover QMP input that isn't valid JSON. libqtest
doesn't let us send such input. Add qtest_qmp_send_raw() for this
purpose, and put it to use in qmp-test.
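A hypothetical use in qmp-test (variable names are assumptions):

    /* qts is an open QTestState; send bytes that aren't valid JSON */
    qtest_qmp_send_raw(qts, "this is not json\n");
    rsp = qtest_qmp_receive(qts);
    g_assert(qdict_haskey(rsp, "error"));
    qobject_unref(rsp);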
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-7-armbru@redhat.com>
[Commit message typo fixed]
qmp-test is for QMP protocol tests. Commit e4a426e75e added generic,
basic tests of query commands to it. Move them to their own test
program qmp-cmd-test, to keep qmp-test focused on the protocol.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-6-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-5-armbru@redhat.com>
qobject_from_json() can return null without setting an error on
lexical errors. I call that a bug. Add test coverage to demonstrate
it.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-4-armbru@redhat.com>
qobject_from_json() & friends misbehave when the JSON text has more
than one JSON value. Add test coverage to demonstrate the bugs.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-3-armbru@redhat.com>
Merge remote-tracking branch 'remotes/juanquintela/tags/check/20180822' into staging
check/next for 20180822
# gpg: Signature made Wed 22 Aug 2018 09:03:40 BST
# gpg: using RSA key F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
# gpg: aka "Juan Quintela <quintela@trasno.org>"
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/juanquintela/tags/check/20180822:
check: Only test tpm devices when they are compiled in
check: Only test usb-ehci when it is compiled in
check: Only test usb-uhci devices when they are compiled in
check: Only test usb-ohci when it is compiled in
check: Only test nvme when it is compiled in
check: Only test pvpanic when it is compiled in
check: Only test wdt_ib700 when it is compiled in
check: Only test sdhci when it is compiled in
check: Only test i82801b11 when it is compiled in
check: Only test ioh3420 when it is compiled in
check: Only test ipack when it is compiled in
check: Only test hda when it is compiled in
check: Only test ac97 when it is compiled in
check: Only test es1370 when it is compiled in
check: Only test rtl8139 when it is compiled in
check: Only test pcnet when it is compiled in
check: Only test eepro100 when it is compiled in
check: Only test ne2000 when it is compiled in
check: Only test vmxnet3 when it is compiled in
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The VM tests currently have a timeout of 2 minutes for trying
to connect to ssh. Since the guest VM has to boot from cold
to the point of accepting inbound ssh during this time, if the
host machine is heavily loaded it can spuriously time out.
Increase the timeout from 2 to 5 minutes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Fam Zheng <famz@redhat.com>
Message-id: 20180823112153.15279-1-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* x86 TCG fixes for 64-bit call gates (Andrew)
* qemu-guest-agent freeze-hook tweak (Christian)
* pm_smbus improvements (Corey)
* Move validation to pre_plug for pc-dimm (David)
* Fix memory leaks (Eduardo, Marc-André)
* synchronization profiler (Emilio)
* Convert the CPU list to RCU (Emilio)
* LSI support for PPR Extended Message (George)
* vhost-scsi support for protection information (Greg)
* Mark mptsas as a storage device in the help (Guenter)
* checkpatch tweak cherry-picked from Linux (me)
* Typos, cleanups and dead-code removal (Julia, Marc-André)
* qemu-pr-helper support for old libmultipath (Murilo)
* Annotate fallthroughs (me)
* MemoryRegionOps cleanup (me, Peter)
* Make s390 qtests independent from libqos, which doesn't actually support it (me)
* Make cpu_get_ticks independent from BQL (me)
* Introspection fixes (Thomas)
* Support QEMU_MODULE_DIR environment variable (ryang)
# gpg: Signature made Thu 23 Aug 2018 17:46:30 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
KVM: cleanup unnecessary #ifdef KVM_CAP_...
target/i386: update MPX flags when CPL changes
i2c: pm_smbus: Add the ability to force block transfer enable
i2c: pm_smbus: Don't delay host status register busy bit when interrupts are enabled
i2c: pm_smbus: Add interrupt handling
i2c: pm_smbus: Add block transfer capability
i2c: pm_smbus: Make the I2C block read command read-only
i2c: pm_smbus: Fix the semantics of block I2C transfers
i2c: pm_smbus: Clean up some style issues
pc-dimm: assign and verify the "addr" property during pre_plug
pc: drop memory region alignment check for 0
util/oslib-win32: indicate alignment for qemu_anon_ram_alloc()
pc-dimm: assign and verify the "slot" property during pre_plug
ipmi: Use proper struct reference for BT vmstate
vhost-scsi: expose 't10_pi' property for VIRTIO_SCSI_F_T10_PI
vhost-scsi: unify vhost-scsi get_features implementations
vhost-user-scsi: move host_features into VHostSCSICommon
cpus: allow cpu_get_ticks out of BQL
cpus: protect TimerState writes with a spinlock
seqlock: add QemuLockable support
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
So that we can test other implementations.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-8-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of declaring it volatile.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-6-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The check should be unnecessary since commit
e7b3af8159 "glib: bump min required glib
library version to 2.40".
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180730153639.26466-1-marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When used together with -m, this allows us to benchmark the
profiler's performance impact on qemu_mutex_lock.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Certain device introspection crashes used to only happen if you were
using a certain machine, e.g. if the machine was using serial_hd() or
nd_table[], and a device was trying to use these in its instance_init
function, too.
To be able to catch these problems, let's extend the device-introspect
test to check the devices on all machine types, with and without the
"-nodefaults" parameter (since this makes a difference sometimes, too).
Since this is a rather slow operation, and most of the problems are
already handled by testing with the "none" machine only, the test with
all machines is only run in the "make check SPEED=slow" mode.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-8-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introspection should not change the qom-tree / qtree, so we should check
this in the device-introspect-test, too. This patch helped to find lots
of introspection bugs during the QEMU v3.0 soft/hard-freeze period in the
last two months.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-7-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The tests that check something for all machine types currently spend
a lot of time checking old machine types (like "pc-i440fx-2.0" for
example). The chances that we find something new there in addition
to checking the latest version of a machine type are pretty low, so
we should not waste the time of the developers by testing this again
and again in the "quick" testing mode.
Thus let's add some code to determine whether we are testing a current
machine type or an old one, and only test the old types if we are
running in "SPEED=slow" mode.
This decreases the testing time quite a bit now, e.g. the qom-test
now finishes within 4 seconds for qemu-system-x86_64 instead of 30
seconds when testing all machines.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-6-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When running "make check" on a non-POWER host, the output is quite
distorted like this:
[...]
GTESTER check-qtest-nios2
GTESTER check-qtest-or1k
GTESTER check-qtest-ppc64
Skipping test: kvm_hv not available Skipping test: kvm_hv not available Skipping test: kvm_hv not available Skipping test: kvm_hv not available GTESTER check-qtest-ppcemb
GTESTER check-qtest-ppc
GTESTER check-qtest-riscv32
GTESTER check-qtest-riscv64
[...]
Move the check to the beginning of the main function instead, so that
we do not have to test the condition again and again for each test,
and better use g_test_message() instead of g_print() here, like it is
also done in ufd_version_check() already.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-2-git-send-email-thuth@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because qtest does not support s390 channel I/O, s390 only performs smoke tests on
those few devices that do not have any functional tests. Therefore, every time we
add functional tests for a virtio device, the choice is between removing
those tests from the s390 suite (so that s390 actually _loses_ coverage)
or sprinkling the test with architecture checks.
This patch simply creates a ccw-specific test that only performs smoke tests on
all virtio-ccw devices. If channel I/O support is ever added to qtest and libqos,
then this file can go away. In the meanwhile, it simplifies maintenance and
makes sure that all virtio devices are tested.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The ehci test also tests uhci. Welcome to the wonderful world of USB.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
It was not possible to compile out pvpanic. Use the same trick
as applesmc.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
test-filter-redirector uses rtl8139 in everything except s390.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
- prep machine is a fictional machine, so it has no specifications. Which
devices can be changed/added/removed without impact? Are interrupts
correctly mapped?
- prep firmware (OHW) has support only for IDE drives (no SCSI).
Booting from IDE was broken approximately 3 years ago, and nobody complained.
- OHW is limited on IDE boot to a specific set of OS loaders.
These operating systems are of the 2004 time frame.
- OHW can use -kernel. Linux kernel freezes a long time after PS/2 mouse
detection, and then screen becomes garbage. This was already broken in
QEMU v2.7, 2 years ago, and nobody complained.
On the other side:
- 40p is a real machine, so emulation can be checked against
hardware specifications
- OpenBIOS has support for SCSI block devices, including 40p LSI adapter
- OpenBIOS can start almost all Linux kernels (including recent ones)
and recent operating systems (like NetBSD 7.1.2)
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
[dwg: Drop prep from boot-serial test to avoid deprecation warnings]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When we do a build inside one of the BSD VMs, first
delete any stale old build directories from the VM's
/var/tmp. This prevents the VM from running out of
disk space after it has been used for a dozen or
so builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180820124811.7982-1-peter.maydell@linaro.org
On a SPARC host that I'm using as a build test machine, the
boot-serial-test for the SPARC guest machines takes about 65
seconds to execute. This means that it hits the current
60 second timer on these tests. Push the timeout up so
that it doesn't trigger spuriously on slow hosts like this one.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20180817161404.9420-1-peter.maydell@linaro.org
The 'test.hex' file is a memory test pattern stored in Hexadecimal Object
Format. It loads at 0x10000 in RAM and contains values from 0 through
255.
The test case verifies that the expected memory test pattern was loaded.
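A minimal sketch of the verification (the qtest accessor is real, the
loop itself is an assumption about the test's shape):

    /* the pattern is the bytes 0..255 starting at 0x10000 */
    for (int i = 0; i < 256; i++) {
        g_assert_cmpuint(qtest_readb(qts, 0x10000 + i), ==, i);
    }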
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Suggested-by: Steffen Gortz <qemu.ml@steffen-goertz.de>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Su Hang <suhang16@mails.ucas.ac.cn>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: changed qtest_startf() to qtest_initf() to work with
current master after the refactoring in commit 88b988c895]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/armbru/tags/pull-tests-2018-08-16' into staging
Testing patches for 2018-08-16
# gpg: Signature made Thu 16 Aug 2018 09:34:43 BST
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-tests-2018-08-16: (25 commits)
libqtest: Improve error reporting for bad read from QEMU
tests/libqtest: Improve kill_qemu()
libqtest: Rename qtest_FOOv() to qtest_vFOO() for consistency
libqtest: Replace qtest_startf() by qtest_initf()
libqtest: Enable compile-time format string checking
migration-test: Clean up string interpolation into QMP, part 3
migration-test: Clean up string interpolation into QMP, part 2
migration-test: Clean up string interpolation into QMP, part 1
migration-test: Make wait_command() cope with '%'
tests: New helper qtest_qmp_receive_success()
migration-test: Make wait_command() return the "return" member
tests: Clean up string interpolation around qtest_qmp_device_add()
cpu-plug-test: Don't pass integers as strings to device_add
tests: Clean up string interpolation into QMP input (simple cases)
tests: Pass literal format strings directly to qmp_FOO()
qobject: qobject_from_jsonv() is dangerous, hide it away
test-qobject-input-visitor: Avoid format string ambiguity
libqtest: Simplify qmp_fd_vsend() a bit
qobject: New qobject_from_vjsonf_nofail(), qdict_from_vjsonf_nofail()
qobject: Replace qobject_from_jsonf() by qobject_from_jsonf_nofail()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When read() from the qtest socket or the QMP socket fails or EOFs, we
report "Broken pipe" and exit(1). This commonly happens when QEMU
crashes. It also happens when QEMU refuses to run because the test
passed it bad arguments. Sadly, we neglect to report either.
Improve this by calling abort() instead of exit(1), so kill_qemu()
runs, and reports how QEMU died. This improves error reporting to
something like
/x86_64/device/introspect/list: Broken pipe
tests/libqtest.c:129: kill_qemu() detected QEMU death from signal 6 (Aborted) (dumped core)
Three exit() remain in libqtest.c:
* In qmp_response(), when we can't parse a QMP reply read from the QMP
socket. Change to abort() for consistency.
* In qtest_qemu_binary(), when QTEST_QEMU_BINARY isn't in the
environment. This can only happen before we start QEMU. Leave
alone.
* In qtest_init_without_qmp_handshake(), when the fork()ed child fails
to execlp(). Leave alone.
The exit() calls elsewhere are unlikely to be due to QEMU dying on us. If that should
turn out to be wrong, we can move kill_qemu() from @abrt_hooks to
atexit() or something.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180815141945.10457-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[Commit message tweaked slightly]
In kill_qemu() we have an assert that checks that the QEMU process
didn't dump core:
assert(!WCOREDUMP(wstatus));
Unfortunately the WCOREDUMP macro here means the resulting message
is not very easy to comprehend on at least some systems:
ahci-test: tests/libqtest.c:113: kill_qemu: Assertion `!(((__extension__ (((union { __typeof(wstatus) __in; int __i; }) { .__in = (wstatus) }).__i))) & 0x80)' failed.
and it doesn't identify what signal the process took. What's more,
WCOREDUMP is not reliable - in some cases, setrlimit() coupled with
kernel dump settings can result in the flag not being set. It's
better to log ALL death by signal, instead of caring whether a core
dump was attempted (although once we know a signal happened, also
mentioning if a core dump is present can be helpful).
Furthermore, we are NOT detecting EINTR (while EINTR shouldn't be
happening if we didn't install signal handlers, it's still better
to always be robust).
Finally, even non-signal death with a non-zero status is suspicious,
since qemu's SIGINT handler is supposed to result in exit(0).
Instead of using a raw assert, print the information in an
easier to understand way:
/i386/ahci/sanity: tests/libqtest.c:129: kill_qemu() detected QEMU death from signal 11 (Segmentation fault) (core dumped)
(Of course, the really useful information would be why the QEMU
process dumped core in the first place, but we don't have that
by the time the test program has picked up the exit status.)
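A sketch of the reporting logic, using the POSIX wait macros (the code
actually committed may differ in detail):

    if (WIFSIGNALED(wstatus)) {
        int sig = WTERMSIG(wstatus);

        fprintf(stderr, "%s:%d: kill_qemu() detected QEMU death "
                "from signal %d (%s)%s\n",
                __FILE__, __LINE__, sig, strsignal(sig),
                WCOREDUMP(wstatus) ? " (core dumped)" : "");
        abort();
    }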
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180810132800.38549-1-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Core dump reporting and commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13 of 13 C99 library function pairs taking ... or a va_list parameter
are called FOO() and vFOO(). In QEMU, we sometimes call the one
taking a va_list FOOv() instead. Bad taste. libqtest.h uses both
spellings. Normalize it to the standard spelling.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-24-armbru@redhat.com>
qtest_init() creates a new QTestState, and leaves @global_qtest alone.
qtest_start() additionally assigns it to @global_qtest, but
qtest_startf() additionally assigns NULL to @global_qtest. This makes
no sense. Replace it by qtest_initf() that works like qtest_init(),
i.e. leaves @global_qtest alone.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-23-armbru@redhat.com>
qtest_qmp() & friends pass their format string and variable arguments
to qobject_from_vjsonf_nofail(). Unlike qobject_from_jsonv(), they
aren't decorated with GCC_FMT_ATTR(). Fix that to get compile-time
format string checking.
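Concretely, the declarations gain an attribute along these lines
(sketch; the argument indexes depend on the exact prototype):

    QDict *qtest_qmp(QTestState *s, const char *fmt, ...)
        GCC_FMT_ATTR(2, 3);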
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-22-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the recent commit "tests: Clean up
string interpolation into QMP input (simple cases)".
migration-test.c interpolates strings into JSON in a few places:
* migrate_set_parameter() interpolates string parameter @value as a
JSON number. Change it to long long. This requires changing
migrate_check_parameter() similarly.
* migrate_set_capability() interpolates string parameter @value as a
JSON boolean. Change it to bool.
* deprecated_set_speed() interpolates string parameter @value as a
JSON number. Change it to long long.
Bonus: gets rid of non-literal format strings. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-21-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the recent commit "tests: Clean up
string interpolation into QMP input (simple cases)".
migrate() interpolates members into a JSON object. Change it to take
its extra QMP arguments as arguments for qdict_from_jsonf_nofail()
instead of a string containing JSON members.
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-20-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the recent commit "tests: Clean up
string interpolation into QMP input (simple cases)".
migrate_recover() builds QMP input manually because wait_command()
can't interpolate. Well, it can since the previous commit. Simplify
accordingly.
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-19-armbru@redhat.com>
wait_command() passes its argument @command to qtest_qmp_send().
Falls apart if @command contains '%'. Two ways to disarm this trap:
suppress interpretation of '%' by passing @command as argument to
format string "%s", or fix it by having wait_command() take the
variable arguments to go with @command. Do the latter.
This is another step towards compile-time format string checking
without triggering -Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-18-armbru@redhat.com>
Commit b21373d071 copied wait_command() from tests/migration-test.c
to tests/tpm-util.c. Replace both copies by new libqtest helper
qtest_qmp_receive_success(). Also use it to simplify
qtest_qmp_device_del().
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Thomas Huth <thuth@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-17-armbru@redhat.com>
All callers of wait_command() are only interested in the success
response's "return" member. Lift its extraction into wait_command().
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-16-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the commit before previous.
qtest_qmp_device_add() and its wrappers interpolate into JSON as
follows:
* qtest_qmp_device_add() interpolates members into a JSON object.
* So do its wrappers qpci_plug_device_test() and usb_test_hotplug().
* usb_test_hotplug() additionally interpolates strings and numbers
into JSON strings.
Clean them up:
* Have qtest_qmp_device_add() take its extra device properties as
arguments for qdict_from_jsonf_nofail() instead of a string
containing JSON members.
* Drop qpci_plug_device_test(), use qtest_qmp_device_add()
directly.
* Change usb_test_hotplug() parameter @port to string, to avoid
interpolation. Interpolate @hcd_id separately.
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-15-armbru@redhat.com>
test_plug_with_device_add_x86() plugs Haswell-i386-cpu and
Haswell-x86_64-cpu with device_add. It passes socket-id, core-id,
thread-id as JSON strings. The properties are actually integers.
test_plug_with_device_add_coreid() plugs power8_v2.0-spapr-cpu-core
and qemu-s390x-cpu with device_add. It passes core-id as JSON string.
The properties are actually integers.
Passing JSON string values to integer properties works only due to
device_add implementation accidents. Fix the test to pass JSON
numbers. While there, use %u rather than %i with unsigned int.
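A sketch of the cleaned-up call (the variables and the 'id' value are
assumptions; the actual test code may differ):

    response = qmp("{'execute': 'device_add',"
                   " 'arguments': {'driver': %s, 'id': 'cpu-1',"
                   "  'socket-id': %u, 'core-id': %u, 'thread-id': %u}}",
                   driver, socket, core, thread);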
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-14-armbru@redhat.com>
When you build QMP input manually like this
    cmd = g_strdup_printf("{ 'execute': 'migrate',"
                          "'arguments': { 'uri': '%s' } }",
                          uri);
    rsp = qmp(cmd);
    g_free(cmd);
you're responsible for escaping the interpolated values for JSON. Not
done here, and therefore works only for sufficiently nice @uri. For
instance, if @uri contained a single "'", qobject_from_vjsonf_nofail()
would abort. A sufficiently nasty @uri could even inject unwanted
members into the arguments object.
Leaving interpolation into JSON to qmp() is more robust:
rsp = qmp("{ 'execute': 'migrate', 'arguments': { 'uri': %s } }", uri);
It's also more concise.
Clean up the simple cases where we interpolate exactly a JSON value.
Bonus: gets rid of non-literal format strings. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-13-armbru@redhat.com>
The qmp_FOO() take a printf-like format string. In a few places, we
assign a string literal to a variable and pass that instead of simply
passing the literal. Clean that up.
Bonus: gets rid of non-literal format strings. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-12-armbru@redhat.com>
When visitor_input_test_init_internal()'s argument @ap is null, then
@json_string is interpreted literally, else it gets %-escapes
interpolated. This is awkward.
One caller always passes null @ap, and the others never do. Lift the
building of the QObject into the callers, where it can be done without
such ambiguity.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-10-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-9-armbru@redhat.com>
Commit ab45015a96 "qobject: Let qobject_from_jsonf() fail instead of
abort" fails to accomplish its stated aim: the function can still
abort due to its use of &error_abort.
Its rationale for letting it fail is that all remaining users cope
fine with failure. Well, they're just fine with aborting, too; it's
what they do on failure.
Simply reverting the broken commit would bring back the unfortunate
asymmetry between qobject_from_jsonf() and qobject_from_jsonv(): one
aborts, the other returns null. So also rename it to
qobject_from_jsonf_nofail().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-7-armbru@redhat.com>
We have two flavors of vararg usage in qtest: qtest_hmp() etc. work
like sprintf(), and qtest_qmp() etc. work like qobject_from_jsonf().
Spell that out in the comments.
Also add GCC_FMT_ATTR() to qtest_hmp() etc. so that the compiler can
flag incorrect use.
We have some cleanup work to do before we can do the same for
qtest_qmp() etc. This would get us the same better-than-nothing
checking we already have for qobject_from_jsonf(): common incorrect
uses of supported conversion specifications will be flagged
(e.g. passing a double for %d), but use of unsupported ones won't.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, comment wording tweaked, commit message rewritten]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180806065344.7103-6-armbru@redhat.com>
qtest_qmp_discard_response(...) is shorthand for
qobject_unref(qtest_qmp(...)), except it's not actually shorter.
Moreover, the presence of these functions encourage sloppy testing.
Remove them from libqtest. Add them as macros to the tests that use
them, with a TODO comment asking for cleanup.
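The per-test macro can be as small as this (sketch; tests may pick a
different name):

    /* TODO: clean up calling code, then drop this */
    #define qmp_discard_response(...) qobject_unref(qmp(__VA_ARGS__))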
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-5-armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
qtest_init() still uses the qtest_qmp_discard_response(s, "") hack to
receive the greeting, even though we have qtest_qmp_receive() since
commit 66e0c7b187. Put it to use.
Bonus: gets rid of an empty format string. A step towards
compile-time format string checking without triggering
-Wformat-zero-length.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-4-armbru@redhat.com>
qtest_qmp_device_del() still uses the qmp("") hack to receive a
message, even though we have qmp_receive() since commit 66e0c7b187.
Put it to use.
Bonus: gets rid of empty format strings. A step towards compile-time
format string checking without triggering -Wformat-zero-length.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-3-armbru@redhat.com>
The functions to receive messages are called qtest_qmp_receive() and
qmp_receive(), qmp_fd_receive(). The ones to send messages are called
qtest_async_qmp(), qtest_async_qmpv(), qmp_async(), qmp_fd_send(),
qmp_fd_sendv(). Inconsistent. Rename the *_async* ones to
qmp_send(), qtest_qmp_send(), qtest_qmp_vsend(). Rename
qmp_fd_sendv() to qmp_fd_vsend().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-2-armbru@redhat.com>
blockdev-mirror with the same node for source and target segfaults
today: A node is in its own backing chain, so mirror_start_job() decides
that this is an active commit. When adding the intermediate nodes with
block_job_add_bdrv(), it starts the iteration through the subchain with
the backing file of source, though, so it never reaches target and
instead runs into NULL at the base.
While we could fix that by starting with source itself, there is no
point in allowing mirroring a node into itself and I wouldn't be
surprised if this caused more problems later.
So just check for this scenario and error out.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This reinstates commit b008326744,
which was temporarily reverted for the 3.0 release so that libvirt gets
some extra time to update their command lines.
The -drive option serial was deprecated in QEMU 2.10. It's time to
remove it.
Tests need to be updated to set the serial number with -global instead
of using the -drive option.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This reinstates commit a7aff6dd10,
which was temporarily reverted for the 3.0 release so that libvirt gets
some extra time to update their command lines.
The -drive options cyls, heads, secs and trans were deprecated in
QEMU 2.10. It's time to remove them.
hd-geo-test tested both the old version with geometry options in -drive
and the new one with -device. Therefore the code using -drive doesn't
have to be replaced there, we just need to remove the -drive test cases.
This in turn allows some simplification of the code.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
The previous patch fixes a problem in which draining a block device
with more than one throttled request can make it wait first for the
completion of requests in other members of the same group.
This patch updates test_remove_group_member() in iotest 093 to
reproduce that scenario. This updated test would hang QEMU without the
fix from the previous patch.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A throttle group can have several members, and each one of them can
have several pending requests in the queue.
The requests are processed in a round-robin fashion, so the algorithm
decides the drive that is going to run the next request and sets a
timer in it. Once the timer fires and the throttled request is run
then the next drive from the group is selected and a new timer is set.
If the user tried to remove a drive from a group and that drive had a
timer set then the code was not taking care of setting up a new timer
in one of the remaining members of the group, freezing their I/O.
This problem was fixed in 6fccbb475b,
and this patch adds a new test case that reproduces this exact
scenario.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fix the following issues:
common.py:873:13: E129 visually indented line with same indent as next logical line
common.py:1766:5: E741 ambiguous variable name 'l'
common.py:1784:1: E305 expected 2 blank lines after class or function definition, found 1
common.py:1833:1: E305 expected 2 blank lines after class or function definition, found 1
common.py:1843:1: E305 expected 2 blank lines after class or function definition, found 1
visit.py:181:18: E127 continuation line over-indented for visual indent
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180621083551.775-1-armbru@redhat.com>
[Fixup squashed in:]
Message-ID: <871sd0nzw9.fsf@dusky.pond.sub.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Presumably 0.15 was the version in which it was first introduced, but
QMP keeps evolving. There is no point in having that version
as a test prefix; 'qmp' makes more sense here.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180326150916.9602-12-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Use make's --output-sync option when running tests inside VMs,
so that if we're building with parallelization the output doesn't
get scrambled.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-6-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Currently we run the guests in a VM which is given only 2G of RAM.
Since the guests are configured without any swap space, builds
can fail because the system runs out of memory and kills the
compiler, especially if the job count is set for a lot of
parallelism. Bump the setting up from 2G to 4G to give us some
more headroom.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-5-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Invoking 'make vm-build-freebsd' and friends with V=1 should
propagate that verbosity setting down into the build run
inside the VM. Make sure we do that. This brings it into
line with how the container tests handle V=1.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-4-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Our test suite works for parallel execution too, and this can
noticeably speed up a test run; pass the 'jobs' setting to
it as well as to the build proper.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-3-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
The images are big. Add a rule to clean up easily.
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180716020008.31468-1-famz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
This one does docker testing in the VM. It is intended to replace the
native docker testing on patchew testers.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180712012829.20231-5-famz@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
In VM-based tests, the source archive is created on the host, so we
don't have to run archive-source.sh again inside the VM; doing so only
complicates the Makefile and scripts.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180712012829.20231-4-famz@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Not using a snapshot has the benefit of automatically persisting useful
test harness state, such as docker images and the ccache database.
Although we lose some cleanliness, this should be useful for patchew.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180712012829.20231-2-famz@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Similar to 79f24568e5, this fixes the following warnings:
CHK version_gen.h
LEX convert-dtsv0-lexer.lex.c
make[1]: flex: Command not found
BISON dtc-parser.tab.c
make[1]: bison: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180628153535.1411-5-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
If KVM is not available, then use the 'max' cpu.
This fixes:
ERROR:root:Log:
ERROR:root:qemu-system-x86_64: CPU model 'host' requires KVM
Failed to prepare guest environment
error: [Errno 104] Connection reset by peer
source/qemu/tests/vm/Makefile.include:25: recipe for target 'tests/vm/ubuntu.i386.img' failed
make: *** [tests/vm/ubuntu.i386.img] Error 2
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180628153535.1411-4-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
QEMU now adds a new check for memory-less NUMA nodes in build_srat().
This affects the ACPI test, so update the ACPI table test blobs.
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This adds a test to make sure we fail properly for a 0-length mmap.
There are most likely other failure conditions we should also check.
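A minimal sketch of the kind of check being added (illustrative only,
not the test's actual code):

    #include <assert.h>
    #include <errno.h>
    #include <sys/mman.h>

    int main(void)
    {
        void *p = mmap(NULL, 0, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(p == MAP_FAILED && errno == EINVAL); /* 0-length mmap must fail */
        return 0;
    }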
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: umarcor <1783362@bugs.launchpad.net>
Message-Id: <20180730134321.19898-3-alex.bennee@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Make sure that query-blockstats returns information for every
BlockBackend that is named or attached to a device model (or both).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On my system (Fedora 28), this script reports a 'failed to get
"consistent read" lock' error. Following docs/devel/testing.rst, it's
better to add locking=off here.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qstring_from_substr() takes the index of the substring's first and
last character. qstring_from_substr(s, 0, SIZE_MAX) denotes an empty
substring. Awkward.
Shift the end index one to the right. This simplifies both
qstring_from_substr() and its callers.
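For illustration, with a hypothetical caller (not taken from the
patch), the calling convention changes like this:

    QString *qs;

    /* Before: both indices inclusive, SIZE_MAX needed for an empty substring */
    qs = qstring_from_substr("hello", 1, 3);         /* "ell" */
    qs = qstring_from_substr("hello", 0, SIZE_MAX);  /* ""    */

    /* After: the end index is exclusive, which reads naturally */
    qs = qstring_from_substr("hello", 1, 4);         /* "ell" */
    qs = qstring_from_substr("hello", 0, 0);         /* ""    */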
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180727062204.10401-3-armbru@redhat.com>
When gnutls negotiates TLS 1.3 instead of 1.2, the order of messages
sent by the handshake changes. This exposed a logic bug in the test
suite which caused us to wait for the server to see handshake
completion, but not wait for the client to see completion. The result
was the client didn't receive the certificate for verification and the
test failed.
This is exposed in Fedora 29 rawhide which has just enabled TLS 1.3 in
its GNUTLS builds.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Most of the TLS related tests pass in an "Error" object to methods
that are expected to fail, but then ignore any error that is set and
instead assert on a return value. This means that when an
error is unexpectedly raised, no information about it is printed out,
making failures hard to diagnose. Changing these tests to pass in
&error_abort will make unexpected failures print messages to stderr.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The test-vmstate test is a bit chatty because it triggers various
expected failure scenarios and the code in question uses error_report
instead of accepting 'Error **errp' parameters. To silence this test the
stubs for error_vprintf() were changed to send errors via
g_test_message() instead of stderr:
commit 28017e010d
Author: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon Oct 24 18:31:03 2016 +0200
tests: send error_report to test log
Implement error_vprintf to send the output of error_report to
the test log. This silences test-vmstate.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477326663-67817-3-git-send-email-pbonzini@redhat.com>
Unfortunately this change has global impact across the entire test suite
and means that when tests fail for unexpected reasons, the message is
not displayed on stderr. eg when using &error_abort in a call the test
merely prints
Unexpected error in qcrypto_tls_session_check_certificate() at crypto/tlssession.c:280:
and the actual error message is hidden, making it impossible to diagnose
the failure. This is especially problematic in CI or build systems where
it isn't possible to easily pass the --debug-log flag to tests and
re-run with the test log visible.
This change makes the previous big hammer much more nuanced, providing a
flag in the stub error_vprintf() that can be used on a per-test basis to
silence the errors. Only test-vmstate silences errors initially.
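A hedged sketch of the idea (the flag name is made up here, not the
actual stub; needs <stdarg.h>, <stdbool.h>, <stdio.h>, <string.h> and
<glib.h>):

    bool qtest_silence_error_report;     /* set by a deliberately noisy test */

    int error_vprintf(const char *fmt, va_list ap)
    {
        if (qtest_silence_error_report) {
            char *msg = g_strdup_vprintf(fmt, ap);
            g_test_message("%s", msg);   /* route to the test log, not stderr */
            int ret = (int)strlen(msg);
            g_free(msg);
            return ret;
        }
        return vfprintf(stderr, fmt, ap);  /* unexpected errors stay visible */
    }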
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Calling qcrypto_init ensures that all relevant initialization is
done. In particular this honours the debugging settings and thread
settings.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The only place last_byte should change is when it reaches the edge.
Setting it every time might let last_byte contain invalid data when
memory corruption is detected, and then the check of the next byte will
be incorrect. For example, a single page corruption at address 0x14ad000
will also lead to a "fake" corruption at 0x14ae000:
Memory content inconsistency at 14ad000 first_byte = 44 last_byte = 44 current = ef hit_edge = 0
Memory content inconsistency at 14ae000 first_byte = 44 last_byte = ef current = 44 hit_edge = 0
After the patch, it'll only report the corrupted page:
Memory content inconsistency at 14ad000 first_byte = 44 last_byte = 44 current = ef hit_edge = 0
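Roughly, the corrected loop body looks like this (simplified sketch;
the byte-reading helper is hypothetical):

    uint8_t b = read_first_byte_of_page(address);
    if (b != last_byte) {
        if (((b + 1) % 256) == last_byte && !hit_edge) {
            /* The guest stopped incrementing right at this page;
             * this is expected to happen at most once. */
            hit_edge = true;
            last_byte = b;      /* update only here, not on every page */
        } else {
            fprintf(stderr, "Memory content inconsistency at %" PRIx64 "\n",
                    address);
        }
    }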
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180723123305.24792-4-peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The combination of being rather esoteric and needing to support mmap @
0 means this only ever worked under translation. It has now regressed
even further and is no longer useful. Kill it.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Setting up binfmt_misc is outside of the scope of the docker.py script
but we can at least validate it with any given executable so we have a
more useful error message than the sed line of debootstrap failing
cryptically.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Richard Henderson <richard.henderson@linaro.org>
We do a minimum version check for debootstrap, but if the distro has
added its own minor version tick the check would fail and fall back to
the SCM version. This is sub-optimal, as the latest/greatest version
may be broken at any one particular time. We fix that with a little
sed magic on the version string before passing it to our ugly shell
versioning check.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This is just a note that later versions of debootstrap don't
technically need this hack.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
When a check fails we currently just report why we failed. This is not
totally helpful to people who want to bootstrap a new image. Report a
hint as to why it failed.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Suggested-by: Fam Zheng <famz@redhat.com>
The addition of QEMU_TARGET was intended to ensure we fall back to
checking for the existence of an image if the build system was not
currently configured to build it. However this breaks the direct use
of the rule for building custom binfmt_misc images. We already check
for EXECUTABLE, so let's just use that as a proxy for deciding whether
we are only going to check that the image exists.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This allows us to run a particular test on all docker images. For
example:
make docker-test-unit
Will run the unit tests on every supported image. At the same time
rename docker-test to docker-all-tests to be clearer.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This test doesn't even build QEMU, it just builds and runs all the
unit tests. Intended to make checking unit tests on all docker images
easier.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Rename DOCKER_INTERMEDIATE_IMAGES to DOCKER_PARTIAL_IMAGES and add the
incomplete cross compiler images that can build tests but can't build
QEMU itself. We also add debian, debian-bootstrap and the tricode
images to the list.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Not all our images are able to run the tests. Rather than use features
we can just check for the existence and runnability of gtester. If the
image has been set up for binfmt_misc it will be able to run anyway.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Not all docker images can run the check step. Let's move everything
into a common helper so we don't need to replicate checks in the
future.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This allows some tests that just want to configure QEMU's source tree
to do so.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This is called directly from the Makefile while determining
dependencies, and it is possible the user configured in one window
but does not have credentials in the other. Let's catch the exceptions
and deal with them quietly.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This image isn't going to build anything significant as it is just
intended for building test cases. In case it does end up getting
inadvertently included in a build, let's aim for the minimal possible
product.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We need both git and a working compiler to build the tools. Although
the qemu:debian9 image also has a bunch of extra dependencies it would
be fairly unusual for a user not to already have this layer available
for one of our many other docker images, so let's not complicate things.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The .gitignore was being a little over-enthusiastic about hiding files.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
223 tests persistent dirty bitmaps which are not supported in
compat=0.10, so that option is unsupported for this test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The test directory should be filtered before the image format, otherwise
the test will fail if the image format is part of the test directory,
like so:
[...]
-can't open: Could not open 'TEST_DIR/t.IMGFMT': Is a directory
+can't open: Could not open '/tmp/test-IMGFMT/t.IMGFMT': Is a directory
[...]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This test doesn't actually care about the format anyway, it just
supports "all formats" as a convenience. LUKS however does not use a
simple image filename which confuses this iotest.
We can simply skip the test for formats that use IMGOPTSSYNTAX for
their filenames without missing much coverage.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The test case uses block devices with driver=file, which causes the test
to fail after commit 230ff73904 added a deprecation warning for this.
Fix the test case to use driver=host_device and update the reference
output accordingly.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
@cur_mon is null unless the main thread is running monitor code, either
HMP code within monitor_read(), or QMP code within
monitor_qmp_dispatch().
Use of @cur_mon outside the main thread is therefore unsafe.
Most of its uses are in monitor command handlers. These run in the main
thread.
However, there are also uses hiding elsewhere, such as in
error_vprintf(), and thus error_report(), making these functions unsafe
outside the main thread. No such unsafe uses are known at this time.
Regardless, this is an unnecessary trap. It's an ancient trap, though.
More recently, commit cf869d5317 "qmp: support out-of-band (oob)
execution" spiced things up: the monitor I/O thread assigns to @cur_mon
when executing commands out-of-band. Having two threads save, set and
restore @cur_mon without synchronization is definitely unsafe. We can
end up with @cur_mon null while the main thread runs monitor code, or
non-null while it runs non-monitor code.
We could fix this by making the I/O thread not mess with @cur_mon, but
that would leave the trap armed and ready.
Instead, make @cur_mon thread-local. It's now reliably null unless the
thread is running monitor code.
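In practice the change boils down to giving the variable thread-local
storage, roughly (sketch; QEMU uses GCC/Clang's __thread keyword, and
C11 _Thread_local would work the same way):

    __thread Monitor *cur_mon;      /* was: Monitor *cur_mon; */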
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[peterx: update subject and commit message written by Markus]
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180720033451.32710-1-peterx@redhat.com>
server->bus in _test_server_free() could be NULL, since TestServer
*dest in test_migrate() was not properly initialized like TestServer *s.
Added init_virtio_dev(dest) and uninit_virtio_dev(dest), so the fields
are properly set and when test_server_free(dest); is called, they can
be correctly freed.
The reason for that is init_virtio_dev() calls qpci_init_pc(), that
creates a QPCIBusPC * (returned as QPCIBus *), while test_server_free()
calls qpci_free_pc(), that frees the QPCIBus *. Not calling
init_virtio_dev() would leave the QPCIBus * of TestServer unset.
The problem came out once I modified pci-pc.c and pci-pc.h, changing
QPCIBusPC by adding another field before QPCIBus bus. Re-running the
tests showed vhost-user-test failing.
Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
Message-Id: <1530022733-29581-1-git-send-email-esposem@usi.ch>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Committing to the current --preconfig / exit-preconfig interface
before it has seen any use is premature. Mark both as experimental,
the former in documentation, the latter by renaming it to
x-exit-preconfig.
See the previous commit for more detailed rationale.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180705091402.26244-3-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
[Straightforward conflict with commit 514337c142 resolved]
We currently don't enforce that the sparse segments we detect during convert are
aligned. This leads to unnecessary and costly read-modify-write cycles either
internally in QEMU or in the background on the storage device, as nearly all
modern filesystems or hardware have a 4k alignment internally.
This patch modifies is_allocated_sectors so that its *pnum result will always
end at an alignment boundary. This way all requests will end at an alignment
boundary. The start of all requests will also be aligned as long as the results
of get_block_status do not lead to an unaligned offset.
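As a rough sketch of the idea (not the literal patch), clamping the
chunk reported by is_allocated_sectors() so it ends on an alignment
boundary could look like this, using QEMU's QEMU_ALIGN_DOWN macro:

    int64_t end = sector_num + *pnum;          /* first sector after the chunk */
    if (end % alignment && *pnum > alignment) {
        end = QEMU_ALIGN_DOWN(end, alignment); /* shrink back to the boundary */
        *pnum = end - sector_num;
    }
    /* chunks shorter than one alignment unit are left as they are */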
Converting an example image [1] to a raw device with a 4k sector size incurs
about 4600 4k read requests (RMW cycles) while performing a total of about
15000 write requests. With this patch the additional 4600 read requests are
eliminated while the total number of write requests stays constant.
[1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Test that we're rejecting what we ought to for the file,
host_device and host_cdrom drivers. Test that we're
seeing the deprecation message for block and chardevs
on the file driver.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
197 is one example where _make_test_img is used twice without stopping
the NBD server in between. An error will occur like this:
@@ -26,9 +26,13 @@
=== Partial final cluster ===
+qemu-img: TEST_DIR/t.IMGFMT: Failed to get "resize" lock
+Is another process using the image?
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+Failed to find an available port: Address already in use
read 1024/1024 bytes at offset 0
Patch _make_test_img to stop the old qemu-nbd before starting a new one,
which fixes this problem, and similarly 215.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This step was left behind by mistake. As suggested by the echoed text,
the intention was to test two devices with the same image, with
different options. The behavior should be the same as two QEMU
processes. Complete it.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We dump some messages when a network failure happens. We should avoid
dumping them when running the tests:
$ ./tests/migration-test -p /x86_64/migration/postcopy/recovery
/x86_64/migration/postcopy/recovery: qemu-system-x86_64: check_section_footer: Read section footer failed: -5
qemu-system-x86_64: Detected IO failure for postcopy. Migration paused.
qemu-system-x86_64: Detected IO failure for postcopy. Migration paused.
OK
After the patch:
$ ./tests/migration-test -p /x86_64/migration/postcopy/recovery
/x86_64/migration/postcopy/recovery: OK
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-11-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Test the postcopy recovery procedure by emulating a network failure
using migrate-pause command.
Tested-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-10-peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
It's generalized from wait_for_migration_complete() to allow us to wait
for any migration status besides failure.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-9-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Introduce helpers to query migration states and use them.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-8-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
For example, we can pass in '"resume": true' to resume a migration.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-7-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Separate the old postcopy UNIX socket test into three steps and provide
a helper for each step. With these helpers, we can do more complicated
tests like postcopy recovery, while keeping the code shared.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-6-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fix up merge with 2e295789 / Skip tests for ppc tcg
This reverts commit a7aff6dd10.
Hold off removing this for one more QEMU release (current libvirt
release still uses it.)
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This reverts commit b008326744.
Hold off removing this for one more QEMU release (current libvirt
release still uses it.)
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
LUKS needs special parameters to operate the image. Since this test is
focusing on image fleecing, skip that format.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If the virtual disk size isn't aligned to full clusters,
bdrv_co_do_copy_on_readv() may get pnum == 0 before having the full
cluster completed, which will let it run into an assertion failure:
qemu-io: block/io.c:1203: bdrv_co_do_copy_on_readv: Assertion `skip_bytes < pnum' failed.
Check for EOF, assert that we read at least as much as the read request
originally wanted to have (which is true at EOF because otherwise
bdrv_check_byte_request() would already have returned an error) and
return success early even though we couldn't copy the full cluster.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This new test verifies that VMDK backing file reads fail when the
backing file has a non-matching CID. This includes non-VMDK backing
files.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180702210721.4847-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The CMSDK timer behaviour is that an interrupt is triggered when the
counter counts down from 1 to 0; however one is not triggered if the
counter is manually set to 0 by a guest write to the counter register.
Currently ptimer can't handle this; add a policy option to allow
a ptimer user to request this behaviour.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20180703171044.9503-2-peter.maydell@linaro.org
PPC TCG seems to be failing migration tests quite regularly;
we believe this is due to TCG bugs in dirty bit updating. It's
not clear why PPC fails more, but let's skip for the moment.
$ ./tests/migration-test
/ppc64/migration/deprecated: OK
/ppc64/migration/bad_dest: Skipping test: kvm_hv not available OK
/ppc64/migration/postcopy/unix: Skipping test: kvm_hv not available OK
/ppc64/migration/precopy/unix: Skipping test: kvm_hv not available OK
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-id: 20180706143105.93472-1-dgilbert@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can't use cross compilers in the current Debian stable and Debian
sid is sketchy as hell. So for powerpc fall back to dog-fooding our
own linux-user to do the build.
As we can only build the base image with a suitably configured
source tree we fall back to checking for its existence when we can't
build it from scratch. However this does mean you don't have to keep
a static powerpc-linux-user in your active configuration just to
update the cross build image.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We might as well have a custom rule for this. For one thing the
dependencies are different. As the primary dependency for
docker-image-% could never be docker-image-debian-bootstrap we can
drop that test in the main rule as well.
Missing EXECUTABLE, DEB_ARCH and DEB_TYPE are treated as hard faults
now. We also error out if the EXECUTABLE file isn't there. We should
really do this with a dependency on any source rules but currently
subdir-FOO-linux-user isn't enough on a clean build.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
These will have been built with debootstrap, so we need to check
against the debian-bootstrap dockerfile. This does mean sticking to
debian-FOO-user as the naming convention for bootstrapped images.
The actual cross image is built on top.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We default to the buildd variant as most of our images are for
building. However, let's give the user the ability to specify "minbase"
if they want to create a simple base image for experimentation.
Allowing the tweaking of DEB_URL means we can also bootstrap other
Debian-based OSes. For example:
make docker-binfmt-image-debian-ubuntu-bionic-arm64 \
DEB_ARCH=arm64 DEB_TYPE=bionic \
DEB_VARIANT=minbase DEB_URL=http://ports.ubuntu.com/ \
EXECUTABLE=./aarch64-linux-user/qemu-aarch64
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This is best done with any child images that actually need it.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We can still build the DOCKER_INTERMEDIATE_IMAGES images,
but they won't appear in 'make test*@$IMAGE'.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Listing the same package twice is confusing.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Do not test the deprecated API versions (see cabd358407).
Debian MXE MinGW cross images are already using SDL2.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Since docker caches the different layers, updating the package
list does not invalidate the previous "apt-get update" layer,
and it is likely "apt-get install" hits an outdated repository.
See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#apt-get
This fixes:
$ make docker-image-ubuntu V=1
./tests/docker/docker.py build qemu:ubuntu tests/docker/dockerfiles/ubuntu.docker --add-current-user
Sending build context to Docker daemon 3.072kB
[...]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/m/mesa/libgles2-mesa_17.0.7-0ubuntu0.16.04.2_amd64.deb 404 Not Found
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/m/mesa/libgles2-mesa-dev_17.0.7-0ubuntu0.16.04.2_amd64.deb 404 Not Found
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
The command '/bin/sh -c apt-get -y install $PACKAGES' returned a non-zero code: 100
tests/docker/Makefile.include:40: recipe for target 'docker-image-ubuntu' failed
make: *** [docker-image-ubuntu] Error 1
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Useful for debugging if nothing else, as the gcovr on the Travis images
is a little old.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
I'm not entirely sure who's using this information and certainly in a
CI environment it just washes over as additional noise. Later patches
will provide new reporting options so a user who wants to analyse
individual tests will be able to use that to get the information.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit 208ecb3e1a. This was
causing problems by making DEF_TARGET_LIST pointless and having to
jump through hoops to build on mingw with a fully enabled config.
This includes a change to fix the per-guest TCG test probe which was
added after 208ecb3 and used TARGET_LIST.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- qcow2: Use worker threads for compression to improve performance of
'qemu-img convert -W' and compressed backup jobs
- blklogwrites: New filter driver to log write requests to an image in
the dm-log-writes format
- file-posix: Fix image locking during image creation
- crypto: Fix memory leak in error path
- Error out instead of silently truncating node names
# gpg: Signature made Thu 05 Jul 2018 11:24:33 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
file-posix: Unlock FD after creation
file-posix: Fix creation locking
block/blklogwrites: Add an option for the update interval of the log superblock
block/blklogwrites: Add an option for appending to an old log
block/blklogwrites: Change log_sector_size from int64_t to uint64_t
block/crypto: Fix memory leak in create error path
block: Don't silently truncate node names
block: Add blklogwrites
block: Move two block permission constants to the relevant enum
qcow2: add compress threads
qcow2: refactor data compression
qemu-img: allow compressed not-in-order writes
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the user passes a too long node name string, we silently truncate it
to fit into BlockDriverState.node_name, i.e. to 31 characters. Apart
from surprising the user when the node has a different name than
requested, this also bypasses the check for duplicate names, so that the
same name can be assigned to multiple nodes.
Fix this by just making too long node names an error.
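The fix boils down to rejecting the name instead of truncating it,
roughly (sketch, not the exact hunk):

    if (strlen(node_name) >= sizeof(bs->node_name)) {
        error_setg(errp, "Node name too long");
        return;
    }
    pstrcpy(bs->node_name, sizeof(bs->node_name), node_name);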
Reported-by: Peter Krempa <pkrempa@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
By using the more specific type, we get fewer downcasts. The
downcasts are safe, but not obviously so, at least not locally.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-24-armbru@redhat.com>
handle_qmp_command() reports certain errors right away. This is wrong
when OOB is enabled, because the errors can "jump the queue" then, as
the previous commit demonstrates.
To fix, we need to delay errors until dispatch. Do that for semantic
errors, mostly by reverting ill-advised parts of commit cf869d5317
"qmp: support out-of-band (oob) execution". Bonus: doesn't run
qmp_dispatch_check_obj() twice, once in handle_qmp_command(), and
again in do_qmp_dispatch(). That's also due to commit cf869d5317.
The next commit will fix queue jumping for syntax errors.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-18-armbru@redhat.com>
When OOB is enabled, out-of-band commands are executed right away,
everything else is queued. This lets out-of-band commands "jump the
queue".
However, certain errors are always reported right away, and therefore
can jump the queue even when the erroneous input does not request
out-of-band execution. These errors are pretty unlikely to occur in
production, but it's wrong all the same. Mark FIXME.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180703085358.13941-17-armbru@redhat.com>
Commit cf869d5317 "qmp: support out-of-band (oob) execution" added a
general mechanism for command-independent arguments just for an
out-of-band flag:
The "control" key is introduced to store this extra flag. "control"
field is used to store arguments that are shared by all the commands,
rather than command specific arguments. Let "run-oob" be the first.
However, it failed to reject unknown members of "control". For
instance, in QMP command
{"execute": "query-name", "id": 42, "control": {"crap": true}}
"crap" gets silently ignored.
Instead of fixing this, revert the general "control" mechanism
(because YAGNI), and do it the way I initially proposed, with key
"exec-oob". Simpler code, simpler interface.
An out-of-band command
{"execute": "migrate-pause", "id": 42, "control": {"run-oob": true}}
becomes
{"exec-oob": "migrate-pause", "id": 42}
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-13-armbru@redhat.com>
[Commit message typo fixed]
Commit cf869d5317 "qmp: support out-of-band (oob) execution"
accidentally made qemu-ga accept and ignore "control". Fix that.
Out-of-band execution in a monitor that doesn't support it now fails
with
{"error": {"class": "GenericError", "desc": "QMP input member 'control' is unexpected"}}
instead of
{"error": {"class": "GenericError", "desc": "Please enable out-of-band first for the session during capabilities negotiation"}}
The old description is suboptimal when out-of-band cannot be
enabled, or the command doesn't support out-of-band execution.
The new description is a bit unspecific, but it'll do.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-12-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-11-armbru@redhat.com>
Commit cf869d5317 "qmp: support out-of-band (oob) execution" changed
how we check "id":
Note that in the patch I exported qmp_dispatch_check_obj() to be
used to check the request earlier, and at the same time allowed
"id" field to be there since actually we always allow that.
The part after "and" is ill-advised: it makes qemu-ga accept and
ignore "id". Revert.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-10-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-9-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-7-armbru@redhat.com>
tests/qmp-test tests an out-of-band command overtaking a slow in-band
command. To do that, it needs:
1. An in-band command that *reliably* takes long enough to be
overtaken.
2. An out-of-band command to do the overtaking.
3. To avoid delays, a way to make the in-band command complete quickly
after it was overtaken.
To satisfy these needs, commit 469638f9cb provides the rather
peculiar oob-capable QMP command x-oob-test:
* With "lock": true, it waits for a global semaphore.
* With "lock": false, it signals the global semaphore.
To satisfy 1., the test runs x-oob-test in-band with "lock": true.
To satisfy 2. and 3., it runs x-oob-test out-of-band with "lock": false.
Note that waiting for a semaphore violates the rules for oob-capable
commands. Running x-oob-test with "lock": true hangs the monitor
until you run x-oob-test with "lock": false on another monitor (which
you might not have set up).
Having an externally visible QMP command that may hang the monitor is
not nice. Let's apply a little more ingenuity to the problem. Idea:
have an existing command block on reading a FIFO special file, unblock
it by opening the FIFO for writing.
For 1., use
{"execute": "blockdev-add", "id": ID1,
"arguments": {
"driver": "blkdebug", "node-name": ID1, "config": FIFO,
"image": { "driver": "null-co"}}}
where ID1 is an arbitrary string, and FIFO is the name of the FIFO.
For 2., use
{"execute": "migrate-pause", "id": ID2, "control": {"run-oob": true}}
where ID2 is a different arbitrary string. Since there's no migration
to pause, the command will fail, but that's fine; instant failure is
still a test of out-of-band responses overtaking in-band commands.
For 3., open FIFO for writing.
Drop QMP command x-oob-test.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-6-armbru@redhat.com>
[Error checking tweaked]
These commands did not get their tests in the original commits:
- guest-get-host-name
- guest-get-timezone
- guest-get-users
Trivial tests that mostly only call the commands were added.
Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
* replace QDECREF() with qobject_unref()
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The documentation is generated only once, and doesn't know C
pre-conditions. Add 'If:' sections for top-level entities.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180703155648.11933-13-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Wrap generated code with #if/#endif using an 'ifcontext' on
QAPIGenCSnippet objects.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180703155648.11933-10-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Line breaks tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Modify the test visitor to check correct passing of values.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180703155648.11933-5-marcandre.lureau@redhat.com>
[Accidental change to roms/seabios dropped]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Accept an 'if' key in top-level elements, given as a string or a list
of strings. The following patches will modify the test visitor to
check the value is correctly saved, and generate #if/#endif code (as a
single #if/endif line or a series for a list).
Example of 'if' key:
{ 'struct': 'TestIfStruct', 'data': { 'foo': 'int' },
'if': 'defined(TEST_IF_STRUCT)' }
The generated code is for now *unconditional*. Later patches generate
the conditionals.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180703155648.11933-2-marcandre.lureau@redhat.com>
[Commit message and Documentation improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Pre-Shared Keys (PSK) is a simpler mechanism for enabling TLS
connections than using certificates. It requires only a simple secret
key:
$ mkdir -m 0700 /tmp/keys
$ psktool -u rjones -p /tmp/keys/keys.psk
$ cat /tmp/keys/keys.psk
rjones:d543770c15ad93d76443fb56f501a31969235f47e999720ae8d2336f6a13fcbc
The key can be secretly shared between clients and servers. Clients
must specify the directory containing the "keys.psk" file and a
username (defaults to "qemu"). Servers must specify only the
directory.
Example NBD client:
$ qemu-img info \
--object tls-creds-psk,id=tls0,dir=/tmp/keys,username=rjones,endpoint=client \
--image-opts \
file.driver=nbd,file.host=localhost,file.port=10809,file.tls-creds=tls0,file.export=/
Example NBD server using qemu-nbd:
$ qemu-nbd -t -x / \
--object tls-creds-psk,id=tls0,endpoint=server,dir=/tmp/keys \
--tls-creds tls0 \
image.qcow2
Example NBD server using nbdkit:
$ nbdkit -n -e / -fv \
--tls=on --tls-psk=/tmp/keys/keys.psk \
file file=disk.img
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-07-02' into staging
nbd patches for 2018-07-02
Bug fixes and iotest exposure of fleecing via NBD (serving a
read-only point-in-time view via blockdev-backup sync:none,
as well as serving dirty bitmaps over NBD), including a new
x-dirty-bitmap parameter when opening NBD clients as the
counterpart to x-nbd-server-add-bitmap. Also a random fix
for iscsi block_status spotted by Coverity that missed other
miscellaneous trees.
- Eric Blake: nbd/server: Fix dirty bitmap logic regression
- Eric Blake: iscsi: Avoid potential for get_status overflow
- John Snow/Vladimir Sementsov-Ogievskiy: 0/2 block: formalize and test fleecing
- Eric Blake: 0/2 test NBD bitmap export
# gpg: Signature made Tue 03 Jul 2018 02:33:03 BST
# gpg: using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2018-07-02:
iotests: New test 223 for exporting dirty bitmap over NBD
nbd/client: Add x-dirty-bitmap to query bitmap from server
iotests: add 222 to test basic fleecing
blockdev: enable non-root nodes for backup source
iscsi: Avoid potential for get_status overflow
nbd/server: Fix dirty bitmap logic regression
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Affects documentation and a few error messages.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-2-armbru@redhat.com>
Although this test is NOT a full test of image fleecing (as it
intentionally uses just a single block device directly exported
over NBD, rather than trying to set up a blockdev-backup job with
multiple BDS involved), it DOES prove that qemu as a server is
able to properly expose a dirty bitmap over NBD.
When coupled with image fleecing, it is then possible for a
third-party client to do an incremental backup by using
qemu-img map with the x-dirty-bitmap option to learn which parts
of the file are dirty (perhaps confusingly, they are the portions
mapped as "data":false - which is part of the reason this is
still in the x- experimental namespace), along with another
normal client (perhaps 'qemu-nbd -c' to expose the server over
/dev/nbd0 and then just use normal I/O on that block device) to
read the dirty sections.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180702191458.28741-3-eblake@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20180702194630.9360-3-jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
It eases code review; the unit is explicit.
Patch generated using:
$ git grep -n '[<>][<>]= ?[1-5]0'
and modified manually.
Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180625124238.25339-45-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In my Out-Of-Band test, "check -qcow2 060" fail with this:
--- /home/peterx/git/qemu/tests/qemu-iotests/060.out
+++ /home/peterx/git/qemu/bin/tests/qemu-iotests/060.out.bad
@@ -427,8 +427,8 @@
QMP_VERSION
{"return": {}}
qcow2: Image is corrupt: L2 table offset 0x2a2a2a00 unaligned (L1
index: 0); further non-fatal corruption events will be suppressed
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_IMAGE_CORRUPTED", "data": {"device": "", "msg": "L2 table offset 0x2a2a2a0
0 unaligned (L1 index: 0)", "node-name": "drive", "fatal": false}}
read failed: Input/output error
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_IMAGE_CORRUPTED", "data": {"device": "", "msg": "L2 table offset 0x2a2a2a0
0 unaligned (L1 index: 0)", "node-name": "drive", "fatal": false}}
{"return": ""}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP},
"event": "SHUTDOWN", "data": {"guest": false}}
The order of the event and the in/out error line is swapped. I didn't
dig up the reason, but AFAIU what we want to verify is the event rather
than stderr. Let's drop the stderr line directly for this test.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180620073223.31964-5-peterx@redhat.com>
[Commit message touched up]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Merge remote-tracking branch 'remotes/berrange/tags/min-glib-pull-request' into staging
glib: update the min required version
This updates the minimum required glib version to 2.40
# gpg: Signature made Fri 29 Jun 2018 12:24:58 BST
# gpg: using RSA key BE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>"
# gpg: aka "Daniel P. Berrange <berrange@redhat.com>"
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/min-glib-pull-request:
glib: enforce the minimum required version and warn about old APIs
glib: bump min required glib library version to 2.40
util: remove redundant include of glib.h and add osdep.h
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Not updating src_offset will result in wrong data being written to dst
image.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds a test for a temporary write failure, which simulates the
situation after werror=stop/enospc has stopped the VM. We shouldn't
leave leaked clusters behind in such cases.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Commit abf754fe40 updated 026.out, but forgot to also update
026.out.nocache.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
There are two useful macros that can be defined before including
glib.h that are related to the min required glib version
- GLIB_VERSION_MIN_REQUIRED
When this is defined, if code uses an API that was deprecated
in this version, or older, a compiler warning will be emitted.
This alerts maintainers to update their code to whatever new
replacement API is now recommended best practice.
- GLIB_VERSION_MAX_ALLOWED
When this is defined, if code uses an API that was introduced
in a version that is newer than the declared version, a compiler
warning will be emitted. This alerts maintainers if new code
accidentally uses functionality that won't be available on some
supported platforms.
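A sketch of how this looks in practice, with both bounds pinned to the
minimum supported version (2.40 in our case), defined before glib.h is
included:

    #define GLIB_VERSION_MIN_REQUIRED GLIB_VERSION_2_40
    #define GLIB_VERSION_MAX_ALLOWED  GLIB_VERSION_2_40
    #include <glib.h>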
The GLIB_VERSION_MAX_ALLOWED constant makes it a bit harder to opt
in to using specific new APIs with a GLIB_CHECK_VERSION conditional.
To work around this, pragmas can be used to temporarily turn off the
-Wdeprecated-declarations compiler warning, while a static inline
compat function is implemented. This workaround is illustrated with the
implementation of the g_strv_contains method to satisfy the test suite.
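A rough sketch of that pattern (simplified; the _qemu suffix and the
exact layout are illustrative, not necessarily what the tree's
glib-compat header does):

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"

    static inline gboolean g_strv_contains_qemu(const gchar *const *strv,
                                                const gchar *str)
    {
    #if GLIB_CHECK_VERSION(2, 44, 0)
        return g_strv_contains(strv, str);   /* new API, warning suppressed */
    #else
        for (; *strv; strv++) {
            if (g_str_equal(str, *strv)) {
                return TRUE;
            }
        }
        return FALSE;
    #endif
    }
    #define g_strv_contains g_strv_contains_qemu

    #pragma GCC diagnostic pop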
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Per supported platforms doc[1], the various min glib on relevant distros is:
RHEL-7: 2.50.3
Debian (Stretch): 2.50.3
Debian (Jessie): 2.42.1
OpenBSD (Ports): 2.54.3
FreeBSD (Ports): 2.50.3
OpenSUSE Leap 15: 2.54.3
SLE12-SP2: 2.48.2
Ubuntu (Xenial): 2.48.0
macOS (Homebrew): 2.56.0
This suggests that a minimum glib of 2.42 is a reasonable target.
The GCC compile farm, however, uses Ubuntu 14.04 (Trusty), which only
has glib 2.40.0, and that environment is needed for testing during
merge. Thus an exception is made to the documented platform support
policy so that all three current LTS releases remain supported.
Docker jobs that no longer satisfy this new minimum version are removed.
[1] https://qemu.weilnetz.de/doc/qemu-doc.html#Supported-build-platforms
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Peter reported that the boot-serial tester sometimes runs into timeouts
with SPARC guests. It's currently completely unclear whether this is due
to too much load on the host machine (so that the guest really just ran
too slow), or whether there is something wrong with the guest's firmware
boot. For further debugging, we need the guest's serial output in
case of errors, so instead of unlinking the file immediately, it is
now only removed on success. In case of error, the name of the file
with the serial output is printed via g_error() (which also calls
abort() internally to mark the test as failed), as sketched below.
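A rough sketch of the resulting pattern, with a hypothetical helper
name; the real tests/boot-serial-test.c code differs in detail:

    #include <glib.h>
    #include <stdbool.h>
    #include <unistd.h>

    /* Hypothetical helper, not the literal test code: the temporary
     * file holding the guest's serial output is only removed when the
     * expected string was found; otherwise g_error() reports its path
     * and aborts, marking the test as failed. */
    static void finish_serial_check(const char *tmp_path, bool found_expected)
    {
        if (!found_expected) {
            g_error("Failed to find expected output, serial log left in '%s'",
                    tmp_path);
        }
        unlink(tmp_path);   /* success: the log is no longer needed */
    }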
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1526977831-31129-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This allows us to use atomic-add-bench as a microbenchmark
for evaluating qemu_mutex_lock's performance.
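As an illustration of the kind of loop such a microbenchmark times
(plain pthreads here rather than QEMU's qemu_mutex API; this is not
the atomic_add-bench source):

    #include <inttypes.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { NTHREADS = 4, ITERATIONS = 1000000 };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t counter;

    /* Each thread increments a shared counter under the lock; the
     * wall-clock time of the run approximates lock/unlock cost. */
    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];

        for (int i = 0; i < NTHREADS; i++) {
            pthread_create(&threads[i], NULL, worker, NULL);
        }
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(threads[i], NULL);
        }
        printf("counter = %" PRIu64 "\n", counter);
        return 0;
    }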
Signed-off-by: Emilio G. Cota <cota@braap.org>
[cherry picked from https://github.com/cota/qemu/commit/f04f34df]
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180425025459.5258-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The "I" bit in PIO Setup and D2H FISes is exclusively a device concept
and the irqstatus register in the controller does not matter. The SATA
spec says when it should be one; for D2H FISes in practice it is always
set, while the PIO Setup FIS has several subcases that are documented in
the patch.
Also, the PIO Setup FIS interrupt is actually generated _after_ data
has been received.
Someone should probably spend some time reading the SATA specification and
figuring out the more obscure fields in the PIO Setup FIS, but this is enough
to fix SeaBIOS booting from ATAPI CD-ROMs over an AHCI controller.
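To make the layout concrete, here is a toy sketch (not QEMU's
hw/ide/ahci.c; the struct and function names are made up) of where
the "I" bit sits in a PIO Setup FIS and of the ordering described
above:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Toy model only: a 5-dword PIO Setup FIS buffer and an IRQ flag. */
    struct toy_ahci_port {
        uint8_t pio_fis[20];
        bool irq_pending;
    };

    static void toy_pio_setup_complete(struct toy_ahci_port *p)
    {
        /* ...the data for this PIO phase has already been transferred... */
        memset(p->pio_fis, 0, sizeof(p->pio_fis));
        p->pio_fis[0] = 0x5f;        /* FIS type: PIO Setup */
        p->pio_fis[1] |= 1 << 6;     /* "I" bit: device requests an interrupt */
        p->irq_pending = true;       /* interrupt raised after the data phase */
    }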
Fixes: 956556e131
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180622165159.19863-1-pbonzini@redhat.com
[Minor edit to avoid ATAPI comment ambiguity. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
This commit removes the PYTHON_UTF8 workaround. The problem with setting
LC_ALL= LANG=C LC_CTYPE=en_US.UTF-8
is that the en_US.UTF-8 locale might not be available. In this case,
setting the above locales results in build errors even though another
UTF-8 locale was originally set [1]. The only stable way of fixing the
encoding problem is by specifying the encoding in Python, like the
previous commit does.
[1] https://bugs.gentoo.org/657766
Signed-off-by: Arfrever Frehtes Taifersar Arahesis <arfrever.fta@gmail.com>
Signed-off-by: Matthias Maier <tamiko@43-1.org>
Message-Id: <20180618175958.29073-3-armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
It often happens that just a few discriminator values imply extra data in
a flat union. The existing checks did not make it possible to leave
other values uncovered. Such cases had to be worked around by either
stating a dummy
(empty) type or introducing another (subset) discriminator enumeration.
Both options create redundant entities in qapi files for little profit.
With this patch it is not necessary anymore to add designated union
fields for every possible value of a discriminator enumeration.
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Message-Id: <1529311206-76847-2-git-send-email-anton.nefedov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This new test verifies that qdict_flatten() does not modify a shallow
clone of the given QDict.
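Roughly, the new check looks like this (a sketch against QEMU's QDict
API, meant to live in the QEMU tree; header paths and the exact
assertions in check-block-qdict.c may differ):

    #include "qemu/osdep.h"
    #include "qapi/qmp/qdict.h"
    #include "block/qdict.h"

    /* Sketch: flattening the original must not rewrite the nested
     * dict that the shallow clone still points to. */
    static void check_flatten_leaves_clone_alone(void)
    {
        QDict *nested = qdict_new();
        QDict *orig = qdict_new();
        QDict *clone;

        qdict_put_int(nested, "b", 1);
        qdict_put(orig, "a", nested);

        clone = qdict_clone_shallow(orig);
        qdict_flatten(orig);                  /* orig becomes { "a.b": 1 } */

        g_assert(qdict_haskey(clone, "a"));   /* clone is still nested */
        g_assert(!qdict_haskey(clone, "a.b"));

        qobject_unref(orig);
        qobject_unref(clone);
    }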
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20180611205203.2624-8-mreitz@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This restores the ability to run TCG smoke tests by using our docker
infrastructure to support cross building simple tests. It represents
the first step to making better cross-architecture testing available
straight from the source tree ;-)
v2
- fix quoting of target_compiler
- make docker.py Py3 safe
- tweak .travis.yml recipe
- don't probe docker when HAVE_USER_DOCKER not set
Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-testing-revivial-210618-2' into staging
Add check-tcg machinery
This restores the ability to run TCG smoke tests by using our docker
infrastructure to support cross building simple tests. It represents
the first step to making better cross-architecture testing available
straight from the source tree ;-)
v2
- fix quoting of target_compiler
- make docker.py Py3 safe
- tweak .travis.yml recipe
- don't probe docker when HAVE_USER_DOCKER not set
# gpg: Signature made Thu 21 Jun 2018 07:23:45 BST
# gpg: using RSA key FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-tcg-testing-revivial-210618-2: (57 commits)
.travis.yml: add check-tcg test
tests/docker/Makefile.include: only force SID to NOCACHE if old
docker: docker.py adding age check command
tests/Makefile: call sub-makes with SKIP_DOCKER_BUILD=1
docker: docker.py add check sub-command
docker: docker.py don't conflate checksums for extra_files
docker: docker.py use "version" to probe usage
tests: add top-level make dependency for docker builds
tests/tcg/i386: extend timeout for runcom test
tests/tcg: override runners for broken tests
tests/tcg: add run, diff, and skip helper macros
tests/Makefile.include: add [build|clean|check]-tcg targets
Makefile.target: add (clean-/build-)guest-tests targets
tests/tcg/Makefile: update to be called from Makefile.target
tests/tcg: enable building for PowerPC
docker: move debian-powerpc-cross to sid based build
tests/tcg: enable building for RISCV64
tests/tcg: enable building for mips64
tests/tcg: enable building for sparc64
tests/tcg: enable building for sh4
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add experimental x-nbd-server-add-bitmap to expose a disabled
bitmap over NBD, in preparation for a pull model incremental
backup scheme. Also fix a corner case protocol issue with
NBD_CMD_BLOCK_STATUS, and add new NBD_CMD_CACHE.
- Eric Blake: tests: Simplify .gitignore
- Eric Blake: nbd/server: Reject 0-length block status request
- Vladimir Sementsov-Ogievskiy: 0/6 NBD export bitmaps
- Vladimir Sementsov-Ogievskiy: nbd/server: introduce NBD_CMD_CACHE
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-06-20-v2' into staging
nbd patches for 2018-06-20
Add experimental x-nbd-server-add-bitmap to expose a disabled
bitmap over NBD, in preparation for a pull model incremental
backup scheme. Also fix a corner case protocol issue with
NBD_CMD_BLOCK_STATUS, and add new NBD_CMD_CACHE.
- Eric Blake: tests: Simplify .gitignore
- Eric Blake: nbd/server: Reject 0-length block status request
- Vladimir Sementsov-Ogievskiy: 0/6 NBD export bitmaps
- Vladimir Sementsov-Ogievskiy: nbd/server: introduce NBD_CMD_CACHE
# gpg: Signature made Thu 21 Jun 2018 15:53:55 BST
# gpg: using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2018-06-20-v2:
nbd/server: introduce NBD_CMD_CACHE
docs/interop: add nbd.txt
qapi: new qmp command nbd-server-add-bitmap
nbd/server: implement dirty bitmap export
nbd/server: add nbd_meta_empty_or_pattern helper
nbd/server: refactor NBDExportMetaContexts
nbd/server: fix trace
nbd/server: Reject 0-length block status request
tests: Simplify .gitignore
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 0bcc8e5b was yet another instance of 'git status' reporting
dirty files after an in-tree build, thanks to the new binary
tests/check-block-qdict.
Instead of piecemeal exemptions of each new binary as they are
added, let's use git's negative globbing feature to exempt ALL
files that have a 'test-' or 'check-' prefix, except for the ones
ending in '.c' or '.sh'. We still have a couple of generated
files that then need (re-)exclusion, but the overall list is a
LOT shorter, and less prone to needing future edits.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180619203918.65450-1-eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Now that we can check the age of a docker image, we can be a little
more intelligent about re-building Sid images and only force NOCACHE
if it is "old".
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This is useful for querying if an image is too old.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
As we now ensure all the images we are going to use are built by the
top-level makefile, let's not overcomplicate things by running the
full script again. We do still run the check script, just in case
someone deletes the docker image while we are running.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This command allows you to check if we need to re-build a docker
image. If the image isn't in the repository or the checksums don't
match then we return false and some text (for processing in
makefiles).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This just gets confusing, especially as the helper function doesn't
even take into account any extra files (or the executable). Currently
the actual check just ignores them and also passes the result through
_dockerfile_preprocess, so we fix that too.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The "images" command is a fairly heavyweight command to run as it
involves searching the whole docker file-system inventory. On a
machine with a lot of images this makes start-up fairly expensive.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
One problem with satisfying your docker dependencies in a sub-make is
that you might end up trying to satisfy the dependency multiple times.
This
is especially a problem with debian-sid based cross compilers and CI
setups. We solve this by doing a docker build pass at the top level
before any sub-makes are called.
We still need to satisfy dependencies in the Makefile.target call so
people can run tests from individual target directories. We introduce
a new Makefile.probe which gets called for each PROBE_TARGET and
allows us to build up the list. It does require including
config-target.mak multiple times, which shouldn't cause any issues as
it shouldn't define anything that clashes with config-host.mak.
However, we undefine a few key variables each time around.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The Travis hardware can be a little slow and the runcom test is fairly
heavy in calculating pi. Let's double the timeout so we don't trip up
during CI by mistake.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
To get a clean run of check-tcg these tests are currently skipped:
- hello-mips for mips
- linux-test for sparc
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
As we aren't using the default runners for all the test cases, it is
easy to miss things like timeouts. To help with this, we add some
helpers and use them so we only need to make core changes in one
place.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This will ensure all linux-user targets build their guest test
programs and ensure check-tcg will run the respective tests.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Now that all the build infrastructure is in place, we can build tests
for each guest that we support. That support mainly depends on having
cross compilers installed or a docker setup. To keep all the logic for
that together we put the rules in tests/tcg/Makefile.include and
include it from the main Makefile.target.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This make is now invoked from each individual target make with the
appropriate CC and EXTRA_CFLAGS set for each guest. It then includes
additional Makefile.targets from:
- tests/tcg/multiarch (always)
- tests/tcg/$(TARGET_BASE_ARCH) (if available)
- tests/tcg/$(TARGET_NAME)
The order is important, as later Makefiles may want to suppress
TESTS from their base arch profile. Each included Makefile.target is
responsible for adding TESTS as well as defining any special build
instructions for individual tests.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>