If the backend could not transmit a packet right away for some reason,
the packet is queued for asynchronous sending. The corresponding vq
element is tracked in the async_tx.elem field of the VirtIONetQueue,
for later freeing when the transmission is complete.
If a reset happens before completion, virtio_net_tx_complete() will push
async_tx.elem back to the guest anyway, and the vq's inuse counter ends
up at -1. The next call to virtqueue_pop() is then likely to fail with
"Virtqueue size exceeded".
This can be reproduced easily by starting a guest with a hubport backend
that is not connected to a functional network, e.g.,
-device virtio-net-pci,netdev=hub0 -netdev hubport,id=hub0,hubid=0
and no other -netdev hubport,hubid=0 on the command line.
The appropriate fix is to ensure that such an asynchronous transmission
cannot survive a device reset. So for all queues, we first try to send
the packet again, and then we purge it if the backend still could not
deliver it.
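As a rough sketch of that idea (not the exact patch; the helper name below
is made up, while qemu_flush_or_purge_queued_packets() and
qemu_get_subqueue() are the existing net-layer calls such a fix would lean
on):
static void virtio_net_reset_flush_async_tx(VirtIONet *n)
{
    int i;

    for (i = 0; i < n->max_queues; i++) {
        NetClientState *nc = qemu_get_subqueue(n->nic, i);

        if (nc->peer) {
            /* retry the pending send; purge=true drops the packet if the
             * backend still cannot take it, so the completion callback
             * runs before the virtqueues are reset */
            qemu_flush_or_purge_queued_packets(nc->peer, true);
        }
    }
}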
CC: qemu-stable@nongnu.org
Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
Buglink: https://github.com/open-power-host-os/qemu/issues/37
Signed-off-by: Greg Kurz <groug@kaod.org>
Tested-by: R. Nageswara Sastry <nasastry@in.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Instead of using "1.0" as the system version of SMBIOS, we should use
mc->name for the mach-virt machine type to be consistent with other
architectures.
With this patch, "dmidecode -t 1" (e.g., "-M virt-2.12,accel=kvm") will
show:
Handle 0x0100, DMI type 1, 27 bytes
System Information
Manufacturer: QEMU
Product Name: KVM Virtual Machine
Version: virt-2.12
Serial Number: Not Specified
...
instead of:
Handle 0x0100, DMI type 1, 27 bytes
System Information
Manufacturer: QEMU
Product Name: KVM Virtual Machine
Version: 1.0
Serial Number: Not Specified
...
For backward compatibility, we allow older machine types to keep "1.0"
as the default system version.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20180322212318.7182-1-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Linux does not detect a break from this emulated IMX serial port as a
magic sysrq, nor does it record the break in the port error counts.
The former is because the Linux driver uses the BRCD bit in the USR2
register to trigger the RS-232 break handler in the kernel, which is
where sysrq hooks in. The emulated UART was not setting this status
bit.
The latter is because the Linux driver expects, in addition to the BRK
bit, that the ERR bit is set when a break is read in the FIFO. A break
should also count as a frame error, so add that bit too.
Cc: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Message-id: 20180320013657.25038-1-tpiepho@impinj.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use a flag on the RAMBlock to state whether it has the
UFFDIO_ZEROPAGE capability, and use it when it's available.
This allows the use of postcopy on tmpfs as well as hugepage
backed files.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Utility to give the offset of a host pointer within a RAMBlock
(assuming we already know it's in that RAMBlock).
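A minimal sketch of what such a helper boils down to (the name and the
direct field access are illustrative; the real thing has to live where
RAMBlock's internals are visible):
/* caller guarantees that 'host' lies inside rb's mapping */
static ram_addr_t ramblock_offset_from_host(RAMBlock *rb, void *host)
{
    return (uint8_t *)host - (uint8_t *)rb->host;
}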
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Make qmp_pc_dimm_device_list() return a list of devices sorted by start
address so that it can be reused in places that need a sorted list*.
Reuse the existing pc_dimm_built_list() to get the sorted list.
While at it, hide the recursive callbacks from callers, so that:
qmp_pc_dimm_device_list(qdev_get_machine(), &list);
can be replaced with the simpler:
list = qmp_pc_dimm_device_list();
* a follow-up patch will use it in build_srat()
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au> for ppc part
Reviewed-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
All PCI devices are now QOM'ified.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Introduce latency histogram statistics for block devices.
For each accounted operation type, the latency region [0, +inf) is
divided into subregions by several points, and the number of
hits is then counted for each subregion.
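As a self-contained sketch of the accounting idea (not the code added by
this patch): N boundary points define N + 1 intervals, and each completed
request bumps the counter of the interval its latency falls into.
static void latency_histogram_account(uint64_t *bins, int nbins,
                                      const uint64_t *boundaries,
                                      uint64_t latency_ns)
{
    int i = 0;

    /* boundaries[] has nbins - 1 entries; bins[nbins - 1] is [last, +inf) */
    while (i < nbins - 1 && latency_ns >= boundaries[i]) {
        i++;
    }
    bins[i]++;
}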
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180309165212.97144-2-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Having "allow-oob":true for a command does not mean that this command
will always be run in out-of-band mode. The out-of-band quick path will
only be executed if we specify the extra "run-oob" flag when sending the
QMP request:
{ "execute": "command-that-allows-oob",
"arguments": { ... },
"control": { "run-oob": true } }
The "control" key is introduced to store this extra flag. "control"
field is used to store arguments that are shared by all the commands,
rather than command specific arguments. Let "run-oob" be the first.
Note that in the patch I exported qmp_dispatch_check_obj() to be used to
check the request earlier, and at the same time allowed "id" field to be
there since actually we always allow that.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-19-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase to qobject_to(), spelling fix]
Signed-off-by: Eric Blake <eblake@redhat.com>
Here "oob" stands for "Out-Of-Band". When "allow-oob" is set, it means
the command allows out-of-band execution.
The "oob" idea is proposed by Markus Armbruster in following thread:
https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg02057.html
This new "allow-oob" boolean will be exposed by "query-qmp-schema" as
well for command entries, so that QMP clients can know which commands
can be used in out-of-band calls. For example the command "migrate"
originally looks like:
{"name": "migrate", "ret-type": "17", "meta-type": "command",
"arg-type": "86"}
And it'll be changed into:
{"name": "migrate", "ret-type": "17", "allow-oob": false,
"meta-type": "command", "arg-type": "86"}
This patch only provides the QMP interface level changes. It does not
contain the real out-of-band execution implementation yet.
Suggested-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-18-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase on introspection done by qlit]
Signed-off-by: Eric Blake <eblake@redhat.com>
There are many places where the monitor initializes its globals:
- monitor_init_qmp_commands() at the very beginning
- single function to init monitor_lock
- in the first entry of monitor_init() using "is_first_init"
Unify them a bit.
monitor_lock is not used before monitor_init() (as confirmed by code
analysis and gdb watchpoints), so it is safe to delay what was a
constructor-time initialization of the mutex until the first call to
monitor_init().
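A sketch of the unified shape (illustrative; the exact set of calls
folded into it follows the patch):
static void monitor_init_globals(void)
{
    monitor_init_qmp_commands();
    qemu_mutex_init(&monitor_lock);
    /* any other one-time setup formerly spread across the three places */
}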
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-8-peterx@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
A quick way to fetch a string from a QObject when it's a QString.
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-4-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase to qobject_to() macro]
Signed-off-by: Eric Blake <eblake@redhat.com>
The only difference from qstring_get_str() is that it allows the qstring
to be NULL. If so, NULL is returned.
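In other words, the semantics are essentially:
const char *qstring_get_try_str(const QString *qstring)
{
    return qstring ? qstring_get_str(qstring) : NULL;
}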
CC: Eric Blake <eblake@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180309090006.10018-3-peterx@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
They are no longer needed now.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-5-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
This is a dynamic casting macro that, given a QObject type, returns an
object as that type or NULL if the object is of a different type (or
NULL itself).
The macro uses lower-case letters because:
1. There does not seem to be a hard rule on whether QEMU macros have to
be upper-cased,
2. The current situation in qapi/qmp is inconsistent (compare e.g.
QINCREF() vs. qdict_put()),
3. qobject_to() will evaluate its @obj parameter only once, thus it is
generally not important to the caller whether it is a macro or not,
4. I prefer it aesthetically.
The macro parameter order is chosen with typename first for
consistency with other QAPI macros like QAPI_CLONE(), as well as
for legibility (read it as "qobject to" type "applied to" obj).
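A typical (illustrative) use then looks like:
QDict *dict = qobject_to(QDict, obj); /* NULL if obj is NULL or not a QDict */

if (!dict) {
    /* wrong type: handle the error instead of crashing on a bad cast */
}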
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20180224154033.29559-3-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
[eblake: swap parameter order to list type first, avoid clang ubsan
warning on QOBJECT(NULL) and container_of(NULL,type,base)]
Signed-off-by: Eric Blake <eblake@redhat.com>
The bcm2837 is pretty similar to the bcm2836, but it does have
some differences. Notably, the MPIDR affinity aff1 values it
sets for the CPUs are 0x0, rather than the 0xf that the bcm2836
uses, and if this is wrong Linux will not boot.
Rather than trying to have one device with properties that
configure it differently for the two cases, create two
separate QOM devices for the two SoCs. We use the same approach
as hw/arm/aspeed_soc.c: share the code, driven by a data table
that can differ per SoC. For the moment the two types don't
actually have different behaviour.
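A sketch of the aspeed_soc.c-style data table (illustrative; for now both
rows would carry the same values, with per-SoC differences such as the
MPIDR Aff1 value added by later patches):
typedef struct BCM283XInfo {
    const char *name;
} BCM283XInfo;

static const BCM283XInfo bcm283x_socs[] = {
    { .name = TYPE_BCM2836 },
    { .name = TYPE_BCM2837 },
};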
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180313153458.26822-7-peter.maydell@linaro.org
Our BCM2836 type is really a generic one that can be any of
the bcm283x family. Rename it accordingly. We change only
the names which are visible via the header file to the
rest of the QEMU code, leaving private function names
in bcm2836.c as they are.
This is a preliminary to making bcm283x be an abstract
parent class to specific types for the bcm2836 and bcm2837.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180313153458.26822-6-peter.maydell@linaro.org
Add support for "TX complete"/TXDC interrupt generate by real HW since
it is needed to support guests other than Linux.
Based on the patch by Bill Paul as found here:
https://bugs.launchpad.net/qemu/+bug/1753314
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Cc: Bill Paul <wpaul@windriver.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Bill Paul <wpaul@windriver.com>
Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Message-id: 20180315191141.6789-2-andrew.smirnov@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The sabrelite machine model used by qemu-system-arm is based on the
Freescale/NXP i.MX6Q processor. This SoC has an on-board ethernet
controller which is supported in QEMU using the imx_fec.c module
(actually called imx.enet for this model).
The include/hw/arm/fsl-imx6.h file defines the interrupt vectors for the
imx.enet device like this:
#define FSL_IMX6_ENET_MAC_1588_IRQ 118
#define FSL_IMX6_ENET_MAC_IRQ 119
According to https://www.nxp.com/docs/en/reference-manual/IMX6DQRM.pdf,
page 225, in Table 3-1. ARM Cortex A9 domain interrupt summary,
interrupts are as follows.
150 ENET MAC 0 IRQ
151 ENET MAC 0 1588 Timer interrupt
where
150 - 32 == 118
151 - 32 == 119
In other words, the vector definitions in the fsl-imx6.h file are reversed.
Fixing the interrupts alone causes problems with older Linux kernels:
The Ethernet interface will fail to probe with Linux v4.9 and earlier.
Linux v4.1 and earlier will crash due to a bug in Ethernet driver probe
error handling. This is a Linux kernel problem, not a qemu problem:
the Linux kernel only worked by accident since it requested both interrupts.
For backward compatibility, generate the Ethernet interrupt on both interrupt
lines. This was shown to work from all Linux kernel releases starting with
v3.16.
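A sketch of the compatibility behaviour (the type and field names are
assumptions, not the exact patch): whenever the ENET interrupt state
changes, drive both lines together.
static void imx_enet_update_irqs(IMXFECState *s, int level)
{
    /* raise/lower the MAC IRQ and the 1588 timer IRQ in lockstep so
     * that guests wired for either vector numbering keep working */
    qemu_set_irq(s->irq[0], level);
    qemu_set_irq(s->irq[1], level);
}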
Link: https://bugs.launchpad.net/qemu/+bug/1753309
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 1520723090-22130-1-git-send-email-linux@roeck-us.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With all targets defining CPU_RESOLVING_TYPE, refactor
cpu_parse_cpu_model(type, cpu_model) to parse_cpu_model(cpu_model)
so that callers won't have to know the internal resolving CPU
type. Place it in exec.c so it can be called from both the
target-independent vl.c and *-user/main.c.
That allows us to stop abusing the CPU type from
MachineClass::default_cpu_type
as the resolver class in vl.c, which was a confusing part of
cpu_parse_cpu_model().
Also, with the new parse_cpu_model(), the last users of cpu_init()
in null-machine.c and the bsd/linux-user targets can be switched
to the cpu_create() API, and the cpu_init() API will be removed by
a follow-up patch.
With no users left, remove the MachineState::cpu_model field;
new code should use MachineState::cpu_type instead and
leave cpu_model parsing to the generic code in vl.c.
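A caller then no longer needs to know the resolving type; a generic
caller reduces to something like this sketch ("cpu_option" stands in for
the user's -cpu string):
const char *cpu_type = parse_cpu_model(cpu_option);
CPUState *cpu = cpu_create(cpu_type);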
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1518000027-274608-5-git-send-email-imammedo@redhat.com>
[ehabkost: Fix bsd-user build error]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
_Static_assert() allows us to specify messages, and that may come in
handy. Even without _Static_assert(), encouraging developers to put a
helpful message next to the QEMU_BUILD_BUG_* may make debugging easier
whenever it breaks.
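For instance (the struct name is made up, and the message-taking variant
is assumed to be spelled QEMU_BUILD_BUG_MSG(condition, message)):
QEMU_BUILD_BUG_MSG(sizeof(ExampleOnDiskHeader) != 64,
                   "ExampleOnDiskHeader must stay 64 bytes on disk");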
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20180224154033.29559-2-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Instantiate a QObject* from a literal QLitObject.
QLitObject only supports int64_t for now; uint64_t and double aren't
implemented.
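An illustrative use, with macro spellings as assumed here from the
existing QLit API:
static const QLitObject qlit = QLIT_QDICT(((QLitDictEntry[]) {
    { "name", QLIT_QSTR("example") },
    { "value", QLIT_QNUM(42) },
    { }
}));

/* inside some function: */
QObject *obj = qobject_from_qlit(&qlit);
/* ... use obj ... */
qobject_decref(obj);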
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180305172951.2150-4-marcandre.lureau@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
When doing a drive mirror to low-speed shared storage, if there is a heavy
block I/O write workload in the VM after the 'ready' event, the drive-mirror
block job can't be canceled immediately; it keeps running until the heavy
block I/O workload in the VM stops.
Libvirt depends on the current block-job-cancel semantics, which is that
when used without a flag after the 'ready' event, the command blocks
until data is in sync. However, these semantics are awkward in other
situations, for example, people may use drive mirror for realtime
backups while still wanting to use block live migration. Libvirt cannot
start a block live migration while another drive mirror is in progress,
but the user would rather abandon the backup attempt as broken and
proceed with the live migration than be stuck waiting for the current
drive mirror backup to finish.
The block-job-cancel command already includes a 'force' flag, which libvirt
does not use, although the flag was documented as only being useful to
quit a job which is paused. However, since quitting a paused job has
the same effect as abandoning a backup in a non-paused job (namely, the
destination file is not in sync, and the command completes immediately),
we can just improve the documentation to make the force flag obviously
useful.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: John Snow <jsnow@redhat.com>
Reported-by: Huaitong Han <huanhuaitong@didichuxing.com>
Signed-off-by: Huaitong Han <huanhuaitong@didichuxing.com>
Signed-off-by: Liang Li <liliangleo@didichuxing.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of automatically transitioning from PENDING to CONCLUDED, gate
the .prepare() and .commit() phases behind an explicit acknowledgement
provided by the QMP monitor if auto_finalize = false has been requested.
This allows us to perform graph changes in prepare and/or commit so that
graph changes do not occur autonomously without knowledge of the
controlling management layer.
Transactions that have reached the "PENDING" state together can all be
moved to invoke their finalization methods by issuing block_job_finalize
to any one job in the transaction.
Jobs in a transaction with mixed job->auto_finalize settings will all
remain stuck in the "PENDING" state, as if the entire transaction was
specified with auto_finalize = false. Jobs that specified
auto_finalize = true, however, will still not emit the PENDING event.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
For jobs utilizing the new manual workflow, we intend to prohibit
them from modifying the block graph until the management layer provides
an explicit ACK via block-job-finalize to move the process forward.
To distinguish this runstate from "ready" or "waiting," we add a new
"pending" event and status.
For now, the transition from PENDING to CONCLUDED/ABORTING is automatic,
but a future commit will add the explicit block-job-finalize step.
Transitions:
Waiting -> Pending: Normal transition.
Pending -> Concluded: Normal transition.
Pending -> Aborting: Late transactional failures and cancellations.
Removed Transitions:
Waiting -> Concluded: Jobs must go to PENDING first.
Verbs:
Cancel: Can be applied to a pending job.
+---------+
|UNDEFINED|
+--+------+
|
+--v----+
+---------+CREATED+-----------------+
| +--+----+ |
| | |
| +--+----+ +------+ |
+---------+RUNNING<----->PAUSED| |
| +--+-+--+ +------+ |
| | | |
| | +------------------+ |
| | | |
| +--v--+ +-------+ | |
+---------+READY<------->STANDBY| | |
| +--+--+ +-------+ | |
| | | |
| +--v----+ | |
+---------+WAITING<---------------+ |
| +--+----+ |
| | |
| +--v----+ |
+---------+PENDING| |
| +--+----+ |
| | |
+--v-----+ +--v------+ |
|ABORTING+--->CONCLUDED| |
+--------+ +--+------+ |
| |
+--v-+ |
|NULL<--------------------+
+----+
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some jobs upon finalization may need to perform some work that can
still fail. If these jobs are part of a transaction, it's important
that these callbacks fail the entire transaction.
We allow for a new callback, in addition to commit/abort/clean, that
gives us the opportunity to handle fairly late-breaking failures
in the transactional process.
The expected flow is:
- All jobs in a transaction converge to the PENDING state,
added in a forthcoming commit.
- Upon being finalized, either automatically or explicitly
by the user, jobs prepare to complete.
- If any job fails preparation, all jobs call .abort.
- Otherwise, they succeed and call .commit.
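The new hook sits alongside the existing ones in the job driver; roughly
(a sketch, placement and exact signature per this series):
struct BlockJobDriver {
    /* ... existing fields ... */
    void (*commit)(BlockJob *job);
    void (*abort)(BlockJob *job);
    void (*clean)(BlockJob *job);

    /* New: runs at finalization time and may still fail; a non-zero
     * return fails (and aborts) the entire transaction. */
    int (*prepare)(BlockJob *job);
};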
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
For jobs that have reached their CONCLUDED state, prior to having their
last reference put down (meaning jobs that have completed successfully,
unsuccessfully, or have been canceled), allow the user to dismiss the
job's lingering status report via block-job-dismiss.
This gives management APIs the chance to conclusively determine if a job
failed or succeeded, even if the event broadcast was missed.
Note: block_job_do_dismiss and block_job_decommission happen to do
exactly the same thing, but they're called from different semantic
contexts, so both aliases are kept to improve readability.
Note 2: Don't worry about the 0x04 flag definition for AUTO_DISMISS, she
has a friend coming in a future patch to fill the hole where 0x02 is.
Verbs:
Dismiss: operates on CONCLUDED jobs only.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Which commands ("verbs") are appropriate for jobs in which state is
also somewhat burdensome to keep track of.
As of this commit, it looks rather useless, but begins to look more
interesting the more states we add to the STM table.
A recurring theme is that no verb will apply to an 'undefined' job.
Further, it's not presently possible to restrict the "pause" or "resume"
verbs any more than they are in this commit because of the asynchronous
nature of how jobs enter the PAUSED state; justifications for some
seemingly erroneous applications are given below.
=====
Verbs
=====
Cancel: Any state except undefined.
Pause: Any state except undefined;
'created': Requests that the job pauses as it starts.
'running': Normal usage. (PAUSED)
'paused': The job may be paused for internal reasons,
but the user may wish to force an indefinite
user-pause, so this is allowed.
'ready': Normal usage. (STANDBY)
'standby': Same logic as above.
Resume: Any state except undefined;
'created': Will lift a user's pause-on-start request.
'running': Will lift a pause request before it takes effect.
'paused': Normal usage.
'ready': Will lift a pause request before it takes effect.
'standby': Normal usage.
Set-speed: Any state except undefined, though ready may not be meaningful.
Complete: Only a 'ready' job may accept a complete request.
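Concretely, the Verbs list above boils down to a small permission table,
roughly like this sketch (exact name and layout may differ):
static bool block_job_verb_ok[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
                                /* U, C, R, P, Y, S */
    [BLOCK_JOB_VERB_CANCEL]    = { 0, 1, 1, 1, 1, 1 },
    [BLOCK_JOB_VERB_PAUSE]     = { 0, 1, 1, 1, 1, 1 },
    [BLOCK_JOB_VERB_RESUME]    = { 0, 1, 1, 1, 1, 1 },
    [BLOCK_JOB_VERB_SET_SPEED] = { 0, 1, 1, 1, 1, 1 },
    [BLOCK_JOB_VERB_COMPLETE]  = { 0, 0, 0, 0, 1, 0 },
};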
=======
Changes
=======
(1)
To facilitate "nice" error checking, all five major block-job verb
interfaces in blockjob.c now support an errp parameter:
- block_job_user_cancel is added as a new interface.
- block_job_user_pause gains an errp parameter
- block_job_user_resume gains an errp parameter
- block_job_set_speed already had an errp parameter.
- block_job_complete already had an errp parameter.
(2)
block-job-pause and block-job-resume will no longer no-op when trying
to pause an already paused job, or trying to resume a job that isn't
paused. These functions will now report that they did not perform the
action requested because it was not possible.
iotests have been adjusted to address this new behavior.
(3)
block-job-complete doesn't worry about checking !block_job_started,
because the permission table guards against this.
(4)
test-bdrv-drain's job implementation needs to announce that it is
'ready' now, in order to be completed.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We're about to add several new states, and booleans are becoming
unwieldy and difficult to reason about. It would help to have a
more explicit bookkeeping of the state of blockjobs. To this end,
add a new "status" field and add our existing states in a redundant
manner alongside the bools they are replacing:
UNDEFINED: Placeholder, default state. Not currently visible to QMP
unless changes occur in the future to allow creating jobs
without starting them via QMP.
CREATED: replaces !!job->co && paused && !busy
RUNNING: replaces effectively (!paused && busy)
PAUSED: Nearly redundant with info->paused, which shows pause_count.
This reports the actual status of the job, which almost always
matches the paused request status. It differs in that it is
strictly only true when the job has actually gone dormant.
READY: replaces job->ready.
STANDBY: Paused, but job->ready is true.
New state additions in coming commits will not be quite so redundant:
WAITING: Waiting on transaction. This job has finished all the work
it can until the transaction converges, fails, or is canceled.
PENDING: Pending authorization from user. This job has finished all the
work it can until the job or transaction is finalized via
block_job_finalize. This implies the transaction has converged
and left the WAITING phase.
ABORTING: Job has encountered an error condition and is in the process
of aborting.
CONCLUDED: Job has ceased all operations and has a return code available
for query and may be dismissed via block_job_dismiss.
NULL: Job has been dismissed and (should) be destroyed. Should never
be visible to QMP.
Some of these states appear somewhat superfluous, but it helps define the
expected flow of a job; so some of the states wind up being synchronous
empty transitions. Importantly, jobs can be in only one of these states
at any given time, which helps code and external users alike reason about
the current condition of a job unambiguously.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Trivial; Document what the job creation flags do,
and some general tidying.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
model all independent jobs as single job transactions.
It's one less case we have to worry about when we add more states to the
transition machine. This way, we can just treat all job lifetimes exactly
the same. This helps tighten assertions of the STM graph and removes some
conditionals that would have been needed in the coming commits adding a
more explicit job lifetime management API.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The global hack for creating SCSI devices has recently been removed,
but this apparently broke SCSI devices on some boards that were not
ready for this change yet. For the 40p machine you now get:
$ ppc64-softmmu/qemu-system-ppc64 -M 40p -cdrom x.iso
qemu-system-ppc64: -cdrom x.iso: machine type does not support if=scsi,bus=0,unit=2
Fix it by providing a lsi53c810_create() function that takes care
of calling scsi_bus_legacy_handle_cmdline() after creating the
corresponding SCSI controller.
Fixes: 1454509726
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Wang Xin <wangxinxin.wang@huawei.com>
Message-Id: <1517367668-25048-1-git-send-email-wangxinxin.wang@huawei.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-03-13-v2' into staging
nbd patches for 2018-03-13
- Eric Blake: iotests: Fix stuck NBD process on 33
- Vladimir Sementsov-Ogievskiy: 0/5 nbd server fixing and refactoring before BLOCK_STATUS
- Eric Blake: nbd/server: Honor FUA request on NBD_CMD_TRIM
- Stefan Hajnoczi: 0/2 block: fix nbd-server-stop crash after blockdev-snapshot-sync
- Vladimir Sementsov-Ogievskiy: nbd block status base:allocation
# gpg: Signature made Tue 13 Mar 2018 20:48:37 GMT
# gpg: using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2018-03-13-v2:
iotests: new test 209 for NBD BLOCK_STATUS
iotests: add file_path helper
iotests.py: tiny refactor: move system imports up
nbd: BLOCK_STATUS for standard get_block_status function: client part
block/nbd-client: save first fatal error in nbd_iter_error
nbd: BLOCK_STATUS for standard get_block_status function: server part
nbd/server: add nbd_read_opt_name helper
nbd/server: add nbd_opt_invalid helper
iotests: add 208 nbd-server + blockdev-snapshot-sync test case
block: let blk_add/remove_aio_context_notifier() tolerate BDS changes
nbd/server: Honor FUA request on NBD_CMD_TRIM
nbd/server: refactor nbd_trip: split out nbd_handle_request
nbd/server: refactor nbd_trip: cmd_read and generic reply
nbd/server: fix: check client->closing before sending reply
nbd/server: fix sparse read
nbd/server: move nbd_co_send_structured_error up
iotests: Fix stuck NBD process on 33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Record-replay lockstep execution, log dumper and fixes (Alex, Pavel)
* SCSI fix to pass maximum transfer size (Daniel Barboza)
* chardev fixes and improved iothread support (Daniel Berrangé, Peter)
* checkpatch tweak (Eric)
* make help tweak (Marc-André)
* make more PCI NICs available with -net or -nic (myself)
* change default q35 NIC to e1000e (myself)
* SCSI support for NDOB bit (myself)
* membarrier system call support (myself)
* SuperIO refactoring (Philippe)
* miscellaneous cleanups and fixes (Thomas)
# gpg: Signature made Mon 12 Mar 2018 16:10:52 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
tcg: fix cpu_io_recompile
replay: update documentation
replay: save vmstate of the asynchronous events
replay: don't process async events when warping the clock
scripts/replay-dump.py: replay log dumper
replay: avoid recursive call of checkpoints
replay: check return values of fwrite
replay: push replay_mutex_lock up the call tree
replay: don't destroy mutex at exit
replay: make locking visible outside replay code
replay/replay-internal.c: track holding of replay_lock
replay/replay.c: bump REPLAY_VERSION again
replay: save prior value of the host clock
replay: added replay log format description
replay: fix save/load vm for non-empty queue
replay: fixed replay_enable_events
replay: fix processing async events
cpu-exec: fix exception_index handling
hw/i386/pc: Factor out the superio code
hw/alpha/dp264: Use the TYPE_SMC37C669_SUPERIO
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# default-configs/i386-softmmu.mak
# default-configs/x86_64-softmmu.mak
Merge remote-tracking branch 'remotes/berrange/tags/socket-next-pull-request' into staging
# gpg: Signature made Tue 13 Mar 2018 18:12:14 GMT
# gpg: using RSA key BE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>"
# gpg: aka "Daniel P. Berrange <berrange@redhat.com>"
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/socket-next-pull-request:
char: allow passing pre-opened socket file descriptor at startup
char: refactor parsing of socket address information
sockets: allow SocketAddress 'fd' to reference numeric file descriptors
sockets: check that the named file descriptor is a socket
sockets: move fd_is_socket() into common sockets code
sockets: strengthen test suite IP protocol availability checks
sockets: pull code for testing IP availability out of specific test
cutils: add qemu_strtoi & qemu_strtoui parsers for int/unsigned int types
char: don't silently skip tn3270 protocol init when TLS is enabled
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream-sev' into staging
* Migrate MSR_SMI_COUNT (Liran)
* Update kernel headers (Gerd, myself)
* SEV support (Brijesh)
I have not tested non-x86 compilation, but I reordered the SEV patches
so that all non-x86-specific changes go first to catch any possible
issues (which weren't there anyway :)).
# gpg: Signature made Tue 13 Mar 2018 16:37:06 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream-sev: (22 commits)
sev/i386: add sev_get_capabilities()
sev/i386: qmp: add query-sev-capabilities command
sev/i386: qmp: add query-sev-launch-measure command
sev/i386: hmp: add 'info sev' command
cpu/i386: populate CPUID 0x8000_001F when SEV is active
sev/i386: add migration blocker
sev/i386: finalize the SEV guest launch flow
sev/i386: add support to LAUNCH_MEASURE command
target/i386: encrypt bios rom
sev/i386: add command to encrypt guest memory region
sev/i386: add command to create launch memory encryption context
sev/i386: register the guest memory range which may contain encrypted data
sev/i386: add command to initialize the memory encryption context
include: add psp-sev.h header file
sev/i386: qmp: add query-sev command
target/i386: add Secure Encrypted Virtualization (SEV) object
kvm: introduce memory encryption APIs
kvm: add memory encryption context
docs: add AMD Secure Encrypted Virtualization (SEV)
machine: add memory-encryption option
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no point in reading fields here without actually checking them,
so drop that and read only the header plus the dsdt/facs addresses, since
those are needed later to fetch the tables.
With this cleanup we can get rid of AcpiFadtDescriptorRev3/
ACPI_FADT_COMMON_DEF, which have no users left.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Extend the generic build_fadt() to support the rev5.1 FADT
and reuse it for the 'virt' board; this allows us to phase out
the usage of AcpiFadtDescriptorRev5_1 and, later,
ACPI_FADT_COMMON_DEF.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
It will be extended and reused by a follow-up patch for the ARM target.
PS:
Since it is a generic function now, don't patch the FIRMWARE_CTRL and DSDT
fields if they don't point to tables, since the platform might not
provide them and may use the X_ variants instead where applicable.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Move the FADT data initialization out of fadt_setup() into a dedicated
init_fadt_data() that sets the values common to pc/q35 in the
AcpiFadtData structure; acpi_get_pm_info() will complement it with the
pc/q35-specific value initialization.
That allows us to get rid of fadt_setup() and generalize
build_fadt() so it can easily be extended for rev5 and
reused by the ARM target.
While at it, also move the facs/dsdt/xdsdt offsets from the build_fadt()
argument list into AcpiFadtData, as they belong to the same dataset.
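A sketch of the shared dataset this converges on (field names illustrative
and reduced to the parts discussed here):
typedef struct AcpiFadtData {
    /* values common to pc/q35, filled in by init_fadt_data() */
    uint8_t rev;
    uint16_t sci_int;

    /* table offsets that used to travel as build_fadt() arguments */
    unsigned *facs_tbl_offset;   /* NULL if the board provides no FACS */
    unsigned *dsdt_tbl_offset;
    unsigned *xdsdt_tbl_offset;
} AcpiFadtData;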
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>