Once the options are populated, move the running state to
a run_agent() function.
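A minimal sketch of the intended shape (GAConfig, config_parse() and
config_free() are illustrative names, not necessarily those used in the
actual patch):

    /* Sketch only: option handling stays in main(), while everything
     * that previously ran inline moves into run_agent(). */
    int main(int argc, char **argv)
    {
        GAConfig *config = g_new0(GAConfig, 1);
        int ret;

        config_parse(config, argc, argv);   /* populate all options first */
        ret = run_agent(config);            /* then enter the running state */

        config_free(config);
        return ret;
    }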
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
* fixed up an s/ga_state/s/ artifact causing a segfault
* replaced g_list_free_full with g_list_foreach to maintain glib
2.22 compatibility
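For reference, the usual glib-2.22-compatible replacement for
g_list_free_full() (which only appeared in glib 2.28) is the
g_list_foreach()/g_list_free() pair, roughly:

    /* glib >= 2.28 only:
     *     g_list_free_full(list, g_free);
     * glib 2.22-compatible equivalent: free each element, then the
     * list itself. */
    g_list_foreach(list, (GFunc)g_free, NULL);
    g_list_free(list);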
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Fill all default options during main(). This is a preparation patch
to allow dumping the configuration.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Move option parsing out of giant main().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The following patch will return allocated strings, so we must correctly
initialize, allocate and free them. A nice side effect is that we no
longer have to check for "fixed_state_dir" to call ga_install_service()
with a NULL state dir. The default values are set after parsing the
command-line options.
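A hedged sketch of the resulting pattern (variable names and the default
value are placeholders, not taken from the patch): options start out
NULL, are filled with g_strdup()'d defaults after option parsing when
the user didn't supply a value, and are unconditionally g_free()'d at
the end.

    gchar *state_dir = NULL;

    /* option parsing may set: state_dir = g_strdup(optarg); */

    if (state_dir == NULL) {
        state_dir = g_strdup("/var/run");  /* default set after parsing */
    }

    /* ... use state_dir ... */

    g_free(state_dir);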
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
'path' is already a global function; rename the variable, since it's
going to be in global scope in a later patch.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
In order to avoid any confusion, let's allocate new strings when
splitting.
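A minimal illustration of the idea with a made-up "method:path" input
(split_channel_path() and the format are hypothetical, for illustration
only): each half of the split is returned as a freshly allocated copy
instead of a pointer into the original buffer.

    #include <string.h>
    #include <glib.h>

    static void split_channel_path(const char *str, char **method,
                                   char **path)
    {
        const char *sep = strchr(str, ':');

        if (sep) {
            *method = g_strndup(str, sep - str);   /* new allocation */
            *path = g_strdup(sep + 1);             /* new allocation */
        } else {
            *method = g_strdup(str);
            *path = NULL;
        }
    }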
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The function is going to be reused in a later patch.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The option parsing is going to be moved to a separate function,
so use exit() consistently.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Currently we need to examine config-host.mak to determine whether the
options/probes for MSI package generation had the desired result. Report
this more prominently in the ./configure summary, as we do for other
guest agent configure options.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Previously, if building out-of-tree, the MSI build would fail since
it wasn't able to find the needed files.
Signed-off-by: Leonid Bloch <leonid@daynix.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
* fixed up commit msg formatting
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Previously, running the .msi would unregister the QEMU GA VSS service
if QEMU GA was already installed on the machine, and then register it
only if QEMU GA was NOT previously installed. This behavior caused the
service to be registered only after the INITIAL installation, and any
subsequent run of the .msi (to redo, repair, or upgrade the
installation) ended in the service being unregistered.
Now, the VSS service is still unregistered if QEMU GA is already
installed (so that a fix or an update can be performed), but then it is
registered again (if the GA is not being uninstalled), thus finishing
the repair/upgrade correctly. Additionally, downgrading is now
prevented: a user who wants to downgrade must uninstall the newer
version first.
Signed-off-by: Leonid Bloch <leonid@daynix.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
This is done to follow the recommendations given here: https://msdn.microsoft.com/en-us/library/aa368269%28VS.85%29.aspx
Signed-off-by: Leonid Bloch <leonid@daynix.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
For compatibility, all the letters in the GUID should be uppercase.
Signed-off-by: Leonid Bloch <leonid@daynix.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
guest_base must be used only in linux-user mode.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-id: 1440757421-9674-1-git-send-email-laurent@vivier.eu
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix a capitalization error in the OS X build instructions;
this was picked up in review of commit b352153f5f and intended to be
corrected before I applied it, but I accidentally didn't include it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
qemu-doc.texi: Add information on compiling source code on Mac OS X
Add information to the documentation on how to build QEMU
on Mac OS X.
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed a minor capitalization error]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/weil/tags/pull-tci-20150826' into staging
tci patch queue
# gpg: Signature made Wed 26 Aug 2015 19:51:07 BST using RSA key ID 677450AD
# gpg: Good signature from "Stefan Weil <sw@weilnetz.de>"
# gpg: aka "Stefan Weil <stefan.weil@weilnetz.de>"
# gpg: aka "Stefan Weil <stefan.weil@bib.uni-mannheim.de>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4923 6FEA 75C9 5D69 8EC2 B78A E08C 21D5 6774 50AD
* remotes/weil/tags/pull-tci-20150826:
exec-all: Translate TCI return addresses backwards too
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This subtraction of return addresses applies directly to TCI as well as
host-TCG. This fixes Linux boots for at least Microblaze, CRIS, ARM and
SH4 when using TCI.
[sw: Removed indentation for preprocessor statement]
[sw: The patch also fixes Linux boot for x86_64]
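To illustrate the point (a sketch only; the actual constant and helpers
in QEMU differ): a host return address points just past the call, so it
is backed up to fall inside the calling instruction before being mapped
back to a guest PC, and the same adjustment now applies when the 'host'
is the TCI interpreter.

    /* Sketch: back the return address up into the call site before
     * resolving which translation block (and guest PC) it belongs to. */
    uintptr_t ra = (uintptr_t)__builtin_return_address(0);
    cpu_restore_state(cpu, ra - 1);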
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
The _cmp_bytes variable added by commit "bea60dd ui/vnc: fix potential
memory corruption issues" can become negative. The result is (possibly
exploitable) memory corruption. The reason is that it uses the stride
instead of the number of bytes per scanline to apply limits.
For the server surface this is actually fine: vnc creates that surface
itself, there is never any padding, and thus the scanline length always
equals the stride.
For the guest surface, scanline length and stride are typically
identical too, but it doesn't have to be that way. So add and use a new
variable (guest_ll) for the guest scanline length. Also rename
min_stride to line_bytes to make it clearer what it actually is.
Finally, sprinkle in an assert() to make sure we never use a negative
_cmp_bytes again.
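A simplified sketch of the resulting arithmetic (names follow the
description above rather than the literal patch): the comparison length
is derived from the scanline byte counts, not the strides, and can no
longer go negative.

    /* Compare only bytes that exist on both the guest and server
     * scanlines; the assert documents that the result is non-negative. */
    line_bytes = MIN(server_stride, guest_ll);
    _cmp_bytes = MIN(x * cmp_bytes + cmp_bytes, line_bytes) - x * cmp_bytes;
    assert(_cmp_bytes >= 0);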
Reported-by: 范祚至(库特) <zuozhi.fzz@alibaba-inc.com>
Reviewed-by: P J P <ppandit@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-7-git-send-email-peter.maydell@linaro.org
Implement the remaining stage 1 TLB invalidate operations
visible from EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-6-git-send-email-peter.maydell@linaro.org
Implement the missing TLBI operations that exist only
if EL2 is implemented.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-5-git-send-email-peter.maydell@linaro.org
Now we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the
per-ASID TLB ops work like the non-per-ASID ones because we don't
support flushing a TLB only by ASID) and to bring the function names in
line with the architectural TLBI operation names.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-4-git-send-email-peter.maydell@linaro.org
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-3-git-send-email-peter.maydell@linaro.org
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
that we don't need to flush.
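Conceptually, each new helper clears only the slice of the software TLB
that backs one MMU index instead of the whole table; a rough sketch
(field names are illustrative, not the exact QEMU ones):

    /* Wipe only the per-mmu-index portion of the TLB.  Setting all
     * bits makes every address comparison fail, which is how a
     * QEMU-style software TLB marks entries invalid. */
    static void tlb_flush_one_mmuidx(CPUArchState *env, int mmu_idx)
    {
        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
    }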
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-2-git-send-email-peter.maydell@linaro.org
Implement the AArch32 ATS1H* operations which perform
Hyp mode stage 1 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-6-git-send-email-peter.maydell@linaro.org
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-5-git-send-email-peter.maydell@linaro.org
Some coprocessor register access functions need to be able
to report "trap to EL3 with an 'uncategorized' syndrome";
add the necessary CPAccessResult enum and handling for it.
I don't currently know of any registers that need to trap
to EL2 with the 'uncategorized' syndrome, but adding the
_EL2 enum as well is trivial and fills in what would
otherwise be an odd gap in the handling.
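In outline (a sketch of the shape of the change, not a verbatim copy),
the enum gains 'uncategorized' trap-to-EL2/EL3 results that register
access functions can return:

    typedef enum CPAccessResult {
        CP_ACCESS_OK = 0,
        CP_ACCESS_TRAP,
        CP_ACCESS_TRAP_UNCATEGORIZED,
        /* new: trap to EL2 or EL3 with an "uncategorized" syndrome */
        CP_ACCESS_TRAP_UNCATEGORIZED_EL2,
        CP_ACCESS_TRAP_UNCATEGORIZED_EL3,
    } CPAccessResult;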
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-4-git-send-email-peter.maydell@linaro.org
Wire up the AArch64 EL2 and EL3 address translation operations
(AT S12E1*, AT S12E0*, AT S1E2*, AT S1E3*), and correct some
errors in the ats_write64() function in previously unused code
that would have done the wrong kind of lookup for accesses from
EL3 when SCR.NS==0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-3-git-send-email-peter.maydell@linaro.org
For EL2 stage 1 translations, there is no TTBR1. We were already
handling this for 64-bit EL2; add the code to take the 'no TTBR1'
code path for 32-bit EL2 as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-2-git-send-email-peter.maydell@linaro.org
We already implemented ACTLR_EL1; add the missing ACTLR_EL2 and
ACTLR_EL3, for consistency.
Since we don't currently have any CPUs that need the EL2/EL3
versions to reset to non-zero values, implement as RAZ/WI.
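For reference, a RAZ/WI register of this kind is just a constant-zero
regdef; a hedged sketch in the style of QEMU's ARMCPRegInfo tables (the
encoding fields are omitted here rather than checked against the ARM
ARM):

    /* ARM_CP_CONST with resetvalue 0 gives reads-as-zero,
     * writes-ignored behaviour. */
    { .name = "ACTLR_EL2", .state = ARM_CP_STATE_BOTH,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },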
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-5-git-send-email-peter.maydell@linaro.org
The AFSR registers are implementation dependent auxiliary fault
status registers. We already implemented a RAZ/WI AFSR0_EL1 and
AFSR1_EL1; add the missing AFSR{0,1}_EL{2,3} for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-4-git-send-email-peter.maydell@linaro.org
The AMAIR registers are for providing auxiliary implementation
defined memory attributes. We already implemented a RAZ/WI
AMAIR_EL1; add the EL2 and EL3 versions for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-3-git-send-email-peter.maydell@linaro.org
Add the AArch64 registers MAIR_EL3 and TPIDR_EL3, which are the only
two registers for which we had implemented the 32-bit Secure
equivalents but not the 64-bit Secure versions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-2-git-send-email-peter.maydell@linaro.org
Add the Xilinx ZynqMP SoC and EP108 machine to the maintainers
file.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: fed078103a0b02cfb3adadbe8e80e4420d554505.1436486024.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter C is leaving Xilinx, so update the maintainer list
to point to Alistair and Edgar from Xilinx and Peter's
personal email address.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 54b4c070452bac05aa3a9c1d75899bc097fef831.1436486024.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Xilinx EP108 has four separate OCM banks which are located
adjacent to each other. This patch adds the four banks to
the ZynqMP SoC.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: afa6ba31163a5d541a0bef4b0dc11f2597e0c495.1436813543.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base
and the macros GUEST_BASE and RESERVED_VA become useless: replace
them with their values.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
All TCG host architectures now support the guest base and, as there is
no real performance loss, it can always be enabled.
In any case, use of the guest base can effectively be disabled by
setting it to 0.
CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY), so
its uses should be replaced by CONFIG_USER_ONLY; but as some other
parts already use !CONFIG_SOFTMMU, I have chosen to use !CONFIG_SOFTMMU
instead.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Currently, we get to the slow path for any unaligned access in the
backend, because we effectively preserve the bottom address bits
below the alignment requirement when comparing with the TLB entry,
so any non-0 bit there will cause the compare to fail.
For the same number of instructions, we can instead add the access
size - 1 to the address and stick to clearing all the bottom bits.
That means that normal unaligned accesses will not fall back (the HW
will handle them fine). Only when crossing a page boundary will we
end up with a mismatch, because we'll end up pointing to the next
page, which cannot possibly be in that same TLB entry.
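Expressed in C terms (an illustrative sketch, not the generated ppc
code): biasing by size - 1 before masking makes an access that crosses
a page compare against the next page's address, which can never match
the TLB entry looked up for the first byte, so only those accesses take
the slow path.

    /* Old check kept the low bits below the alignment requirement, so
     * any unaligned access mismatched.  New check: */
    target_ulong cmp = (addr + size - 1) & TARGET_PAGE_MASK;
    bool fast_path = (cmp == (tlb_entry_addr & TARGET_PAGE_MASK));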
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Message-Id: <1437455978.5809.2.camel@kernel.crashing.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Softmmu unaligned loads/stores currently go through the slow
path for two reasons:
- to support unaligned accesses on hosts with strict alignment
- to correctly handle accesses crossing pages
x86 is only affected by the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We therefore would like
to see them go through the fast path if they don't cross pages.
For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last
byte, ensures that we don't cross pages. We can do this check without
adding more instructions in the TLB code (but increasing its length by
one byte) by using the LEA instruction to combine the existing move with
the size addition.
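Roughly, in C with the corresponding x86 instructions noted (a sketch
of the idea with illustrative variable names, not the emitted code):
LEA lets the address copy and the size - 1 addition share a single
instruction, so the fast-path sequence does not grow.

    uintptr_t tmp = addr + (size - 1);      /* lea size-1(%addr), %tmp */
    tmp &= TARGET_PAGE_MASK;                /* and $page_mask, %tmp    */
    bool hit = (tmp == tlb_compare_value);  /* cmp tlb entry, %tmp     */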
On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
[rth: Tidied calculation of the offset mask]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1436467197-2183-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>