now simply return a cached value which is set upon mode changes.
The biggest problem was protected_mode(), which did something like:
return CR0.PE && !EFLAGS.VM
That cost adds up when it is executed many times in branch
functions etc. Now, cached values are set on mode changes and
simply sampled instead.
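Roughly, the caching pattern looks like this (a standalone
sketch; the member names are mine, not the actual code):

  struct ModeCache {
    bool cr0_pe;        // CR0.PE, changes only on MOV CR0 etc.
    bool eflags_vm;     // EFLAGS.VM, changes on task switch, IRET, ...
    bool cached_pmode;  // cached result of protected_mode()

    // called from the (rare) mode-changing paths
    void update() { cached_pmode = cr0_pe && !eflags_vm; }

    // the hot path is now a plain load instead of a recomputation
    bool protected_mode() const { return cached_pmode; }
  };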
Used patch.disasm to:
1) clean up the disasm output, making the display of extra
stuff optional.
2) include the part of the patch which displays displacements
as proper addresses.
and Jas Sandys-Lumsdaine to split out common instructions into
variants which deal with the mod=11b case (Reg-Reg) and the
other cases (which do memory ops). Actually, I only split
MOV_GwEw and MOV_GdEd for now. According to some instrumentation
of a Win95 boot, they were by far the most frequently used opcodes.
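The split looks roughly like this (a self-contained sketch; all
names here are illustrative, not the ones in the patch). The mod
field is tested once at decode time, so neither execution handler
carries a reg-vs-memory branch:

  struct Insn;
  typedef void (*ExecPtr)(Insn *);

  static void MOV_GdEd_Reg(Insn *) { /* reg-to-reg: no address calc at all */ }
  static void MOV_GdEd_Mem(Insn *) { /* resolve effective address, read memory */ }

  struct Insn { ExecPtr execute; };

  static void decode_MOV_GdEd(Insn *i, unsigned char modrm)
  {
    if ((modrm & 0xc0) == 0xc0)    // mod == 11b: register operand
      i->execute = MOV_GdEd_Reg;
    else                           // mod != 11b: memory operand
      i->execute = MOV_GdEd_Mem;
  }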
Essentially, when I coded a few of the instructions to use
asm()s for acceleration of the eflags, I got lazy and used
the asm() only to compute the eflags, letting the normal C
code do the actual operation. Jas's patch moved the asm()s
such that they now do the work of the operation as well.
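A minimal sketch of the "asm does the whole operation" style,
assuming a 32-bit x86 host and GCC (illustrative, not the patch
itself):

  typedef unsigned int Bit32u;

  #define EFlagsOSZAPCMask 0x000008d5   // see the cpu.h macros below

  static inline Bit32u add32_asm(Bit32u op1, Bit32u op2, Bit32u *arithFlags)
  {
    Bit32u result = op1, flags;
    asm ("addl %2, %1\n\t"   // the host ADD performs the operation...
         "pushfl\n\t"        // ...and its real EFLAGS are captured,
         "popl %0"           // so no lazy-flags C code runs afterwards
         : "=r" (flags), "+r" (result)
         : "g" (op2)
         : "cc");
    *arithFlags = flags & EFlagsOSZAPCMask;  // keep OF/SF/ZF/AF/PF/CF only
    return result;
  }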
The patches look great. The code reads a lot better as well.
Further work can be done to give the compiler more options with
register scheduling.
were simply replacements of the eflags mask constants in asm()
statements with the macro names already in cpu.h. I forgot
to use the macros for some instructions.
0x000008d5 -> EFlagsOSZAPCMask
0x000008d4 -> EFlagsOSZAPMask
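For reference, those constants decompose into the individual
flag bits like this (the per-flag mask names follow the cpu.h
naming pattern; the values are re-derived from the EFLAGS bit
positions):

  #define EFlagsCFMask 0x00000001  // bit  0
  #define EFlagsPFMask 0x00000004  // bit  2
  #define EFlagsAFMask 0x00000010  // bit  4
  #define EFlagsZFMask 0x00000040  // bit  6
  #define EFlagsSFMask 0x00000080  // bit  7
  #define EFlagsOFMask 0x00000800  // bit 11

  // 0x8d5 = OF|SF|ZF|AF|PF|CF ; 0x8d4 is the same without CF
  #define EFlagsOSZAPCMask \
    (EFlagsOFMask|EFlagsSFMask|EFlagsZFMask|EFlagsAFMask|EFlagsPFMask|EFlagsCFMask)
  #define EFlagsOSZAPMask (EFlagsOSZAPCMask & ~EFlagsCFMask)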
Some things changed in the ctrl_xfer*.cc, fetchdecode*.cc,
and cpu.cc since the original patches, so I did some patch
integration by hand. Check the placement of the
macros BX_INSTR_FETCH_DECODE_COMPLETED() and BX_INSTR_OPCODE()
in cpu.cc to make sure I got them right. Also, I changed the
parameters to BX_INSTR_OPCODE() to update them to the new code.
I put some comments before each of these to help determine if
the placement is right.
These macros are only compiled in if you are gathering instrumentation
data from bochs, so they shouldn't affect others.
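Such hooks usually take this shape (a sketch; the argument list
here is illustrative, not the actual one):

  #if BX_INSTRUMENTATION
    #define BX_INSTR_OPCODE(cpu, opcode, len) bx_instr_opcode((cpu), (opcode), (len))
  #else
    #define BX_INSTR_OPCODE(cpu, opcode, len)  /* compiled out */
  #endif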
Created 64-bit versions of some branch instructions and
changed fetchdecode64.cc to use them instead. This keeps the
#ifdef pollution down for 32-bit code and makes fixing them
easier. They needed to clear the upper bits of RIP for
16-bit operand sizes. They also should not have had a protection
limit check in them, especially since that field is still
32-bit in cpu.h, so there's no way to set nominal 64-bit values.
The 32-bit versions were also not honoring the upper 32 bits
of RIP. The new 64-bit versions are:
LOOPNE64_Jb
LOOPE64_Jb
LOOP64_Jb
JCXZ64_Jb
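The fixed semantics, sketched for LOOP64_Jb (names assumed;
simplified, with no address-size override handling). Note what
is absent: no code-segment limit check, since the 32-bit limit
field can't describe 64-bit targets:

  #include <stdint.h>

  static void LOOP64_Jb_sketch(uint64_t *rip, uint64_t *rcx,
                               int8_t disp8, int opsize16)
  {
    *rcx -= 1;                          // decrement the count register
    if (*rcx != 0) {
      uint64_t target = *rip + (int64_t)disp8;
      if (opsize16) target &= 0xffff;   // 16-bit opsize: clear upper RIP bits
      *rip = target;
    }
  }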
Changed all occurrences of JCC_Jw/JCC_Jd in fetchdecode64.cc to
use JCC_Jq, which was coded already. Both JMP_Jq and JCC_Jq are
now fixed w.r.t. 16-bit opsizes and upper RIP bit clearing.
63..16 when a 16-bit operand-size JMP is executed. The previous
fix cleared only 63..32. I have since realized that this is the
case which parallels the 32-bit semantics.
fetching 64-bit address opcode info, which was incorrect.
Fixed. Got rid of BxImmediate_Oq. fetchdecode64.cc now
uses BxImmediate_O, like the fetch routine does. Addresses which
are embedded in the opcode have a size which depends on
the current addressing size. For long mode, this is
either 64 bits (the default) or 32 bits (with an AddrSize
override). BxImmediate_O now conditionally fetches based on
AddrSize.
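The conditional fetch, schematically (helper name assumed):

  #include <stdint.h>
  #include <string.h>

  // The opcode-embedded address of MOV AL,Ob / MOV Ob,AL etc. is
  // 8 bytes with the default 64-bit address size, 4 bytes with a
  // 0x67 override.
  static uint64_t fetch_BxImmediate_O(const uint8_t *iptr, int as64)
  {
    uint64_t addr = 0;
    memcpy(&addr, iptr, as64 ? 8 : 4);  // little-endian host assumed
    return addr;
  }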
64-bit bug#2: In JMP_Jq(), when the current operand size is
16 bits, the upper dword of RIP was not being cleared. The
semantics in this case are weird - one would think the top
48 bits would be cleared, but apparently only the top
32 bits are. Anyways, I fixed this.
Replaced some of the messy immediate fetching (byte-by-byte) in
fetchdecode64.cc with ReadHost{Q,D}WordFromLittleEndian() calls
for cleanliness. Should do this for all the cases, plus
the 32-bit stuff.
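For illustration (the Bochs macro name is real; the definition
below is a simplified stand-in for little-endian hosts):

  #include <stdint.h>
  #include <string.h>

  #define ReadHostDWordFromLittleEndian(hostPtr, nativeVar32) \
    memcpy(&(nativeVar32), (hostPtr), 4)

  static uint32_t fetch_imm32(const uint8_t *iptr)
  {
    uint32_t imm;
    // replaces: imm = iptr[0] | (iptr[1]<<8) | (iptr[2]<<16) | (iptr[3]<<24);
    ReadHostDWordFromLittleEndian(iptr, imm);
    return imm;
  }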
Since SYSCALL replaces the LOADALL instruction, it is incompatible
with earlier CPU types.
At the moment, SYSCALL is only enabled for x86-64 emulation, but
the code could also be incorporated into IA32-only emulations.
Instructions added:
0F 05 SYSCALL (replaces LOADALL)
0F 07 SYSRET (new)
TODO: restructure the #if ... so that it can be used by non-x86-64 emulations.
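For orientation, the 64-bit SYSCALL register transfer looks
roughly like this (architectural behavior per the AMD manuals,
not the Bochs code; MSR plumbing and CS/privilege setup elided):

  #include <stdint.h>

  struct Regs64 { uint64_t rip, rcx, r11, rflags; };

  static void syscall64_sketch(struct Regs64 *r, uint64_t lstar, uint64_t sfmask)
  {
    r->rcx     = r->rip;      // return address saved in RCX
    r->r11     = r->rflags;   // caller's RFLAGS saved in R11
    r->rip     = lstar;       // entry point taken from the LSTAR MSR
    r->rflags &= ~sfmask;     // clear the flags selected by SFMASK
  }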
use getB_CF() etc. getB_CF() and friends are only for the
relatively small number of cases where a true boolean/binary
value (0 or 1) is required, rather than the 0 or non-0 returned
by get_CF().
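The distinction, schematically (simplified stand-ins for the
real accessors):

  static unsigned lazy_cf_result;   // whatever the last arithmetic op left

  static unsigned get_CF(void)  { return lazy_cf_result; }    // 0 or non-0
  static unsigned getB_CF(void) { return get_CF() ? 1 : 0; }  // exactly 0 or 1

  // a case that genuinely needs the boolean form: placing CF in
  // bit 0 of an EFLAGS image
  static unsigned eflags_cf_bit(void) { return getB_CF() << 0; }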
loadSRegLMNominal(), which should be used to load a segment
register in long mode with nominal values that are compatible
with existing checks and expectations for descriptor cache values.
Fixed 64-bit iret to not do a descriptor fetch if SS selector is null.
Also load SS with loadSRegLMNominal() in the same case.
was not correct (it used == 0 rather than (s & 0xfffc) == 0).
Also, with a null SS selector, it was fetching the descriptor
anyway. Put more code inside the if (selector != NULL) clause.
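The corrected test, schematically: bits 1..0 of a selector are
the RPL and must be ignored, so values 0x0000..0x0003 are all
null:

  static int selector_is_null(unsigned short s) { return (s & 0xfffc) == 0; }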
As a temporary measure, I added the local INIT_64_DESCRIPTOR
from segment_ctrl_pro.cc and used it in the case where the
SS selector is null. We need a real function which sets a
descriptor to nominal long-mode values. I'm going to do that
next... I can't stand seeing the current hacks. :^)
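What "nominal" might look like (field names assumed, not the
eventual function): a flat, present, maximal-limit cache entry
so the existing consistency checks pass:

  #include <stdint.h>

  struct SegDescCache {
    uint64_t base; uint32_t limit; int valid, present;
  };

  static void loadSRegLMNominal_sketch(struct SegDescCache *d)
  {
    d->base    = 0;           // segmentation is flat in long mode
    d->limit   = 0xffffffff;  // maximal limit
    d->valid   = 1;
    d->present = 1;
  }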
Fixed/updated/cleaned guest2host TLB speedups for Long mode.
I now can boot the Linux x86-64 kernel to the VFS mount message,
using all the accelerations.
these from interfering with a normal compile, here's what I did.
In config.h.in (which will generate config.h after a configure),
I added a #define called KPL64Hacks:
#define KPL64Hacks
*After* running configure, you must set this by hand. It will
default to off, so you won't get my hacks in a normal compile.
This will go away soon. There is also a macro just after that
called BailBigRSP(). You don't need to enable that, but you
can. In many of the instructions which seemed like they could
be hit by the fetchdecode64() process, but which also touched
EIP/ESP, I inserted a macro. Usually this macro expands to nothing.
If you like, you can enable it, and it will panic if it finds
the upper bits of RIP/RSP set. This helped me find bugs.
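The shape of the check, as I've described it (the gating define
and the exact body here are assumed, not the actual patch):

  #if ENABLE_BailBigRSP           // off by default: expands to nothing
    #define BailBigRSP(name) \
      if ((RIP > 0xffffffffULL) || (RSP > 0xffffffffULL)) \
        BX_PANIC(("%s: upper bits of RIP/RSP set", (name)))
  #else
    #define BailBigRSP(name)
  #endif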
Also, I cleaned up the emulation in ctrl_xfer{8,16,32}.cc.
There were some really old legacy code snippets which directly
accessed operands on the stack with access_linear. Lots of
ugly code instead of just pop_32() etc. Cleaning those up
minimized the number of instructions which directly manipulate
the stack pointer, which should help in refining 64-bit support.
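The flavor of the cleanup (signatures simplified): one stack
helper owns the SS:ESP arithmetic instead of each instruction
open-coding a linear access:

  typedef unsigned int Bit32u;
  extern void pop_32(Bit32u *val32);

  static Bit32u fetch_return_eip(void)
  {
    Bit32u return_EIP;
    // before: compute SS.base + ESP by hand, call access_linear(),
    //         then adjust ESP manually
    // after:  the helper does all of that in one audited place
    pop_32(&return_EIP);
    return return_EIP;
  }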
user can turn on/off the use of native host-specific inline asm
statements. By default, this option is enabled, so you only
need it to disable inline asms in your compile for now.
Currently, inline asm() statements are used only in x86+GCC
environments. Eventually, other platforms could specify
some asm()s; probably for endian issues such as byte-swapping
and unaligned memory accesses. On x86, there are some inline
asm()s which do the arithmetic EFLAGS processing so that the
lazy flags handling is somewhat bypassed. Eventually, I'll
add more, at least for the more common instructions. This
adds a little extra performance.
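An example of the endian category mentioned above: a GCC x86
byte-swap (illustrative; not necessarily what will be added):

  static inline unsigned int bswap32_sketch(unsigned int val)
  {
    asm ("bswap %0" : "+r" (val));
    return val;
  }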
- return model=2 so that Linux recognizes the processor as having an APIC.
We don't really know what Hammer returns.
- in SetCR4, allow bits 9 and 10 to be written
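For reference, the two bits in question (mask layout per the
IA-32 architecture; the surrounding SetCR4 code is not shown):

  #define CR4_OSFXSR      (1u << 9)   // OS supports FXSAVE/FXRSTOR
  #define CR4_OSXMMEXCPT  (1u << 10)  // OS supports unmasked SIMD FP exceptions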
the icache pageStamp check too early, before it was known
that the TLB entry would produce a physical address in
range of the normal part of physical memory. PCI accesses
were causing seg faults because of this. I haven't tested the
fix with PCI, though.
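The corrected ordering, schematically (names assumed): the
pageStamp table is consulted only once the physical address is
known to be ordinary RAM, so PCI/MMIO addresses never index it:

  static int icache_stamp_usable(unsigned long paddr, unsigned long mem_size)
  {
    if (paddr >= mem_size)   // PCI or other non-RAM space
      return 0;              // bypass the icache; no stamp lookup at all
    return 1;                // safe to check pageStamp[paddr >> 12] now
  }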
which said to paste getB_ with flag and then paste the result
with "(". It should be "getB_##flag(void)"; some preprocessors
complain about pasting the symbol with the paren.
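The fix, schematically: '##' must yield a single valid
preprocessing token, and an identifier pasted with '(' is not
one:

  // broken: getB_##flag##(void)  -- pastes "getB_CF" with "("
  // fixed:
  #define DECLARE_GETB(flag) unsigned getB_##flag(void)

  DECLARE_GETB(CF);   // expands to: unsigned getB_CF(void);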