these from interfering with a normal compile, here's what I did.
In config.h.in (which will generate config.h after a configure),
I added a #define called KPL64Hacks:
#define KPL64Hacks
*After* running configure, you must set this by hand. It will
default to off, so you won't get my hacks in a normal compile.
This will go away soon. There is also a macro just after that
called BailBigRSP(). You don't need to enable it, but you
can. In many of the instructions which seemed like they could
be hit by the fetchdecode64() process, but which also touched
EIP/ESP, I inserted a macro. Usually this macro expands to nothing.
If you like, you can enable it, and it will panic if it finds
the upper bits of RIP/RSP set. This helped me find bugs.
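For illustration, a minimal sketch of what such a macro could look like
(this is not the actual definition; the RIP/RSP spellings and the use of
BX_PANIC here are assumptions):

  #define BailBigRSP(name)                                   \
    if ((RIP >> 32) != 0 || (RSP >> 32) != 0)                \
      BX_PANIC(("%s: upper bits of RIP/RSP set", (name)))

When left disabled it expands to nothing, so it costs nothing in a
normal build.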
Also, I cleaned up the emulation in ctrl_xfer{8,16,32}.cc.
There were some really old legacy code snippets which directly
accessed operands on the stack with access_linear. Lots of
ugly code instead of just pop_32() etc. Cleaning those up
minimized the number of instructions which directly manipulate
the stack pointer, which should help in refining 64-bit support.
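As a rough before/after sketch (not the actual ctrl_xfer code; pop_32()
is assumed here to pop 32 bits off the guest stack and adjust ESP
itself), a near return now reads its return address like this:

  Bit32u return_EIP;
  pop_32(&return_EIP);   /* instead of access_linear(ESP, 4, ...) plus a manual ESP += 4 */
  EIP = return_EIP;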
- return model=2 so that Linux recognizes the processor as having an APIC.
We don't really know what Hammer returns.
- in SetCR4, allow bits 9 and 10 to be written
in cpu.cc out of the main loop, and into the asynchronous
events handling. I went through all the code paths, and
there doesn't seem to be any reason for that code to be
in the hot loop.
Added another accessor for getting instruction data, called
modC0(). A lot of instructions test whether the mod field
of mod-nnn-rm is 0xc0 or not, i.e., it's a register operation
and not memory. So I flag this in fetchdecode{,64}.cc.
This added on the order of 1% performance improvement for
a Win95 boot.
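A hedged sketch of the idea (names and bit layout here are made up for
illustration, not the real bxInstruction_c code):

  struct InsnSketch {
    unsigned metaInfo = 0;                                // packed decode info
    void setModC0(bool c0) { metaInfo = (metaInfo & ~1u) | (c0 ? 1u : 0u); }
    bool modC0() const     { return metaInfo & 1u; }
  };

  void decodeModRM(InsnSketch *i, unsigned char modrm)
  {
    i->setModC0((modrm & 0xc0) == 0xc0);                  // recorded once at decode time
  }

  void someHandler(const InsnSketch *i)
  {
    if (i->modC0()) { /* register operand: use the register file directly */ }
    else            { /* memory operand: resolve the effective address first */ }
  }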
Macroized a few leftover calls to Write_RMV_virtual_xyz()
that didn't get modified in the x86-64 merge. Really, they
just call the real function for now, but I want to have them
available to do direct writes with the guest2host TLB pointers.
to bitfields. bxInstruction_c is now 24 bytes, including 4 for
the memory addr resolution function pointer, and 4 for the
execution function pointer (16 + 4 + 4).
Coded more accessors, to abstract access from most code.
with accessors. Had to touch a number of files to update those
accesses to use the new accessors.
Moved rm_addr to the CPU structure, to slim down bxInstruction_c
and to prevent future instruction caching from getting sprayed
with writes to individual rm_addr fields. There only needs to
be one. I still need to deal with instructions which have
static non-modrm addresses but which are using rm_addr, since
that will change.
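As a sketch of what that change looks like in an address-resolution
routine (the method and field names here are illustrative only):

  void BX_CPU_C::Resolve32Mod0Rm0(bxInstruction_c *i)
  {
    BX_CPU_THIS_PTR rm_addr = EAX;     // was: i->rm_addr = EAX;
  }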
bxInstruction_c is down to about 40 bytes now. Trying to
get down to 24 bytes.
use accessors. This lets me work on compressing the
size of the fetch-decode structure (now called bxInstruction_c).
I've reduced it down to about 76 bytes. We should be able
to do much better soon. I needed the abstraction of the
accessors, so I have a lot of freedom to re-arrange things
without making massive future changes.
Lost a few percent of performance in these mods, but my
main focus was to get the abstraction.
be used at all, and Peter didn't want it. "extdb.o" is compiled
into libcpu.a, if configured for it.
Removed a few #warnings for x86-64 compile, based on Peter's
line-item comments regarding the warnings I inserted during
the port/merge.
printing a message when a reserved bit was set, but not causing
a #GP(0). As well, I force a new PAE support option to 1 when
Hammer support is enabled.
cpu64 directories. Instead of using the macros introduced in cpu.h rev 1.37
such as GetEFlagsDFLogical and SetEFlagsDF and ClearEFlagsDF, I made inline
methods on the BX_CPU_C object that access the eflags fields. The problem
with the macros is that they cannot be used outside the BX_CPU_C object. The
macros have now been removed, and all references to eflags now use these new
accessors.
- I debated whether to put the accessors as members of the BX_CPU_C object
or members of the bx_flags_reg_t struct. I chose to make them members
of BX_CPU_C for two reasons: 1. the lazy flags are implemented as
members of BX_CPU_C, and 2. the eflags are referenced in many many places
and it is more compact without having to put eflags in front of each. (The
real problem with compactness is having to write BX_CPU_THIS_PTR in front of
everything, but that's another story.)
- Kevin pointed out a major bug in my set accessor code. What a difference a
little tilde can make! That is fixed now.
- modified: load32bitOShack.cc debug/dbg_main.cc
and in both cpu and cpu64 directories:
cpu.cc cpu.h ctrl_xfer_pro.cc debugstuff.cc exception.cc flag_ctrl.cc
flag_ctrl_pro.cc init.cc io.cc io_pro.cc proc_ctrl.cc soft_int.cc
string.cc vm8086.cc
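For reference, the accessors look roughly like this (a sketch; the exact
names and the underlying eflags storage may differ):

  // inline methods on BX_CPU_C, usable wherever the CPU object is visible
  inline bx_bool BX_CPU_C::get_DF(void)        { return BX_CPU_THIS_PTR eflags.df; }
  inline void    BX_CPU_C::set_DF(bx_bool val) { BX_CPU_THIS_PTR eflags.df = val; }
  inline void    BX_CPU_C::clear_DF(void)      { BX_CPU_THIS_PTR eflags.df = 0; }

  // so code outside the object (e.g. the debugger) can write:
  //   if (BX_CPU(0)->get_DF()) ...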
You need to use '--enable-global-pages' to configure in support.
If you have something to boot that uses them, give them a
spin. Really they were introduced for PPro and above, but
I haven't put in any limits. CPUID and CR4 report the proper
bits when configured, regardless of --enable-cpu-level at the
moment.
All the EFLAGS bits used to be cached in separate fields. I left
a few of them in separate fields for now - might remove them
at some point also. When the arithmetic fields are known
(i.e., they're not in lazy mode), they are all cached in a
32-bit EFLAGS image, just like the x86 EFLAGS register expects.
All other eflags are stored in the 32-bit register also, with
a few also mirrored in separate fields for now.
The reason I did this was so that on x86 hosts, asm() statements
can be #ifdef'd in to do the calculation and get the native
eflags results very cheaply. Just to test that it works, I
coded ADD_EdId() and ADD_EwIw() with some conditionally compiled
asm()s for accelerated eflags processing and it works.
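To give the flavor (a simplified sketch, not the actual ADD_EdId() code),
an add on an x86 host built with gcc can produce the real EFLAGS almost
for free:

  #if defined(__GNUC__) && defined(__i386__)
    Bit32u sum = op1, flags32;
    asm ("addl %2, %0 \n\t"
         "pushfl      \n\t"
         "popl  %1"
         : "+r" (sum), "=r" (flags32)
         : "g" (op2)
         : "cc");
    // flags32 now holds the host's OF/SF/ZF/AF/PF/CF for this add;
    // copy them into the 32-bit EFLAGS image instead of going lazy.
  #else
    // portable path: do the add in C and leave the flags to the lazy-flags code
  #endif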
-Kevin
it can decide how to proceed. Some of those bits are necessary
to make TLB invalidation decisions. INVLPG doesn't cause
a whole TLB flush anymore, just one page. Some of the
current CPU behaviours model the P6, especially on CR0
reloads. Earlier processors kept some pre-change pre-fetched
instructions until a branch. We could probably model that
by setting a flag, and letting the revalidate_prefetch_q
function cause serialization.
The TLB flush code only invalidates entries which are not
already invalidated for the case where the TLB invalidation
ID trick is not in use.
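Roughly speaking (the TLB organization and names shown here are
assumptions, for illustration only), INVLPG now does something like:

  unsigned i = (laddr >> 12) & (BX_TLB_SIZE - 1);           // direct-mapped index
  BX_CPU_THIS_PTR TLB.entry[i].lpf = BX_INVALID_TLB_ENTRY;  // drop just this page

rather than throwing away every entry.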
mode uses the notion of the guest-to-host TLB. This has the
benefit of allowing more uniform and streamlined acceleration
code in access.cc which does not have to check if CR0.PG
is set, eliminating a few instructions per guest access.
Shaved just a little off execution time, as expected.
Also, access_linear now breaks accesses which span two pages
into two calls to the physical memory routines when paging
is off, just like it always has for paging on. Besides
being more uniform, this allows the physical memory access
routines to know that the complete data item is contained
within a single physical page, and stop reapplying the
A20ADDR() macro to pointers as it increments them.
Perhaps things can be optimized a little more now there too...
I renamed the routines to {read,write}PhysicalPage() as
a reminder that these routines now operate on data
solely within one page.
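In sketch form (everything here other than the readPhysicalPage() name
is an assumption), a read that crosses a page boundary is now handled as:

  Bit32u pageOffset = addr & 0xfff;
  if (pageOffset + len <= 4096) {
    readPhysicalPage(A20ADDR(addr), len, data);             // fits in one page: A20 applied once
  }
  else {
    unsigned len1 = 4096 - pageOffset;                       // split at the page boundary
    readPhysicalPage(A20ADDR(addr),        len1,       data);
    readPhysicalPage(A20ADDR(addr + len1), len - len1, (Bit8u *) data + len1);
  }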
I also added a little code so that the paging module is
notified when the A20 line is tweaked, so it can dump
whatever mappings it wants to.
so that a compare of the current access could be done more
efficiently against the cached values, both in the normal
paging routines, and in the accelerated code in access.cc.
This cut down the amount of code path needed to get to
direct use of a host address nicely, and speed definitely
got a boost as a result, especially if you use the
--enable-guest2host-tlb option.
The CR0.WP flag was a real pain, because it complicates
the way protections work. Fortunately
it's not a high-change flag, so I just base the new
cached info on the current CR0.WP value, and dump
the TLB cache when it changes.
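In other words (sketch only; field names are assumptions), the CR0 write
path does something like:

  bx_bool new_wp = (val >> 16) & 1;          // CR0.WP is bit 16
  if (new_wp != BX_CPU_THIS_PTR cr0.wp) {
    BX_CPU_THIS_PTR cr0.wp = new_wp;
    TLB_flush();   // the cached protection info was derived from the old WP value
  }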
- Paging code rehash. You must now use --enable-4meg-pages to
use 4Meg pages, with the default of disabled, since we don't
support 4Meg pages well yet. Paging table walks model a real CPU
more closely now, and I fixed some bugs in the old logic.
- Segment check redundancy elimination. After a segment is loaded,
reads and writes are marked when a segment type check succeeds, and
they are skipped thereafter, when possible.
- Repeated IO and memory string copy acceleration. Only some variants
  of the instructions are available on all platforms; word and dword
  variants are x86-only for the moment due to alignment and endian issues.
This is compiled in currently with no option - I should add a configure
option.
- Added a guest linear address to host TLB. Actually, I just stick
the host address (mem.vector[addr] address) in the upper 29 bits
of the field 'combined_access' since they are unused. Convenient
for now. I'm only storing page frame addresses. This was the
simplest form of such a TLB (a small sketch follows this list).
We can likely enhance this. Also,
I only accelerated the normal read/write routines in access.cc.
Could also modify the read-modify-write versions too. You must
use --enable-guest2host-tlb, to try this out. Currently speeds
up Win95 boot time by about 3.5% for me. More ground to cover...
- Minor mods to CPUID/MOV_CdRd for CMOV.
- Integrated enhancements from Volker to getHostMemAddr() for PCI
being enabled.
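Regarding the guest-to-host TLB above, a hedged illustration of the
packing (the 29/3 split comes from the description; the names are made
up): the host page address is 4K-aligned, so its low bits are free to
hold the protection bits.

  // fill: stash the host page address alongside the low protection bits
  tlbEntry->combined_access = (hostPageAddr & ~0x7) | protectionBits;

  // hit: recover the host pointer for a direct read/write
  Bit8u *hostPtr = (Bit8u *)(tlbEntry->combined_access & ~0x7) + pageOffset;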
for BX_CPU_LEVEL >= 6, and to have the CMOV instructions generate
an undefined opcode exception after printing info that they were
called, if BX_CPU_LEVEL <= 5. I suppose we could have a separate
configure option, but mirroring Intel, CMOV is available as of
Pentium Pro.
For now, you have to compile with --enable-cpu-level=6 for CMOV
support to be compiled in.
tries to fix it. The shortcuts to register names such as AX and DL are
#defines in cpu/cpu.h, and they are defined in terms of BX_CPU_THIS_PTR.
When BX_USE_CPU_SMF=1, this works fine. (This is what bochs used for
a long time, and nobody used the SMF=0 mode at all.) To make SMP bochs
work, I had to get SMF=0 mode working for the CPU so that there could
be an array of cpus.
When SMF=0 for the CPU, BX_CPU_THIS_PTR is defined to be "this->" which
only works within methods of BX_CPU_C. Code outside of BX_CPU_C must
reference BX_CPU(num) instead.
- to try to enforce the correct use of AL/AX/DL/etc. shortcuts, they are
now only #defined when "NEED_CPU_REG_SHORTCUTS" is #defined. This is
only done in the cpu/*.cc code.
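A hedged sketch of the arrangement (the macro bodies here are
illustrative, not copied from cpu.h):

  #if BX_USE_CPU_SMF
  #  define BX_CPU_THIS_PTR  BX_CPU(0)->   /* static member functions, one CPU */
  #else
  #  define BX_CPU_THIS_PTR  this->        /* real methods; valid only inside BX_CPU_C */
  #endif

  #ifdef NEED_CPU_REG_SHORTCUTS
  #  define AL (BX_CPU_THIS_PTR gen_reg[0].word.byte.rl)   /* field names assumed */
  #  define AX (BX_CPU_THIS_PTR gen_reg[0].word.rx)
  #endif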
in BRANCH-smp-bochs revisions.
- The general task was to make multiple CPU's which communicate
through their APICs. So instead of BX_CPU and BX_MEM, we now have
BX_CPU(x) and BX_MEM(y). For an SMP simulation you have several
processors in a shared memory space, so there might be processors
BX_CPU(0..3) but only one memory space BX_MEM(0). For cosimulation,
you could have BX_CPU(0) with BX_MEM(0), then BX_CPU(1) with
BX_MEM(1). WARNING: Cosimulation is almost certainly broken by the
SMP changes.
- to simulate multiple CPUs, you have to give each CPU time to execute
in turn. This is currently implemented using debugger guards. The
cpu loop steps one CPU for a few instructions, then steps the
next CPU for a few instructions, etc. (see the sketch after this list).
- there is some limited support in the debugger for two CPUs, for
example printing information from each CPU when single stepping.
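The round-robin stepping mentioned above looks roughly like this (the
quantum and the cpu_loop() signature are assumptions):

  for (;;) {
    for (unsigned i = 0; i < BX_SMP_PROCESSORS; i++) {
      BX_CPU(i)->cpu_loop(quantum);   // run a few instructions on CPU i, then move on
    }
  }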
To see the commit logs for this use either cvsweb or
cvs update -r BRANCH-io-cleanup and then 'cvs log' the various files.
In general this provides a generic interface for logging.
logfunctions:: is a class that is inherited by some classes, and also
. allocated as a standalone global called 'genlog'. All logging uses
. one of the ::info(), ::error(), ::ldebug(), ::panic() methods of this
. class through 'BX_INFO(), BX_ERROR(), BX_DEBUG(), BX_PANIC()' macros
. respectively.
.
. An example usage:
. BX_INFO(("Hello, World!\n"));
iofunctions:: is a class that is allocated once by default, and assigned
as the iofunction of each logfunctions instance. It is this class that
maintains the file descriptor and other output related code, at this
point using vfprintf(). At some future point, someone may choose to
write a gui 'console' for bochs to which messages would be redirected
simply by assigning a different iofunction class to the various logfunctions
objects.
More cleanup is coming, but this works for now. If you want to see a lot
of debugging output, in main.cc, change onoff[LOGLEV_DEBUG]=0 to =1.
Comments, bugs, flames, to me: todd@fries.net