Starting convergence to the new lazy flags scheme by Darek Mihocka (www.emulators.com). The new flags code is still being validated and perfected, but I am trying to minimize the diff between the two versions.
Everything that requires the cpu.h include now includes it explicitly, and there are a lot of files that do not depend on the CPU at all, which will compile a lot faster now ...
Split REPEAT instructions according to operand size to speed up execution.
Each repeatable instruction is now split into three different instructions: one for 16-bit operand size, one for 32-bit and one for 64-bit. The correct instruction is chosen during the fetch-decode step.
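A minimal sketch of the dispatch idea (handler and function names are hypothetical, not the actual Bochs decoder tables): the operand size selects one of three pre-specialized handlers once, at decode time, instead of being re-tested on every iteration of the REP loop.

  #include <cstdio>

  // Three pre-specialized handlers, one per operand size (hypothetical names).
  static void REP_MOVS_op16() { std::puts("16-bit REP MOVS body"); }
  static void REP_MOVS_op32() { std::puts("32-bit REP MOVS body"); }
  static void REP_MOVS_op64() { std::puts("64-bit REP MOVS body"); }

  typedef void (*ExecutePtr)();

  // Decode-time selection: the operand size is looked at once, here,
  // and the chosen handler is stored with the decoded instruction.
  static ExecutePtr decode_rep_movs(unsigned operand_size_bits)
  {
    switch (operand_size_bits) {
      case 16: return REP_MOVS_op16;
      case 64: return REP_MOVS_op64;
      default: return REP_MOVS_op32;
    }
  }

  int main()
  {
    ExecutePtr execute = decode_rep_movs(32);  // fetch-decode step
    execute();                                 // later executions just call it
    return 0;
  }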
2. Fixed bug
[ 989478 ] I-Cache and undefined Instructions
The L4 microkernel uses an undefined instruction to
trap for special requests into the kernel (LOCK NOP).
The handler fixes this up and gives the user a special
code page with syscall stubs. If you're not using the
I-Cache optimization, everything works fine on Bochs. But
if you enable the I-Cache (--enable-icache), the
undefined opcode exception is thrown only once for every
virtual address at which it occurs. See the demo disk of
L4KA::pistachio
(http://www.l4ka.org/projects/pistachio/download.php).
In this case the pingpong benchmark of this demo is of
interest. Everything runs fine until the program tries
to spawn a new task for its measurements. This new task
shares the code of the creating program. But the new
task stops executing at the undefined instruction
explained above and no exception is thrown.
- it works only on x86 with GCC 2.95+
- uses the GCC function attribute "regparm(n)" to declare that certain
functions use the register calling convention (see the sketch after this list)
- performance improvement is about 6%
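For illustration, a minimal example of the attribute in use (the add3 function is made up, not from Bochs); with regparm(3), GCC passes the first three integer arguments in EAX, EDX and ECX instead of on the stack, which is where the speedup comes from. GCC on 32-bit x86 only.

  #include <cstdio>

  // regparm(3): pass the first three integer args in registers (GCC, x86 only).
  static int __attribute__((regparm(3))) add3(int a, int b, int c)
  {
    return a + b + c;
  }

  int main()
  {
    std::printf("%d\n", add3(1, 2, 3));  // prints 6
    return 0;
  }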
1) fixed the type of "hostPageAddr" and associated typecasts.
2) fixed the type of "pages" and associated typecasts (overloaded variable)
3) patch to cpu.cc to calculate "eipPageBias" correctly in 64-bit mode
* renamed CPU_ID to BX_CPU_ID.
with the new name there is no possibility of name conflicts, and the BX_CPU_ID
definition could be moved out to the NEED_CPU_REG_SHORTCUTS block
* restored the `unsigned BX_CPU::which_cpu(void)` function
* added a BX_CPU_ID parameter to
BX_INSTR_PHY_READ(a20addr, len);
BX_INSTR_PHY_WRITE(a20addr, len);
now they are
BX_INSTR_PHY_READ(cpu_id, a20addr, len);
BX_INSTR_PHY_WRITE(cpu_id, a20addr, len);
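To illustrate what the extra argument enables (the hook bodies below are invented; only the (cpu_id, a20addr, len) parameter order comes from the change above), an instrumentation module can now attribute physical accesses to the CPU that made them, e.g. with per-CPU counters:

  #include <cstdio>

  static const unsigned MAX_CPUS = 8;          // illustrative limit
  static unsigned long phy_reads[MAX_CPUS];
  static unsigned long phy_writes[MAX_CPUS];

  // Hypothetical instrumentation hooks; only the parameter order is real.
  void instr_phy_read(unsigned cpu_id, unsigned long a20addr, unsigned len)
  {
    phy_reads[cpu_id]++;                       // per-CPU counter
    (void)a20addr; (void)len;
  }

  void instr_phy_write(unsigned cpu_id, unsigned long a20addr, unsigned len)
  {
    phy_writes[cpu_id]++;
    (void)a20addr; (void)len;
  }

  int main()
  {
    instr_phy_read(0, 0x1000, 4);
    instr_phy_write(1, 0x2000, 2);
    std::printf("cpu0 reads=%lu cpu1 writes=%lu\n", phy_reads[0], phy_writes[1]);
    return 0;
  }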
SSE/SSE2 for Stanislav. Also, some method prototypes and
skeletal functions in access.cc for the read/write double quadword
features.
Also cleaned up one warning in protect_ctrl.cc for non-64-bit compiles:
a variable there is only used in the 64-bit code, so it was unused otherwise.
Some things changed in the ctrl_xfer*.cc, fetchdecode*.cc,
and cpu.cc since the original patches, so I did some patch
integration by hand. Check the placement of the
macros BX_INSTR_FETCH_DECODE_COMPLETED() and BX_INSTR_OPCODE()
in cpu.cc to make sure I got them right. Also, I changed the
parameters of BX_INSTR_OPCODE() to update them to the new code.
I put some comments before each of these to help determine if
the placement is right.
These macros are only compiled in if you are gathering instrumentation
data from bochs, so they shouldn't affect others.
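For readers not familiar with the instrumentation hooks, the mechanism is roughly the following (the parameter lists here are placeholders, not the real ones): when instrumentation is not configured in, the macros expand to nothing, so they cost nothing for everyone else.

  // Illustrative only; parameter lists are placeholders.
  #if BX_INSTRUMENTATION
    #define BX_INSTR_OPCODE(cpu, op, len)            bx_instr_opcode(cpu, op, len)
    #define BX_INSTR_FETCH_DECODE_COMPLETED(cpu, i)  bx_instr_fetch_decode_completed(cpu, i)
  #else
    #define BX_INSTR_OPCODE(cpu, op, len)            /* expands to nothing */
    #define BX_INSTR_FETCH_DECODE_COMPLETED(cpu, i)  /* expands to nothing */
  #endif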
Fixed/updated/cleaned guest2host TLB speedups for Long mode.
I now can boot the Linux x86-64 kernel to the VFS mount message,
using all the accelerations.
The icache pageStamp check was being done too early, before it was known
that the TLB entry would produce a physical address in
range of the normal part of physical memory. PCI accesses
were causing seg faults because of this. I haven't tested
this for PCI.
so frequently.
Coded asm() statements for INC/DEC_ERX() instructions.
Cleaned up the iCache a little, including a bug fix. The
generation ID decrement was being applied to the whole field, including
some high meta bits, which could roll over after about a billion
cycles. I now only decrement if the field is valid, to
save the write.
I implemented inline functions which return the value of
the arithmetic flags if they are cached, and redirect to
the lazy_flags.cc routines if not.
Most of this was just prep work for adding more asm() statements
for native eflags processing when on x86.
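A minimal sketch of the cached-versus-lazy idea (the struct and field names are invented for illustration; the real state lives in cpu.h and lazy_flags.cc): if the flag was already materialized it is returned directly, otherwise it is recomputed from the saved operands of the last arithmetic instruction and then cached.

  #include <cstdio>
  #include <cstdint>

  // Illustrative lazy-flags state (not the actual Bochs fields).
  struct LazyFlags {
    uint32_t op1, op2, result;   // operands/result of the last ADD
    bool     cf_valid;           // has CF been materialized yet?
    bool     cf;                 // the materialized value
  };

  static inline bool getCF(LazyFlags &lf)
  {
    if (!lf.cf_valid) {
      lf.cf = lf.result < lf.op1;  // carry out of an unsigned ADD
      lf.cf_valid = true;          // cache it for later readers
    }
    return lf.cf;
  }

  int main()
  {
    LazyFlags lf = { 0xffffffffu, 1, 0, false, false };  // 0xffffffff + 1 -> 0
    std::printf("CF=%d\n", getCF(lf));                   // prints CF=1
    return 0;
  }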
but if you hand edit cpu/cpu.h, and change BxICacheEntries,
you can try different sizes. I'll make this more flexible
with configure. For now, use "--enable-icache" with no parameters.
- Modified fetchdecode.cc/fetchdecode64.cc just enough so that
instructions which encode a direct address now use a memory
resolution function which just sticks the immediate address
into rm_addr. With cached instructions we need this.
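A hypothetical sketch of what such a resolution function boils down to (the record and function names are illustrative, not the real fetchdecode tables): the immediate displacement is copied straight into rm_addr, so direct-address instructions take the same resolve-then-execute path as every other memory operand.

  #include <cstdio>

  // Illustrative decoded-instruction record (not the real bxInstruction_c).
  struct DecodedInsn {
    unsigned displacement;   // direct address encoded in the instruction
    unsigned rm_addr;        // effective address consumed by the executor
  };

  // Memory resolution function for the direct-address form: no base or
  // index register, so the effective address is just the displacement.
  static void ResolveDirectAddr(DecodedInsn *i)
  {
    i->rm_addr = i->displacement;
  }

  int main()
  {
    DecodedInsn i = { 0x1234, 0 };   // e.g. MOV AX, [0x1234]
    ResolveDirectAddr(&i);
    std::printf("rm_addr = 0x%x\n", i.rm_addr);
    return 0;
  }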
Expanded the TLB entries to 4 dwords each and (when compiling
with GCC) aligned them with the GCC special alignment attribute.
Since that left one field available, I split the protection
attributes and native host pointers into their own fields.
Before, with 3 dwords per TLB entry, some entries (about 3/8)
were spanning two processor cache lines (assuming a 32-byte
cache line). Now, they all fit within one cache line.
Knocked about 1.4% off Win95 boot time, probably more off normal
software runs.
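A sketch of the resulting layout under those assumptions (field names are illustrative): four dwords per entry plus a 16-byte alignment attribute make each entry 16 bytes, so no entry straddles a 32-byte cache line.

  #include <cstdio>

  // Illustrative 4-dword TLB entry, aligned so it never spans a cache line.
  struct TLBEntry {
    unsigned lpf;            // linear page frame
    unsigned ppf;            // physical page frame
    unsigned accessBits;     // cached protection attributes (own field now)
    unsigned hostPageAddr;   // native host pointer (own field now)
  } __attribute__((aligned(16)));

  int main()
  {
    std::printf("sizeof(TLBEntry) = %u\n", (unsigned)sizeof(TLBEntry));  // 16
    return 0;
  }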
Read-Modify-Write instructions: the first read phase stores
the host pointer in the "pages" field if a direct-use pointer
is available. The write phase first checks whether a pointer was
stored and uses it for a direct write if available.
I chose the "pages" field since it needs to be checked by the
write_RMW_virtual variants anyway and thus needs to be
cached anyway.
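A hedged sketch of that read/write pairing (everything below is simplified stand-in code, not the access.cc implementation): the read phase remembers the direct host pointer when one exists, and the write phase reuses it instead of re-translating.

  #include <cstdint>
  #include <cstdio>

  static uint8_t guest_page[4096];       // stand-in for a host-mapped guest page
  static uint8_t *rmw_host_ptr = 0;      // stand-in for the cached "pages" field

  // Read phase of a read-modify-write access: remember the host pointer.
  static uint32_t read_RMW_dword(unsigned offset)
  {
    rmw_host_ptr = guest_page + offset;
    return *reinterpret_cast<uint32_t *>(rmw_host_ptr);
  }

  // Write phase: if a pointer was stored, write in place and skip translation.
  static void write_RMW_dword(uint32_t val)
  {
    if (rmw_host_ptr) {
      *reinterpret_cast<uint32_t *>(rmw_host_ptr) = val;
      rmw_host_ptr = 0;
      return;
    }
    // ...otherwise fall back to the normal virtual-write path (not shown).
  }

  int main()
  {
    uint32_t v = read_RMW_dword(0x10);   // e.g. ADD [mem], reg: read...
    write_RMW_dword(v + 5);              // ...modify, then write back directly
    std::printf("%u\n", *reinterpret_cast<uint32_t *>(guest_page + 0x10));
    return 0;
  }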
Mostly the mods were to access.cc, but I did also macro-ize
the calls to write_RMW_virtual...() in files which use it
and cpu.h. Right now, the macro is just a straight pass-through.
I tried expanding it to a quick initial check for the pointer
availability to do the write in-place, with a function call
as a fall-back. That didn't seem to matter at all.
Booting is not helped by this really. The upper bound of
the gain is 5 or 6%, and that's only if you have a loop that
looks like:
label:
add [eax], ebx ;; mega read-modify-write instruction
jmp label ;; intensive loop.
Kevin Lawton says he doesn't get a performance benefit.
I'm not sure if I do. Either way, the difference isn't
very large.
This code may get removed if it turns out to be useless.
Added macros for direct reads/writes from native variables to the x86 (guest)
memory image. Look at the end of bochs.h. I don't know if that's
the right place to put them, but here you can extend these
macros to platform-specific asm() code if you like, or just
use the generic C code I supplied. Some platforms have special
instructions for byte-order swapping etc. Also, you can't
make any assumptions about the alignment of the pointers
passed.
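The generic C fallback is roughly of this shape (the macro names are illustrative, not the ones in bochs.h): byte-at-a-time accesses keep it correct for unaligned pointers and big-endian hosts, and a platform port could swap in asm() with byte-swap instructions instead.

  #include <cstdint>
  #include <cstdio>

  // Illustrative generic-C versions: guest memory is little-endian and the
  // host pointer may be unaligned, so the value is assembled byte by byte.
  #define GUEST_READ_DWORD(hostPtr, nativeVar) do {                      \
    const uint8_t *p_ = (const uint8_t *)(hostPtr);                      \
    (nativeVar) = (uint32_t)p_[0]         | ((uint32_t)p_[1] << 8) |     \
                  ((uint32_t)p_[2] << 16) | ((uint32_t)p_[3] << 24);     \
  } while (0)

  #define GUEST_WRITE_DWORD(hostPtr, nativeVar) do {                     \
    uint8_t *p_ = (uint8_t *)(hostPtr);                                  \
    p_[0] = (uint8_t)(nativeVar);         p_[1] = (uint8_t)((nativeVar) >> 8);  \
    p_[2] = (uint8_t)((nativeVar) >> 16); p_[3] = (uint8_t)((nativeVar) >> 24); \
  } while (0)

  int main()
  {
    uint8_t guest_mem[7];                 // deliberately unaligned target
    uint32_t v = 0x11223344;
    GUEST_WRITE_DWORD(guest_mem + 1, v);  // little-endian in guest memory
    uint32_t back = 0;
    GUEST_READ_DWORD(guest_mem + 1, back);
    std::printf("0x%08x\n", back);        // 0x11223344 on any host
    return 0;
  }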
Non-paged mode now also uses the notion of the guest-to-host TLB. This has the
benefit of allowing more uniform and streamlined acceleration
code in access.cc which does not have to check if CR0.PG
is set, eliminating a few instructions per guest access.
Shaved just a little off execution time, as expected.
Also, access_linear now breaks accesses which span two pages
into two calls to the physical memory routines when paging
is off, just like it always has when paging is on. Besides
being more uniform, this allows the physical memory access
routines to know that the complete data item is contained
within a single physical page, and to stop reapplying the
A20ADDR() macro to addresses as they are incremented.
Perhaps things can be optimized a little more now there too...
I renamed the routines to {read,write}PhysicalPage() as
a reminder that these routines now operate on data
solely within one page.
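A simplified sketch of that split (the translate() stub and memory array are placeholders standing in for the paging unit and mem.vector): an access that crosses a 4K boundary is cut into two pieces, so each physical-page routine only ever sees data within one page.

  #include <cstring>
  #include <cstdio>

  static unsigned char phys_mem[0x3000];                   // toy physical memory

  // Placeholder stubs so the sketch is self-contained.
  static unsigned long translate(unsigned long laddr) { return laddr; }
  static void readPhysicalPage(unsigned long paddr, unsigned len, void *data)
  {
    std::memcpy(data, phys_mem + paddr, len);              // stays in one page
  }

  // Linear read that never hands a page-spanning access to the page routine.
  static void access_linear_read(unsigned long laddr, unsigned len, void *data)
  {
    unsigned long page_off = laddr & 0xfff;
    if (page_off + len <= 0x1000) {
      readPhysicalPage(translate(laddr), len, data);
    } else {
      unsigned len1 = (unsigned)(0x1000 - page_off);       // piece in first page
      readPhysicalPage(translate(laddr), len1, data);
      readPhysicalPage(translate(laddr + len1), len - len1,
                       (unsigned char *)data + len1);      // piece in second page
    }
  }

  int main()
  {
    phys_mem[0xfff] = 0xaa; phys_mem[0x1000] = 0xbb;
    unsigned char buf[2];
    access_linear_read(0xfff, 2, buf);                     // spans two pages
    std::printf("%02x %02x\n", buf[0], buf[1]);            // aa bb
    return 0;
  }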
I also added a little code so that the paging module is
notified when the A20 line is tweaked, so it can dump
whatever mappings it wants to.
I have not tested these functions, but they model the format and
acceleration principles of the byte/word/dword functions. Give them
a try on both little/big endian machines.
Protection info is now cached per TLB entry so that a compare of the
current access can be done more efficiently against the cached values,
both in the normal paging routines and in the accelerated code in access.cc.
This nicely cut down the code path needed to get to
direct use of a host address, and speed definitely
got a boost as a result, especially if you use the
--enable-guest2host-tlb option.
The CR0.WP flag was a real pain, because it adds
a complication to the way protections work. Fortunately
it's not a high-change flag, so I just base the new
cached info on the current CR0.WP value, and dump
the TLB cache when it changes.
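A small sketch of that approach (the entry layout is invented for illustration): the cached access bits are computed with the current CR0.WP already folded in, so the fast path stays a single compare, and toggling WP just invalidates the whole TLB instead of recomputing every entry.

  #include <cstdio>

  static const unsigned TLB_SIZE = 1024;

  struct TLBEntry {
    unsigned long lpf;        // linear page frame, ~0 means invalid
    unsigned accessBits;      // protections with CR0.WP already folded in
  };

  static TLBEntry tlb[TLB_SIZE];
  static bool cr0_wp = false;

  static void tlb_flush()
  {
    for (unsigned i = 0; i < TLB_SIZE; i++) tlb[i].lpf = (unsigned long)-1;
  }

  // CR0.WP changes rarely, so dumping the cache is cheaper than keeping
  // WP-dependent variants of every cached entry.
  static void set_CR0_WP(bool wp)
  {
    if (wp != cr0_wp) {
      cr0_wp = wp;
      tlb_flush();            // cached accessBits are stale now
    }
  }

  int main()
  {
    tlb_flush();
    set_CR0_WP(true);
    std::printf("WP=%d\n", (int)cr0_wp);
    return 0;
  }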
Extended the speedups to the remaining access routines in access.cc,
completing the upgrade of those routines. For now you need
'--enable-guest2host-tlb' to get the speedups. The guest2host mods seem pretty
solid, though I do need to see what effects the A20 line has
on this cache and the paging TLB in general.
- Paging code rehash. You must now use --enable-4meg-pages to
use 4Meg pages, with the default being disabled, since we don't yet
support 4Meg pages well. Page table walks model a real CPU
more closely now, and I fixed some bugs in the old logic.
- Segment check redundancy elimination. After a segment is loaded,
reads and writes are marked when a segment type check succeeds, and
they are skipped thereafter, when possible.
- Repeated IO and memory string copy acceleration. Only some instruction
variants are available on all platforms; the word and dword variants are
x86-only for the moment due to alignment and endian issues.
This is currently compiled in with no option - I should add a configure
option.
- Added a guest-linear-address-to-host-address TLB. Actually, I just stick
the host address (the mem.vector[addr] address) in the upper 29 bits
of the field 'combined_access' since they are unused; see the packing
sketch after this list. Convenient for now. I'm only storing page frame
addresses. This was the simplest form of such a TLB. We can likely
enhance this. Also, I only accelerated the normal read/write routines
in access.cc; the read-modify-write versions could be modified too.
You must use --enable-guest2host-tlb to try this out. Currently it
speeds up Win95 boot time by about 3.5% for me. More ground to cover...
- Minor mods to CPUID/MOV_CdRd for CMOV.
- Integrated enhancements from Volker to getHostMemAddr() for the case
where PCI is enabled.
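As referenced in the guest-to-host TLB item above, a packing sketch (assuming a 32-bit host so the pointer fits the field; the bit split is illustrative): since the stored host address is a page-frame address, its low bits are zero, so one 32-bit 'combined_access'-style field can hold both the protection bits and the host page address.

  #include <cstdint>
  #include <cstdio>

  // Illustrative packing: low 3 bits = access/protection bits, the rest holds
  // the page-aligned host address (low 12 bits of a page frame are zero anyway).
  static inline uint32_t pack_combined(uint32_t access_bits, uint32_t host_pfa)
  {
    return host_pfa | (access_bits & 0x7);
  }

  static inline uint32_t host_pf_of(uint32_t combined)
  {
    return combined & ~(uint32_t)0xfff;    // recover the page-frame address
  }

  int main()
  {
    uint32_t combined = pack_combined(0x5, 0x00345000);  // hypothetical values
    std::printf("host pf = 0x%08x, bits = 0x%x\n",
                host_pf_of(combined), combined & 0x7);
    return 0;
  }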