instead of winmm being a part of GUI_LINK_OPTS_WIN32 only, it is
placed in @DEVICE_LINE_OPTS@ so that it will be used for sdl, rfb, wx,
etc.
- solve compile problems when building bximage, niclist, and any other
console based program. The compile flags returned by wx-config and
sdl-config did strange things to these console programs, for example
redefining main to SDL_main. Because I wanted to use the
configure-generated CFLAGS to compile these programs while
avoiding the GUI-specific compile options, I split up the configure's
@CFLAGS@ variable into @CFLAGS@ and @GUI_CFLAGS@, and split
@CXXFLAGS@ into @CXXFLAGS@ and @GUI_CXXFLAGS@. Everything in the
Bochs binary will use both, but the console programs will use just
@CFLAGS@ or @CXXFLAGS@.
- gui/Makefile.in: I no longer use the GUI-specific CFLAGS variables,
SDL_CFLAGS and WX_CXXFLAGS. These values are now included in CFLAGS and
CXXFLAGS.
- modified: configure.in, configure, all Makefile.in's
restart another one in wxWindows. Fixed that. Also, on restart, the
apic id's left over from the first run were causing panics. Fixed that.
- modified: main.cc cpu/apic.cc cpu/cpu.h cpu/init.cc
up pc_system.h. Moved all variables under the private: section,
as well as a few member functions. The string instructions
were accessing a field directly (only reads), so I indirected
that via an inline member function for better abstraction.
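As a rough illustration of that pattern (the field and accessor names below
are made up, not the actual pc_system.h members), the change has this shape:

  typedef unsigned int Bit32u;   // stand-in for Bochs' Bit32u

  class bx_pc_system_sketch_c {
  private:
    Bit32u num_ticks_remaining;  // formerly read directly by the string instructions
  public:
    bx_pc_system_sketch_c() : num_ticks_remaining(0) {}
    // Inline accessor: same generated code as the direct read, better abstraction.
    Bit32u getTicksRemaining() const { return num_ticks_remaining; }
  };

  int main() { bx_pc_system_sketch_c p; return (int) p.getTicksRemaining(); }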
all available optimizations in one shot.
Finished one last case of an instruction which could but didn't use
the Read-Modify-Write variants of access.cc functions.
Started going through the integer instructions, merging obvious cases
where there are two "if (modrm==11b) {" clauses and very little
action in between, and cleaning up the awful indentation left over
from many years ago when those instructions were implemented using
cut-and-paste. We may get a little extra performance out of these
mods, but after I'm finished they'll also be easier to enhance
with asm() statements to knock out the lazy flags processing on x86.
now simply return a cached value which is set upon mode changes.
The biggest problem was protected_mode() which did something like:
return CR0.PM && ! EFLAGS.VM
This added up because it was being executed many times in branch functions,
etc. Now, cached values are set and sampled instead.
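A minimal sketch of the caching idea, with hypothetical member names (the real
code hooks the existing CR0/EFLAGS update paths):

  class mode_cache_sketch_c {
    bool cached_protected_mode;     // refreshed only on mode changes
  public:
    mode_cache_sketch_c() : cached_protected_mode(false) {}
    // Called from the (rare) CR0 / EFLAGS.VM update paths.
    void handleModeChange(bool cr0_pe, bool eflags_vm) {
      cached_protected_mode = cr0_pe && !eflags_vm;
    }
    // Hot path: branch functions etc. just sample the cached value.
    bool protected_mode() const { return cached_protected_mode; }
  };

  int main() {
    mode_cache_sketch_c m;
    m.handleModeChange(true, false);      // e.g. after a mov to CR0
    return m.protected_mode() ? 0 : 1;
  }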
Used patch.disasm to do
1) clean up the disasm output to make the display of extra stuff optional.
2) included the part of the patch which displays displacements as
proper addresses.
and Jas Sandys-Lumsdaine to split out common instructions into
variants which deal with the mod=11b case (Reg-Reg) and the
other cases (which do memory ops). Actually, I only split
MOV_GwEw and MOV_GdEd for now. According to some instrumentation
of a Win95 boot, they were the most frequently used opcode by far.
Essentially, when I coded a few of the instructions to use
asm()s for acceleration of the eflags, I got lazy and only
used the asm() to compute eflags and let the normal C operation
do the actual operation. Jas's patch moved the asm()s such
that they now do the work of the operation as well.
The patches look great. The code reads a lot better as well.
Further work can be done to give the compiler more options with
register scheduling.
were simply replacements of the eflags mask constants with
the macro names already in cpu.h for asm() statements. I forgot
to use the macros for some instructions.
0x000008d5 -> EFlagsOSZAPCMask
0x000008d4 -> EFlagsOSZAPMask
Some things changed in the ctrl_xfer*.cc, fetchdecode*.cc,
and cpu.cc since the original patches, so I did some patch
integration by hand. Check the placement of the
macros BX_INSTR_FETCH_DECODE_COMPLETED() and BX_INSTR_OPCODE()
in cpu.cc to make sure I got them right. Also, I changed the
parameters to BX_INSTR_OPCODE() to update them to the new code.
I put some comments before each of these to help determine if
the placement is right.
These macros are only compiled in if you are gathering instrumentation
data from bochs, so they shouldn't affect others.
Created 64-bit versions of some branch instructions and
changed fetchdecode64.cc to use them instead. This keeps the
#ifdef pollution down for 32-bit code and made fixing them
easier. They needed to clear the upper bits of RIP for
16-bit operand sizes. They also should not have had a protection
limit check in them, especially since that field is still
32-bit in cpu.h, so there's no way to set nominal 64-bit values.
The 32-bit versions were also not honoring the upper 32-bits
of RIP.
LOOPNE64_Jb
LOOPE64_Jb
LOOP64_Jb
JCXZ64_Jb
Changed all occurrences of JCC_Jw/JCC_Jd in fetchdecode64.cc to
use JCC_Jq, which was coded already. Both JMP_Jq and JCC_Jq are
now fixed w.r.t. 16-bit opsizes and upper RIP bit clearing.
63..16 when a 16-bit operand size JMP is executed. Previous
fix cleared only 63..32. I have since realized that this is the case
which does parallel the 32-bit semantics.
fetching 64-bit address opcode info, which was incorrect.
Fixed. Got rid of BxImmediate_Oq. fetchdecode64.cc now
uses BxImmediate_O, like the fetch routine does. Addresses which
are embedded in the opcode, have a size which depends on
the current addressing size. For long-mode, this is
either 64 (default) or 32 (AddrSize over-ride). BxImmediate_O
now conditionally fetches based on AddrSize.
64-bit bug#2: In JMP_Jq(), when the current operand size is
16-bits, the upper dword of RIP was not being cleared. The
semantics with this case are weird - one would think the
top 48 bits would be cleared, but apparently only the top
32 bits are. Anyways, I fixed this.
Replaced some of the messy immediate fetching (byte-by-byte) in
fetchdecode64.cc with ReadHost{Q,D}WordFromLittleEndian() calls
for cleanliness. Should do this for all the cases, plus
the 32-bit stuff.
Since the SYSCALL replaces the LOADALL instruction, it is incompatible with
earlier CPU types.
At the moment, SYSCALL is only enabled by x86-64 emulation, but the code
can be incorporated in IA32-only emulations.
Instructions added:
0F 05 SYSCALL (replaces LOADALL)
0F 07 SYSRET (new)
TODO: restructure #if ... so that it can be used by non x86-64 emulations.
use getB_CF() etc. getB_CF() and friends are only for a relatively
small number of cases where a true boolean/binary number (0 or 1) is required
rather than 0 or non-0 as is returned by get_CF().
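A small stub sketch of the distinction, using SF/OF as the concrete pair (the
masks are the real EFLAGS bit positions, but the bodies are stand-ins for the
lazy-flags code):

  typedef unsigned int Bit32u;

  static Bit32u eflags_val32 = 0x880;   // pretend SF (0x80) and OF (0x800) are both set

  static Bit32u  get_SF(void)  { return eflags_val32 & 0x080; }   // 0 or non-0
  static Bit32u  get_OF(void)  { return eflags_val32 & 0x800; }   // 0 or non-0
  static unsigned getB_SF(void) { return get_SF() ? 1 : 0; }      // strictly 0 or 1
  static unsigned getB_OF(void) { return get_OF() ? 1 : 0; }      // strictly 0 or 1

  int main() {
    // A plain logical test only needs zero / non-zero:
    bool overflow = get_OF() != 0;
    // Comparing or combining two flags needs normalized values: with both flags
    // set, get_SF() != get_OF() would be 0x080 != 0x800, i.e. wrongly "true".
    bool less_than = getB_SF() != getB_OF();
    return (overflow && !less_than) ? 0 : 1;
  }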
loadSRegLMNominal() which should be used to load a segment register
in long-mode with nominal values which are compatible with existing
checks and expectations for descriptor cache values.
Fixed 64-bit iret to not do a descriptor fetch if SS selector is null.
Also load SS with loadSRegLMNominal() in the same case.
was not correct (used == 0, rather than s&0xffc == 0). Also,
with a null SS selector, it was fetching the descriptor anyways.
Put more code inside the if (selector != NULL) clause.
For a temporary measure I added the local INIT_64_DESCRIPTOR
from segment_ctrl_pro.cc, and used it in the case that the
SS selector is null. We need to make a real function which
sets a descriptor in long-mode to nominal values. I'm going
to do that next... I can't stand seeing the current hacks. :^)
Fixed/updated/cleaned guest2host TLB speedups for Long mode.
I now can boot the Linux x86-64 kernel to the VFS mount message,
using all the accelerations.
these from interfering with a normal compile, here's what I did.
In config.h.in (which will generate config.h after a configure),
I added a #define called KPL64Hacks:
#define KPL64Hacks
*After* running configure, you must set this by hand. It will
default to off, so you won't get my hacks in a normal compile.
This will go away soon. There is also a macro just after that
called BailBigRSP(). You don't need to enable that, but you
can. In many of the instructions which seemed like they could
be hit by the fetchdecode64() process, but which also touched
EIP/ESP, I inserted a macro. Usually this macro expands to nothing.
If you like, you can enable it, and it will panic if it finds
the upper bits of RIP/RSP set. This helped me find bugs.
Also, I cleaned up the emulation in ctrl_xfer{8,16,32}.cc.
There were some really old legacy code snippets which directly
accessed operands on the stack with access_linear. Lots of
ugly code instead of just pop_32() etc. Cleaning those up,
minimized the number of instructions which directly manipulate
the stack pointer, which should help in refining 64-bit support.
user can turn on/off use of native host specific inline asm
statements. By default, this option is enabled, so you only
need it to disable inline asms in your compile for now.
Currently, inline asm() statements are only used in x86+GCC
environments. Eventually, other platforms could specify
some asm()s; probably for endian issues such as byte-swapping
and unaligned memory accesses. On x86, there are some inline
asm()s which do the arithmetic EFLAGS processing so that the
lazy flags handling is somewhat bypassed. Eventually, I'll
add more, at least for the more common instructions. This
adds a little extra performance.
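As a rough illustration of the idea (not the actual Bochs macros), an x86/GCC
host can let the ALU produce both the result and the genuine EFLAGS in one
step, with plain C as the fallback elsewhere:

  #include <stdio.h>
  typedef unsigned int Bit32u;

  // Sketch only: on an x86-32 GCC host the add and the flag capture happen
  // natively; other hosts fall back to C (where the lazy-flags code would run).
  static Bit32u add32_and_capture_eflags(Bit32u a, Bit32u b, Bit32u *eflags)
  {
  #if defined(__GNUC__) && defined(__i386__)
    Bit32u sum, fl;
    __asm__ ("addl %3, %1\n\t"     // host add produces result and flags
             "pushfl\n\t"
             "popl %0"             // capture the native EFLAGS image
             : "=r" (fl), "=r" (sum)
             : "1" (a), "r" (b)
             : "cc");
    *eflags = fl;
    return sum;
  #else
    *eflags = 0;                   // placeholder; lazy flags cover this case
    return a + b;
  #endif
  }

  int main() {
    Bit32u fl, s = add32_and_capture_eflags(0xffffffffu, 1, &fl);
    printf("sum=%u eflags=0x%08x\n", s, fl);
    return 0;
  }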
- return model=2 so that Linux recognizes the processor as having an APIC.
We don't really know what Hammer returns.
- in SetCR4, allow bits 9 and 10 to be written
the icache pageStamp check too early, before it was known
that the TLB entry would produce a physical address in
range of the normal part of physical memory. PCI accesses
were causing seg faults because of this. I haven't tested
this for PCI.
which says to paste getB_ with flag and then paste with (. It should
be "getB_##flag(void)". Some preprocessors are complaining about pasting
the symbol with the paren.
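For illustration, the fixed form pastes only onto the identifier; the macro
shape below is a guess at the pattern, with a stub body, not the exact cpu.h
text:

  // Pasting getB_ onto the flag name forms a valid identifier; pasting onto '('
  // does not form a valid preprocessing token, which strict preprocessors reject.
  #define DECLARE_EFLAG_B_ACCESSOR(flag) \
    unsigned getB_##flag(void) { return 0; }   /* stub body, for the sketch only */

  DECLARE_EFLAG_B_ACCESSOR(CF)   // expands to: unsigned getB_CF(void) { ... }
  DECLARE_EFLAG_B_ACCESSOR(ZF)

  int main() { return getB_CF() + getB_ZF(); }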
of (1 & (val32>>N)), and added a getB_?F() accessor for special
cases which need a strict binary value (exactly 0 or 1). Most
code only needed a value for logical comparison. I modified the
special cases which do need a binary number for shifting and
comparison between flags, to use the special getB_?F() accessor.
Cleaned up memory.cc functions a little, now that all accesses
are within a single page.
Fixed a (not very likely encountered) bug in fetchdecode.cc (and
fetchdecode64.cc) where a 2-byte opcode starting with a prefix
starts at the last offset on a page. There were no checks
on the segment overrides for a boundary condition. I added them.
The eflags enhancements added just a tiny bit of performance.
- don't allow MMX on cpu level < 5.
- require FPU support on cpu level >= 5
- don't allow MMX support without FPU support (moved this check from
cpu/i387.h to config.h)
so frequently.
Coded asm() statements for INC/DEC_ERX() instructions.
Cleaned up the iCache a little, including a bug fix. The
generation ID was decrementing the whole field, including
some high meta bits. That could roll over after 1 billion
cycles. I now only decrement if the field is valid, to
save the write.
I implemented inline functions which can serve the value of
the arithmetic flags if they are cached, and redirect to
the lazy_flags.cc routines if not.
Most of this was just prep work for adding more asm() statements
for native eflags processing when on x86.
also extended by the REX.B field on Hammer) is passed to instructions.
I rearranged the bxInstruction_c to free up a field to be used
to pass this info when mod-rm bytes are not used. This got rid
of the ugly ((i->b1 & 7) + i->rex_b) code.
Probably shaved just a very little run time off Hammer emulation,
and even less on x86-32. The resulting code is a little cleaner anyways.
in cpu.cc out of the main loop, and into the asynchronous
events handling. I went through all the code paths, and
there doesn't seem to be any reason for that code to be
in the hot loop.
Added another accessor for getting instruction data, called
modC0(). A lot of instructions test whether the mod field
of mod-nnn-rm is 0xc0 or not, i.e., it's a register operation
and not memory. So I flag this in fetchdecode{,64}.cc.
This added on the order of 1% performance improvement for
a Win95 boot.
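A sketch of how that reads in a handler; bxInstruction_c's real layout differs,
and these names are just illustrative:

  struct bxInstructionSketch {
    unsigned metaInfo;                                    // hypothetical packed decode bits
    void setModC0(bool regform) { metaInfo = regform ? 1u : 0u; }
    bool modC0() const { return (metaInfo & 1u) != 0; }   // register form?
  };

  static void someInsnHandler(const bxInstructionSketch *i)
  {
    if (i->modC0()) { /* operate directly on the register operand */ }
    else            { /* resolve the effective address and go through memory */ }
  }

  int main() {
    bxInstructionSketch i;
    i.setModC0(true);           // as fetchdecode{,64}.cc would do when mod == 0xC0
    someInsnHandler(&i);
    return 0;
  }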
Macroized a few leftover calls to write_RMW_virtual_xyz()
that didn't get modified in the x86-64 merge. Really, they
just call the real function for now, but I want to have them
available to do direct writes with the guest2host TLB pointers.
but if you hand edit cpu/cpu.h, and change BxICacheEntries,
you can try different sizes. I'll make this more flexible
with configure. For now, use "--enable-icache" with no parameters.
- Modified fetchdecode.cc/fetchdecode64.cc just enough so that
instructions which encode a direct address now use a memory
resolution function which just sticks the immediate address
into rm_addr. With cached instructions we need this.
to bitfields. bxInstruction_c is now 24 bytes, including 4 for
the memory addr resolution function pointer, and 4 for the
execution function pointer (16 + 4 + 4).
Coded more accessors, to abstract access from most code.
with accessors. Had to touch a number of files to update the
accesses using the new accessors.
Moved rm_addr to the CPU structure, to slim down bxInstruction_c
and to prevent future instruction caching from getting sprayed
with writes to individual rm_addr fields. There only needs to
be one. Though I still need to deal with instructions which have
static non-modrm addresses but are using rm_addr, since
that will change.
bxInstruction_c is down to about 40 bytes now. Trying to
get down to 24 bytes.
read all param values from CPU #0. The only solution I can come up with
is to change the siminterface handler function interface to pass a void*
to the callback function. I'll take care of it eventually.
use accessors. This lets me work on compressing the
size of the fetch-decode structure (now called bxInstruction_c).
I've reduced it down to about 76 bytes. We should be able
to do much better soon. I needed the abstraction of the
accessors, so I have a lot of freedom to re-arrange things
without making massive future changes.
Lost a few percent of performance in these mods, but my
main focus was to get the abstraction.
no longer used. Also rearranged that struct a little
to be more compressed. Over time, I'm going to reduce
it further, for use with future accelerations.
enhancement to bochs. You can now configure with
--enable-guest2host-tlb.
Force the support of big pages (PSE) when x86-64 is configured.
Reverted back to only one kind of TLB entry style, since everything
is ported.
Fixed one bug in io.cc with as_64 and the index registers.
There are others, as noticed by Peter.
class declaration, for example:
static const unsigned os_64=0, as_64=0;
After reading some suggestions on usenet, I changed these into
enums instead, like this:
enum { os_64=0, as_64=0 };
be used at all, and Peter didn't want it. "extdb.o" is compiled
into libcpu.a, if configured for it.
Removed a few #warnings for x86-64 compile, based on Peter's
line-item comments regarding the warnings I inserted during
the port/merge.
printing a message when a reserved bit was set, but not causing
a #GP(0). As well, I force a new PAE support option to 1 when
Hammer support is enabled.
circular dependencies between 3 cpu related libs that I need
as part of this transition. I changed the "ar rv" to "ld -i -o"
to do an incremental load instead of an archive. Hope this
doesn't break any platforms. We can reset this later.
called cpu_mode. Now there is one for cpu32, but it is declared:
static const unsigned cpu_mode=BX_MODE_IA32;
This way the compiler can compile-out if-then-else clauses based
on it, allowing for easier code sharing.
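A sketch of the effect (mode names and values below are placeholders): because
cpu_mode is a compile-time constant in the cpu32 build, the 64-bit branch is
dead code the compiler drops.

  static const unsigned BX_MODE_IA32 = 0, BX_MODE_LONG_64 = 1;

  class cpu_mode_sketch_c {
  public:
    static const unsigned cpu_mode = BX_MODE_IA32;   // a real variable in the cpu64 build
    void branch_taken() {
      if (cpu_mode == BX_MODE_LONG_64) { /* 64-bit RIP handling, compiled out here */ }
      else                             { /* 32-bit EIP handling */ }
    }
  };

  int main() { cpu_mode_sketch_c c; c.branch_taken(); return 0; }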
Bochs debugger. The Bochs debugger calls SIM->debug_get_next_command() which
does not return until a debugger command is found. The siminterface sends an
synchronous event to the wxWindows thread with a blank to be filled in with a
debugger command. wxWindows fills in the blank and sends the synchronous
event back, and the Bochs debugger interprets it as if it was typed on
the command line. For the long term I haven't decided whether to stick with
sending text strings vs. some other method.
- so far the wxWindows debugger consists of one big dialog box that shows
all the standard registers, and a working Continue, Stop, and Step button.
- modify ParamDialog so that it is more useful as a base class, by moving
some things to protected members and fields, separating out functionality
that is most likely to be replaced into virtual functions, and making it
generally more flexible. The new CpuRegistersDialog is based on
ParamDialog.
- in wxdialog.cc, continue the convention of using wxID_HELP, wxID_OK,
wxID_CANCEL, etc. for the id's of buttons, instead of wxHELP, wxOK, etc.
which are intended to be ORred together in a bit field.
- cpu/init.cc: put ifdefs around DEFPARAMs for flags in configurations
where they don't exist. Add an eflags shadow parameter that represents all
of the bits of eflags at once. There are also boolean shadow params for
each bit.
- modified files: cpu/init.cc debug/dbg_main.cc debug/debug.h
gui/siminterface.cc gui/siminterface.h gui/wxdialog.cc gui/wxdialog.h
gui/wxmain.cc gui/wxmain.h
member functions are turned on, BX_CPU_C_PREFIX expands to nothing, and any
method that uses BX_CPU_C_PREFIX instead of explicitly writing "BX_CPU_C::"
will not be a member function at all. This makes it impossible for code
outside the BX_CPU_C object to call the accessor because sometimes the method
is at ptr_to_cpu->get_EIP() and other times you'd have to do just get_EIP().
The only way I've found to solve this is to remove the BX_CPU_C_PREFIX
and write BX_CPU_C:: instead.
- in debug/dbg_main.cc I removed the EBP, EIP, ESP, SP shortcuts. Now
the accessors are used everywhere. Also I replaced a reference to
the short-lived get_erx() accessor with ones that work: get_EAX(), etc.
- with these changes the current cvs compiles with any combination of
debugger enabled/disabled, SMP enabled/disabled, and x86-64 enabled/disabled.
BX_READ_8BIT_REG() --> BX_READ_8BIT_REGx()
BX_WRITE_8BIT_REG() --> BX_WRITE_8BIT_REGx()
They use an extra parameter "extended". I coded this
as the macro without the "x" for cpu32 compiles. This
allows for ease of merging and code sharing.
to incrementally merge files. For a test, shift16.cc is always
compiled in the cpu/ directory regardless of 32/64-bit configure.
Ultimately, all files will migrate from cpu64 to cpu.
- add get_erx() method to bx_gen_reg_t which returns the erx field of the
structure (which has a different name in cpu and cpu64). Providing
an accessor is one strategy for avoiding ugly "#ifdef BX_SUPPORT_X86_64"
statements in the rest of the code.
- cpu64/init.cc: the "eflags" before get_flag and set_flag is no longer
correct. Removed.
- modified files: load32bitOShack.cc logio.cc cpu/cpu.h cpu64/apic.cc
cpu64/cpu.h cpu64/init.cc cpu64/proc_ctrl.cc debug/dbg_main.cc
cpu64 directories. Instead of using the macros introduced in cpu.h rev 1.37
such as GetEFlagsDFLogical and SetEFlagsDF and ClearEFlagsDF, I made inline
methods on the BX_CPU_C object that access the eflags fields. The problem
with the macros is that they cannot be used outside the BX_CPU_C object. The
macros have now been removed, and all references to eflags now use these new
accessors.
- I debated whether to put the accessors as members of the BX_CPU_C object
or members of the bx_flags_reg_t struct. I chose to make them members
of BX_CPU_C for two reasons: 1. the lazy flags are implemented as
members of BX_CPU_C, and 2. the eflags are referenced in many many places
and it is more compact without having to put eflags in front of each. (The
real problem with compactness is having to write BX_CPU_THIS_PTR in front of
everything, but that's another story.)
- Kevin pointed out a major bug in my set accessor code. What a difference a
little tilde can make! That is fixed now.
- modified: load32bitOShack.cc debug/dbg_main.cc
and in both cpu and cpu64 directories:
cpu.cc cpu.h ctrl_xfer_pro.cc debugstuff.cc exception.cc flag_ctrl.cc
flag_ctrl_pro.cc init.cc io.cc io_pro.cc proc_ctrl.cc soft_int.cc
string.cc vm8086.cc
This adds a whole new directory cpu64 with the new emulation code.
Very few changes were necessary outside cpu64. To try it, configure
with --enable-x86-64 and make.
- also this adds Peter Tattam's external debugger interface.
- modified files: Makefile.in bochs.h config.h.in configure.in
load32bitOShack.cc logio.cc cpu/Makefile.in cpu/cpu.cc debug/dbg_main.cc
- added files: cpu/extdb.cc cpu/extdb.h and cpu64/*
> This is the bug fix to make the reset button work properly when the cpu
> is in the halt state. There is another patch in init.cc as well to clear
> async_event. If you don't do this, if a cpu goes into HLT, the only thing
> which will fix it is another interrupt. The reset button won't work.
a consistent way of accessing these flags that works both inside and
outside the BX_CPU class, I added inline accessor methods for each
flag: assert_FLAG(), clear_FLAG(), set_FLAG(value), and get_FLAG()
that returns its value. I use assert to mean "set the value to one"
to avoid confusion, since there's also a set method that takes a value.
(A short sketch of this accessor pattern follows at the end of this entry.)
- the eflags access macros (e.g. GetEFlagsDFLogical, ClearEFlagsTF) are
now defined in terms of the inline accessors. In most cases it will
result in the same code anyway. The major advantage of the accessors
is that they can be used from inside or outside the BX_CPU object, while
the macros can only be used from inside.
- since almost all eflags were stored in val32 now, I went ahead and
removed the if_, rf, and vm fields. Now the bit in val32 is the
"official" value for these flags, and they have accessors just like
everything else.
- init.cc: move the registration of registers until after they have been
initialized so that the initial value of each parameter is correct.
Modified files:
debug/dbg_main.cc cpu/cpu.h cpu/debugstuff.cc cpu/flag_ctrl.cc
cpu/flag_ctrl_pro.cc cpu/init.cc
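A short sketch of the accessor pattern described above, using ZF; the mask is
the real EFLAGS bit, but the class and field names are illustrative (in Bochs
the accessors are members of BX_CPU_C):

  typedef unsigned int Bit32u;

  class flags_sketch_c {
    Bit32u val32;                   // the "official" 32-bit EFLAGS image
  public:
    flags_sketch_c() : val32(0x2) {}              // bit 1 of EFLAGS always reads 1
    // assert = "set to one", to avoid confusion with the value-taking setter.
    void   assert_ZF()      { val32 |=  (1u << 6); }
    void   clear_ZF()       { val32 &= ~(1u << 6); }
    void   set_ZF(bool val) { if (val) assert_ZF(); else clear_ZF(); }
    Bit32u get_ZF() const   { return val32 & (1u << 6); }
  };

  int main() {
    flags_sketch_c f;
    f.set_ZF(true);
    return f.get_ZF() ? 0 : 1;
  }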
You need to use '--enable-global-pages' to configure in support.
If you have something to boot that uses them, give them a
spin. Really they were introduced for PPro and above, but
I haven't put in any limits. CPUID and CR4 report the proper
bits when configured, regardless of --enable-cpu-level at the
moment.
if off, we were still reading CR3 from the TSS and reloading
it! This was causing problems with a DOS extender. When
paging is turned back on, CR3 would be incorrect.
with GCC) align them with the GCC special alignment attribute.
Since there was then one available field, I split the protection
attributes and native host pointers into their own fields.
Before, with 3 dwords per TLB entry, some entries (about 3/8)
were spanning two processor cache lines (assuming a 32-byte
cache line). Now, they all fit within one cache line.
Knocked about 1.4% off Win95 boot time, probably more off normal
software runs.
BX_READ not 0. BX_READ was 10. While I was at it, I did
change BX_{READ,WRITE,RW} to {0,1,2} rather than {10,11,12}
in case that helps optimize code.
There may be more paging checks we should do before changing
any state, to avoid receiving a page fault in the middle.
I put some extra comments in there.
to request bulk IO operations to IO devices which are bulk IO aware.
Currently, I modified only harddrv.cc to be aware. I added some
fields to the bx_devices_c class for the IO instructions to
place requests and receive responses from the IO device emulation.
Devices other than the hard drive don't monitor these fields, so they
respond as normal. The hard drive now monitors these fields for
bulk requests, and if enabled, it memcpy()'s data straight from
the disk buffer to memory. This eliminates numerous inp/outp calling
sequences per disk sector.
I used the fields in bx_devices_c so that I would not have to
disrupt most IO device modules. Enhancements can be made to
other devices if they use high-bandwidth IO via in/out instructions.
All the EFLAGS bits used to be cached in separate fields. I left
a few of them in separate fields for now - might remove them
at some point also. When the arithmetic fields are known
(ie they're not in lazy mode), they are all cached in a
32-bit EFLAGS image, just like the x86 EFLAGS register expects.
All other eflags are stored in the 32-bit register also, with
a few also mirrored in separate fields for now.
The reason I did this, was so that on x86 hosts, asm() statements
can be #ifdef'd in to do the calculation and get the native
eflags results very cheaply. Just to test that it works, I
coded ADD_EdId() and ADD_EwIw() with some conditionally compiled
asm()s for accelerated eflags processing and it works.
-Kevin
it can decide how to proceed. Some of those bits are necessary
to make TLB invalidation decisions. INVLPG doesn't cause
a whole TLB flush anymore, just one page. Some of the
current CPU behaviours model the P6, especially on CR0
reloads. Earlier processors kept some pre-change pre-fetched
instructions until a branch. We could probably model that
by setting a flag, and letting the revalidate_prefetch_q
function cause serialization.
The TLB flush code only invalidates entries which are not
already invalidated for the case where the TLB invalidation
ID trick is not in use.
Read-Modify-Write instructions. The first read phase stores
the host pointer in the "pages" field if a direct use pointer
is available. The Write phase first checks if a pointer was
issued and uses it for a direct write if available.
I chose the "pages" field since it needs to be checked by the
write_RMW_virtual variants anyways, and thus needs to be
cached regardless.
Mostly the mods were to access.cc, but I did also macro-ize
the calls to write_RMW_virtual...() in files which use it
and cpu.h. Right now, the macro is just a straight pass-through.
I tried expanding it to a quick initial check for the pointer
availability to do the write in-place, with a function call
as a fall-back. That didn't seem to matter at all.
Booting is not helped by this really. The upper bound of
the gain is 5 or 6%, and that's only if you have a loop that
looks like:
label:
add [eax], ebx ;; mega read-modify-write instruction
jmp label ;; intensive loop.
Kevin Lawton says he doesn't get a performance benefit.
I'm not sure if I do. Either way, the difference isn't
very large.
This code may get removed if it turns out to be useless.
- modified files: config.h.in cpu/init.cc debug/dbg_main.cc gui/control.cc
gui/siminterface.cc gui/siminterface.h gui/wxdialog.cc gui/wxdialog.h
gui/wxmain.cc gui/wxmain.h iodev/keyboard.cc
----------------------------------------------------------------------
Patch name: patch.wx-show-cpu2
Author: Bryce Denney
Date: Fri Sep 6 12:13:28 EDT 2002
Description:
Second try at implementing the "Debug:Show Cpu" and "Debug:Show
Keyboard" dialog with values that change as the simulation proceeds.
(Nobody gets to see the first try.) This is the first step toward
making something resembling a wxWindows debugger.
First, variables which are going to be visible in the CI must be
registered as parameters. For some variables, it might be acceptable
to change them from Bit32u into bx_param_num_c and access them only
with set/get methods, but for most variables it would be a horrible
pain and wreck performance.
To deal with this, I introduced the concept of a shadow parameter. A
normal parameter has its value stored inside the struct, but a shadow
parameter has only a pointer to the value. Shadow params allow you to
treat any variable as if it was a parameter, without having to change
its type and access it using get/set methods. Of course, a shadow
param's value is controlled by someone else, so it can change at any
time.
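Conceptually, a shadow parameter is just a pointer wrapper; the class below is
a simplified stand-in for bx_shadow_num_c, not its real interface:

  typedef unsigned int Bit32u;

  class shadow_num_sketch_c {
    Bit32u *valptr;                         // points at someone else's variable
  public:
    explicit shadow_num_sketch_c(Bit32u *ptr) : valptr(ptr) {}
    Bit32u get() const   { return *valptr; }     // always reflects the live value
    void   set(Bit32u v) { *valptr = v; }        // writes through to the owner
  };

  int main() {
    Bit32u eax = 0x1234;                    // e.g. a CPU register being registered
    shadow_num_sketch_c param(&eax);
    eax = 0xdeadbeef;                       // the owner changes it at any time...
    return param.get() == 0xdeadbeef ? 0 : 1;   // ...and the param sees the new value
  }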
To demonstrate and test the registration of shadow parameters, I
added code in cpu/init.cc to register a few CPU registers and
code in iodev/keyboard.cc to register a few keyboard state values.
Now these parameters are visible in the Debug:Show CPU and
Debug:Show Keyboard dialog boxes.
The Debug:Show* dialog boxes are created by the ParamDialog class,
which already understands how to display each type of parameter,
including the new shadow parameters (because they are just a subclass
of a normal parameter class). I have added a ParamDialog::Refresh()
method, which rereads the value from every parameter that it is
displaying and changes the displayed value. At the moment, in the
Debug:Show CPU dialog, changing the values has no effect. However
this is trivial to add when it's time (just call CommitChanges!). It
wouldn't really make sense to change the values unless you have paused
the simulation, for example when single stepping with the debugger.
The Refresh() method must be called periodically or else the dialog
will show the initial values forever. At the moment, Refresh() is
called when the simulator sends an async event called
BX_ASYNC_EVT_REFRESH, created by a call to SIM->refresh_ci().
Details:
- implement shadow parameter class for Bit32s, called bx_shadow_num_c.
implement shadow parameter class for Boolean, called bx_shadow_bool_c.
more to follow (I need one for every type!)
- now the simulator thread can request that the config interface refresh
its display. For now, the refresh event causes the CI to check every
parameter it is watching and change the display value. Later, it may
be worth the trouble to keep track of which parameters have actually
changed. Code in the simulator thread calls SIM->refresh_ci(), which
creates an async event called BX_ASYNC_EVT_REFRESH and sends it to
the config interface. When it arrives in the wxWindows gui thread,
it calls RefreshDialogs(), which calls the Refresh() method on any
dialogs that might need it.
- in the debugger, SIM->refresh_ci() is called before every prompt
is printed. Otherwise, the refresh would wait until the next
SIM->periodic(), which might be thousands of cycles. This way,
when you're single stepping, the dialogs update with every step.
- To improve performance, the CI has a flag (MyFrame::WantRefresh())
which tells whether it has any need for refresh events. If no
dialogs are showing that need refresh events, then no event is sent
between threads.
- add a few defaults to the param classes that affect the settings of
newly created parameters. When declaring a lot of params with
similar settings it's more compact to set the default for new params
rather than to change each one separately. default_text_format is
the printf format string for displaying numbers. default_base is
the default base for displaying numbers (0, 16, 2, etc.)
- I added to ParamDialog to make it able to display modeless dialog
boxes such as "Debug:Show CPU". The new Refresh() method queries
all the parameters for their current value and changes the value in
the wxWindows control. The ParamDialog class still needs a little
work; for example, if it's modal it should have Cancel/Ok buttons,
but if it's going to be modeless it should maybe have Apply (commit
any changes) and Close.
- bx_gen_reg cannot be declared with BX_SMF or it can't read gen_reg
when static member functions are turned on.
- use "BX_CPU_C_PREFIX" instead of "BX_CPU_C::" for get_segment_base.
- the SMF (static member function) tricks are just plain weird. The only way to
really be sure that you're not breaking something is to try compiling it with
SMF on and with SMF off. e.g. "configure && make" and
"configure --enable-processors=2 && make".
direct reads/writes from native variables to the x86 (guest)
memory image. Look at the end of bochs.h. Don't know if that's
the right place to put them, but here you can extend these
macros to platform-specific asm() code if you like, or just
use the generic C code I supplied. Some platforms have special
instructions for byte-order swapping etc. Also, you can't
make any assumptions about the alignment of the pointers
passed.
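The generic C flavor of such an accessor might look roughly like this (the name
and exact macro form in bochs.h differ); it assumes nothing about pointer
alignment or host byte order:

  typedef unsigned char Bit8u;
  typedef unsigned int  Bit32u;

  static inline Bit32u readHostDWordLE_sketch(const Bit8u *p)
  {
    return  (Bit32u) p[0]        |
           ((Bit32u) p[1] <<  8) |
           ((Bit32u) p[2] << 16) |
           ((Bit32u) p[3] << 24);
  }

  int main() {
    Bit8u guest_bytes[4] = { 0x78, 0x56, 0x34, 0x12 };   // little-endian 0x12345678
    return readHostDWordLE_sketch(guest_bytes) == 0x12345678 ? 0 : 1;
  }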
mode uses the notion of the guest-to-host TLB. This has the
benefit of allowing more uniform and streamlined acceleration
code in access.cc which does not have to check if CR0.PG
is set, eliminating a few instructions per guest access.
Shaved just a little off execution time, as expected.
Also, access_linear now breaks accesses which span two pages,
into two calls to the physical memory routines, when paging
is off, just like it always has for paging on. Besides
being more uniform, this allows the physical memory access
routines to know that the complete data item is contained
within a single physical page, and stop reapplying the
A20ADDR() macro to pointers as it increments them.
Perhaps things can be optimized a little more now there too...
I renamed the routines to {read,write}PhysicalPage() as
a reminder that these routines now operate on data
solely within one page.
I also added a little code so that the paging module is
notified when the A20 line is tweaked, so it can dump
whatever mappings it wants to.
I have not tested these functions, but they model the format and
acceleration principles of the byte/word/dword functions. Give them
a try on both little/big endian machines.
so that a compare of the current access could be done more
efficiently against the cached values, both in the normal
paging routines, and in the accelerated code in access.cc.
This cut down the amount of code path needed to get to
direct use of a host address nicely, and speed definitely
got a boost as a result, especially if you use the
--enable-guest2host-tlb option.
The CR0.WP flag was a real pain, because it imparts
a complication on the way protections work. Fortunately
it's not a high-change flag, so I just base the new
cached info on the current CR0.WP value, and dump
the TLB cache when it changes.
checks were honoring the EFLAGS.DF bit, but assuming it was always
equal to 0 (increment upward). Plus some general cleanup of the
acceleration code.
I left the default of '--enable-repeat-speedups' to disabled, but
it seems pretty solid. Definitely adds performance for disk
heavy workloads.
access routines in access.cc, completing the upgrade of
those routines. You do need '--enable-guest2host-tlb', before
you get the speedups for now. The guest2host mods seem pretty
solid, though I do need to see what effects the A20 line has
on this cache and the paging TLB in general.
added --enable-repeat-speedups with default to disabled.
Reconfigure/recompile and the speedup code will be #ifdef'd
out for now. It manifested as junk written to the VGA screen
while booting/running Windows.
Also made some more mods to the main cpu loop. Moved the
handling of EXT/errorno outside the main loop, much like
the extra EIP/ESP commits were moved, for a little better
performance.
I changed the fetch_ptr/bytesleft method of fetching to
a slightly different model, which calculates a window
for which EIP will be valid (land on the current page),
and a bias which when applied to EIP will be from
0..upper_page_limit. Speed is about the same for either
method, but a pseudo-op/threaded-interpreter will plug
in better with this and be faster.
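A tiny sketch of the window/bias arithmetic (variable names are hypothetical,
and a 4K page is assumed): the bias maps the current page's EIP range onto
0..0xfff, and the window is recomputed only when the biased value leaves that
range.

  #include <stdio.h>
  typedef unsigned int Bit32u;

  int main() {
    Bit32u eip      = 0x0040123a;
    Bit32u eipBias  = 0u - (eip & ~0xfffu);   // maps this page's EIPs onto 0..0xfff
    Bit32u eipLimit = 0xfff;                  // last valid biased offset on the page

    Bit32u biased = eip + eipBias;            // 0x23a for the EIP above
    printf("biased=0x%03x within-window=%d\n", biased, biased <= eipLimit);
    return 0;
  }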
- Paging code rehash. You must now use --enable-4meg-pages to
use 4Meg pages, with the default of disabled, since we don't
support 4Meg pages well yet. Page table walks model a real CPU
more closely now, and I fixed some bugs in the old logic.
- Segment check redundancy elimination. After a segment is loaded,
reads and writes are marked when a segment type check succeeds, and
they are skipped thereafter, when possible.
- Repeated IO and memory string copy acceleration. Only some variants
of instructions are available on all platforms, word and dword
variants only on x86 for the moment due to alignment and endian issues.
This is compiled in currently with no option - I should add a configure
option.
- Added a guest linear address to host TLB. Actually, I just stick
the host address (mem.vector[addr] address) in the upper 29 bits
of the field 'combined_access' since they are unused. Convenient
for now. I'm only storing page frame addresses. This was the
simplest form of such a TLB. We can likely enhance this. Also,
I only accelerated the normal read/write routines in access.cc.
Could also modify the read-modify-write versions too. You must
use --enable-guest2host-tlb, to try this out. Currently speeds
up Win95 boot time by about 3.5% for me. More ground to cover...
- Minor mods to CPUI/MOV_CdRd for CMOV.
- Integrated enhancements from Volker to getHostMemAddr() for PCI
being enabled.
for BX_CPU_LEVEL >= 6, and to have the CMOV instructions generate
an undefined opcode exception after printing info that they were
called, if BX_CPU_LEVEL <= 5. I suppose we could have a separate
configure option, but mirroring Intel, CMOV is available as of
Pentium Pro.
For now, you have to compile with --enable-cpu-level=6 for CMOV
support to be compiled in.
Specific changes from the patch:
1.) renamed fdcache_eip to fdcache_ip, as it is using
the RIP instead of the EIP.
2.) added a Boolean array fdcache_is32 which uses is32
to determine icache hits. Otherwise we could run 32-bit
code as 16-bit or vice versa.
Modified Files:
config.h.in cpu/cpu.cc cpu/cpu.h memory/memory.cc
fixed in patch.smp-instr-trace for Bochs 1.3, but the patch conflicted
with the latest source. It was simple enough to just make the changes by
hand. This should fix bug [ #532321 ] SMP debug: trace-on fails
invalid. This fixes the misleading panic message:
bx_local_apic_c::match_logical_addr: cluster model addressing not
implemented, which was printed even if the OS did not request cluster
addressing.
My code did a panic if you tried to read the EOI register (the panic
message was wrong but the concept was right). However it turns out
some OSes do actually read this register--hopefully they ignore the
result. So it should not panic.
and they added a panic. Apparently this instruction is not used very often
because it went for a long time before anyone noticed. Peter Tattam started
running into the panic while emulating his OS called Petros, and through
a comparison between vmware and bochs results he believes that enter is
doing the right thing. So, I have changed the panic into a BX_ERROR for now,
and added code to ensure that it only gets printed once per bochs run.
BEFORE it is executed. Print the registers at this time, BEFORE the
instruction, since they are the values BEFORE the instruction is executed.
The important result of this is that in TRACE output, both the instruction
causing an exception and the first instruction of the exception handler
are BOTH printed.
I'm working on getting this behavior in the debugger user-interface.
Modified Files:
cpu/cpu.cc debug/dbg_main.cc
It did an exception, and then the real code seemed to be commented out
with an #if 0...#endif. I put a panic there, asking people to please
report how they arrived at that condition, and enabled the #if 0 code.
This was pointed out by luca abeni <l_abeni@hotmail.com>.
which were generated with gcc -MM to the end of each Makefile.in
so that make understands which files depend on which. Basically,
everything depends on bochs.h, which depends on everything, which
is not ideal.
that TICK is called so I put a trace call just before each TICK.
This seems best, since the trace has a chance to print before the tick
can trigger time-based events elsewhere in the system.
[ #433759 ] virtual address checks can overflow
> Bochs has been crashing in some cases when you try to access data which
> overlaps the segment limit, when the segment limit is near the 32-bit
> boundary. The example that came up a few times is reading/writing 4 bytes
> starting at 0xffffffff when the segment limit was 0xffffffff. The
> condition used to compare offset+length-1 with the limit, but
> offset+length-1 was overflowing so the comparison went wrong. This patch
> changes the condition so that it supports all segment limits except for
> sizes 0,1,2,3 bytes. Dave and I figured that these sizes would not be
> needed, while size 0xffffffff is used quite a lot.
has run. This ensures that the prev_eip and prev_esp that are used
for tracing and breakpoint checks are correct even in the cycle after
an interrupt or trap.
> This patch fixes a number of debugger problems.
> - with trace-on, simulation time would pass 5x faster than usual, so
> interrupts and other timed events would happen at different times
> - with trace-on, breakpoints were ignored
> - with trace-on, control-C would not stop the processor and return to the
> debugger.
>
> This patch changes the execution quantum for the debugger to 1, which means
> that cpu_loop is asked to do one instruction at a time. This may cause
> bochs with the debugger to be slower than before.
>
> I haven't tested without the debugger yet, so I don't know if the timing
> of events matches or not.
in an output format similar to gdb (when you do info all-registers).
Also, if you do "info all" you get the CPU registers and the FPU
registers.
- added bx_cpu_c method called fpu_print_regs, which is implemented
in wmFPUemu_glue.cc
by thomas.petazzoni@meridon.com. Bryce introduced this bug in
revision 1.9 when he split the code into separate #ifdefs for single
CPU and multiple CPU. Comments on the patch are:
> The following patch addresses a bug concerning the exception 1 (debug)
> which is being raised during HALT under certain conditions. It
> appears only on recent versions (1.2.1 or last CVS), and not on
> version 2000-0104.
BX_SUPPORT_APIC were used. To follow the pattern used by other
names like this, I changed them all to BX_SUPPORT_APIC.
Thanks to Tom Lindström for chasing this down!
BX_CPU_C bx_cpu;
BX_MEM_C bx_mem;
and when more than one processor, use
BX_CPU_C *bx_cpu_array[BX_SMP_PROCESSORS];
BX_MEM_C *bx_mem_array[BX_ADDRESS_SPACES];
The changeover is controlled by BX_SMP_PROCESSORS, but there are only
a few code changes since nearly all code uses the BX_CPU(n) and BX_MEM(n)
macros.
- This turns out to make a 10% speed difference! With this revision,
the CVS version now gets 95% of the performance of the 3/25/2000
snapshot, which I've been using as my baseline.
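Roughly, the BX_CPU(n)/BX_MEM(n) arrangement described above works like this;
the stub classes and constants are only here to make the sketch
self-contained, and the real definitions in bochs.h may differ in detail:

  #define BX_SMP_PROCESSORS  1
  #define BX_ADDRESS_SPACES  1

  class BX_CPU_C { /* ... */ };
  class BX_MEM_C { /* ... */ };

  #if BX_SMP_PROCESSORS == 1
  BX_CPU_C bx_cpu;
  BX_MEM_C bx_mem;
  #define BX_CPU(x)  (&bx_cpu)              // single-processor build: plain globals
  #define BX_MEM(x)  (&bx_mem)
  #else
  BX_CPU_C *bx_cpu_array[BX_SMP_PROCESSORS];
  BX_MEM_C *bx_mem_array[BX_ADDRESS_SPACES];
  #define BX_CPU(x)  (bx_cpu_array[x])      // SMP build: indexed arrays
  #define BX_MEM(x)  (bx_mem_array[x])
  #endif

  int main() {
    BX_CPU_C *cpu0 = BX_CPU(0);             // callers look the same either way
    BX_MEM_C *mem0 = BX_MEM(0);
    (void) cpu0; (void) mem0;
    return 0;
  }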
is the first attempt to regain the performance of pre-SMP bochs
(1.1.2). When simulating only one processor, stay in cpu_loop forever
as pre-SMP versions did. The overhead of returning from cpu_loop over
and over was slowing us down.
tries to fix it. The shortcuts to register names such as AX and DL are
#defines in cpu/cpu.h, and they are defined in terms of BX_CPU_THIS_PTR.
When BX_USE_CPU_SMF=1, this works fine. (This is what bochs used for
a long time, and nobody used the SMF=0 mode at all.) To make SMP bochs
work, I had to get SMF=0 mode working for the CPU so that there could
be an array of cpus.
When SMF=0 for the CPU, BX_CPU_THIS_PTR is defined to be "this->" which
only works within methods of BX_CPU_C. Code outside of BX_CPU_C must
reference BX_CPU(num) instead.
- to try to enforce the correct use of AL/AX/DL/etc. shortcuts, they are
now only #defined when "NEED_CPU_REG_SHORTCUTS" is #defined. This is
only done in the cpu/*.cc code.
in BRANCH-smp-bochs revisions.
- The general task was to make multiple CPU's which communicate
through their APICs. So instead of BX_CPU and BX_MEM, we now have
BX_CPU(x) and BX_MEM(y). For an SMP simulation you have several
processors in a shared memory space, so there might be processors
BX_CPU(0..3) but only one memory space BX_MEM(0). For cosimulation,
you could have BX_CPU(0) with BX_MEM(0), then BX_CPU(1) with
BX_MEM(1). WARNING: Cosimulation is almost certainly broken by the
SMP changes.
- to simulate multiple CPUs, you have to give each CPU time to execute
in turn. This is currently implemented using debugger guards. The
cpu loop steps one CPU for a few instructions, then steps the
next CPU for a few instructions, etc.
- there is some limited support in the debugger for two CPUs, for
example printing information from each CPU when single stepping.
sourceforge patch #423726. He writes:
> you'll find as attachment a little patch which make
> bochs support the double fault. currently, when 2 pages
> fault occur, bochs does not generate a double fault (as
> the Intel documentation says) but do
> generate a other page fault, which make a triple fault,
> and bochs will exit.
>
> this very little patch make bochs support this double
> fault, which is
> used in our OS in order to dynamically increse kernel
> level stacks.
To see the commit logs for this use either cvsweb or
cvs update -r BRANCH-io-cleanup and then 'cvs log' the various files.
In general this provides a generic interface for logging.
logfunctions:: is a class that is inherited by some classes, and also
. allocated as a standalone global called 'genlog'. All logging uses
. one of the ::info(), ::error(), ::ldebug(), ::panic() methods of this
. class through 'BX_INFO(), BX_ERROR(), BX_DEBUG(), BX_PANIC()' macros
. respectively.
.
. An example usage:
. BX_INFO(("Hello, World!\n"));
iofunctions:: is a class that is allocated once by default, and assigned
as the iofunction of each logfunctions instance. It is this class that
maintains the file descriptor and other output related code, at this
point using vfprintf(). At some future point, someone may choose to
write a gui 'console' for bochs to which messages would be redirected
simply by assigning a different iofunction class to the various logfunctions
objects.
More cleanup is coming, but this works for now. If you want to see a lot
of debugging output, in main.cc, change onoff[LOGLEV_DEBUG]=0 to =1.
Comments, bugs, flames, to me: todd@fries.net