Commit Graph

44 Commits

Author SHA1 Message Date
Stanislav Shwartsman
7d80a6ebe0 Adding Id and Rev property to all files 2011-02-24 21:54:04 +00:00
Stanislav Shwartsman
3dfcfd0ccd Split shift opcodes | optimize SAR opcode 2010-05-18 07:28:05 +00:00
Stanislav Shwartsman
bd60e0264c change Copyright to Bochs Project 2009-12-04 16:53:12 +00:00
Stanislav Shwartsman
cfa3611a5f bugfixes, comment fixes, compilation fix in VMX 2009-06-20 20:39:51 +00:00
Stanislav Shwartsman
61cb00c149 Fixed shift flags (thanks Darek for showing the bug) 2009-06-20 09:10:48 +00:00
Stanislav Shwartsman
9929e6ed78 - updated FSF address 2009-01-16 18:18:59 +00:00
Stanislav Shwartsman
5dd02b26e3 Make the RmAddr calculation even more efficient - a good optimizing compiler can now generate better code than before 2008-08-08 09:22:49 +00:00
Stanislav Shwartsman
167c7075fb Use the fastcall gcc attribute for all cpu execution functions - this pure "compiler helper" optimization brings an additional 2% speedup to Bochs code 2008-03-22 21:29:41 +00:00
Stanislav Shwartsman
a2897933a3 white space cleanup 2008-02-02 21:46:54 +00:00
Stanislav Shwartsman
37fbb82baa Cleanups. Move bxInstruction_c definition to separate file instr.h 2008-01-29 17:13:10 +00:00
Stanislav Shwartsman
d9984bb3a1 Eliminate the BxResolve call from the heart of the cpu loop and move it into the instructions that really require this calculation. Yes, it bloats the code of EVERY CPU method, but it gives a >15% speedup! 2008-01-10 19:37:56 +00:00
Stanislav Shwartsman
eee1a9030d Simplify and optimize the shift instructions a bit
Print failed segment info in check_cs - more debug info
2007-12-30 20:16:35 +00:00
Stanislav Shwartsman
5d4e32b8da Avoid pointer params for every read_virtual_* except 16-byte SSE and 10-byte x87 reads 2007-12-20 20:58:38 +00:00
Stanislav Shwartsman
6fcc7d34ab Next step in the lazy flags optimization by Darek Mihocka -
get rid of shifts in the lazy flags code
2007-12-06 20:39:11 +00:00
Stanislav Shwartsman
d739cca282 small cleanup 2007-12-06 18:35:33 +00:00
Stanislav Shwartsman
d54d537f81 One more step for lazy flags optimization 2007-12-06 16:57:59 +00:00
Stanislav Shwartsman
506dc3d963 Optimize 64-bit fetchdecode prefix handling
Deprecated the set_FLAG() method; the setB_FLAG() method was used everywhere.
Rename setB_FLAG to set_FLAG, so set_FLAG() must now receive only 0/1.
2007-11-20 23:00:44 +00:00
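A minimal sketch of why the renamed setter must receive a strict 0/1 (macro and bit position are illustrative, not the actual cpu.h definition): the boolean is shifted straight into the flag's bit position, so any other non-zero value would corrupt neighboring flag bits.

  // illustrative only: ZF lives at bit 6 of EFLAGS, and (b) must be exactly 0 or 1
  #define SET_ZF(eflags, b)  ((eflags) = ((eflags) & ~(1u << 6)) | ((unsigned)(b) << 6))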
Stanislav Shwartsman
5ec15df46d Split more EbIb opcodes 2007-11-17 18:08:46 +00:00
Stanislav Shwartsman
d5a58e1df2 Split more opcodes - G3 group 2007-11-17 16:20:37 +00:00
Stanislav Shwartsman
42fdd8a3a1 During Bochs benchmarking I figured out that hostasm actually slows down the emulation ... so remove this ugly code, which also doesn't help :)
Speed up the flags update for some instructions - the idea was taken from the DT patch by h.johansson
2007-10-21 22:07:33 +00:00
Stanislav Shwartsman
5c3fba4399 Support access to SMRAM in memory object
Cleanup in CPU code
2006-03-26 18:58:01 +00:00
Stanislav Shwartsman
7b6c2587a9 Now devices can be compiled separately from the CPU.
Everything that requires a cpu.h include now has it explicitly, and there are a lot of files that don't depend on the CPU at all, which will compile a lot faster now ...
2006-03-06 22:03:16 +00:00
Stanislav Shwartsman
7022be46f5 Fix undefined flags handling for ROR and RCR instructions 2005-10-13 19:28:10 +00:00
Stanislav Shwartsman
2bfc842c09 CPU fixes by Kevin Lawton 2005-02-16 21:27:21 +00:00
Stanislav Shwartsman
d142f23242 Fixed undocumented flags handling for SHLD instruction
Added lazy flags for SHLD instruction
Bugfix and speedup in SHLD and SHRD instructions
2004-12-24 22:44:13 +00:00
Stanislav Shwartsman
3916754e30 speedup and cleanup 2004-09-04 19:37:37 +00:00
Stanislav Shwartsman
8953bfaffb Fixed flags handling for SHLD and rotate instructions 2004-08-27 20:13:32 +00:00
Stanislav Shwartsman
c2b7f183af more correct implementation 2004-08-15 20:31:27 +00:00
Stanislav Shwartsman
eefdcece6f Speed up the BTC instruction
Speed up the SAR instruction by implementing lazy flags for it
2004-08-15 20:12:05 +00:00
Stanislav Shwartsman
8f0cf91fff This commit is the first in a long series of changes that have several purposes:
1. Review and commit the patch

	[ 896733 ] Lazy flags, for more instructions, only 1 src op

   Maybe only partially, but I hope to get all of the patch's ideas in.

2. Get a Bochs speedup from the lazy flags optimization

3. Most important for me: improve correctness of emulation by handling several
   undocumented EFLAGS modifications. And finally pass

	UFLAGS - Undefined Flags Test v 3.0
	Copyright (C) Potemkin's Hackers Group (PHG) 1989,1995

   The test still fails on > 50% of its checks.
2004-08-09 21:28:47 +00:00
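A rough sketch of the lazy-flags idea behind this series (structure and names are assumptions, not the actual Bochs implementation): each flag-setting instruction records only its result and a single source operand, and individual flags are derived on demand when something reads them.

  #include <cstdint>

  struct LazyFlagsSketch {
      uint32_t result;   // result of the last flag-setting instruction
      uint32_t op1;      // one source operand, as in the patch referenced above
      int      instr;    // which formula to apply when a flag is requested
  };

  static LazyFlagsSketch lf;

  inline bool lazy_ZF() { return lf.result == 0; }          // ZF needs only the result
  inline bool lazy_SF() { return (lf.result >> 31) & 1; }   // SF is just the sign bit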
Stanislav Shwartsman
b84f0bd0f2 This was not a cleanup. Those macros were intentionally
there to offer a way to substitute more efficient code
to do the RMW cases.  At the moment, they just map to
the normal functions.

Sorry, restored the previous version ...
2002-10-25 18:26:29 +00:00
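A minimal sketch of the pattern being restored here (the macro form is an assumption; the underlying function name follows later log entries): call sites go through a macro so a more efficient in-place RMW implementation can be substituted later, while for now it simply forwards to the normal function.

  // today: a plain pass-through; later it could expand to a direct host-pointer write
  #define WRITE_RMW_VIRTUAL_DWORD(val32)  write_RMW_virtual_dword(val32)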
Stanislav Shwartsman
a0c1fd60e6 Just a little cleanup of a macro duplicating existing code 2002-10-25 17:23:34 +00:00
Kevin Lawton
b742ccec7e Changed eflags accessors for get_?F() to use (val32 & (1<<N)) instead
of (1 & (val32>>N)), and added a getB_?F() accessor for special
  cases which need a strict binary value (exactly 0 or 1).  Most
  code only needed a value for logical comparison.  I modified the
  special cases which do need a binary number for shifting and
  comparison between flags, to use the special getB_?F() accessor.

Cleaned up memory.cc functions a little, now that all accesses
  are within a single page.

Fixed a (not very likely encountered) bug in fetchdecode.cc (and
  fetchdecode64.cc) where a 2-byte opcode starting with a prefix
  starts at the last offset on a page.  There were no checks
  on the segment overrides for a boundary condition.  I added them.

The eflags enhancements added just a tiny bit of performance.
2002-09-22 18:22:24 +00:00
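A minimal sketch of the two accessor styles this commit contrasts (ZF's bit position is shown for illustration; the real definitions live in cpu.h):

  // get_?F(): any non-zero value means "flag set" -- a single AND, enough for logical tests
  #define get_ZF()   (eflags_val32 & (1 << 6))

  // getB_?F(): strict 0 or 1, for code that shifts or compares flag values
  #define getB_ZF()  ((eflags_val32 >> 6) & 1)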
Kevin Lawton
402d02974d Moved the EFLAGS.RF check and clearing of inhibit_mask code
in cpu.cc out of the main loop, and into the asynchronous
events handling.  I went through all the code paths, and
there doesn't seem to be any reason for that code to be
in the hot loop.

Added another accessor for getting instruction data, called
modC0().  A lot of instructions test whether the mod field
of mod-nnn-rm is 0xc0 or not, i.e., it's a register operation
and not memory.  So I flag this in fetchdecode{,64}.cc.
This added on the order of 1% performance improvement for
a Win95 boot.

Macroized a few leftover calls to Write_RMV_virtual_xyz()
that didn't get modified in the x86-64 merge.  Really, they
just call the real function for now, but I want to have them
available to do direct writes with the guest2host TLB pointers.
2002-09-20 03:52:59 +00:00
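A small sketch of the modC0() accessor idea (field and method names are illustrative): the decoder records once whether the mod field of mod-nnn-rm equals 0xc0, so execution handlers can branch on a register-vs-memory flag without re-testing the raw byte.

  struct InsnSketch {
      unsigned modrm;      // raw mod-nnn-rm byte captured by fetchdecode
      unsigned regForm;    // set once at decode time
      unsigned modC0() const { return regForm; }
  };

  // decode:   i.regForm = ((i.modrm & 0xc0) == 0xc0);
  // execute:  if (i.modC0()) { /* register operand */ } else { /* memory operand */ }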
Kevin Lawton
4e51dcae40 Converted all the remaining available separate fields in bxInstruction_c
to bitfields.  bxInstruction_c is now 24 bytes, including 4 for
the memory addr resolution function pointer, and 4 for the
execution function pointer (16 + 4 + 4).

Coded more accessors, to abstract access from most code.
2002-09-18 08:00:43 +00:00
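An illustrative layout only (field names and widths are assumptions, not the real bxInstruction_c): packed bitfield words for the decoded metadata plus the two function pointers, which is how the 16 + 4 + 4 bytes add up on a 32-bit build.

  #include <cstdint>

  class InsnSketch;
  typedef void (*ExecutePtr)(InsnSketch *);

  class InsnSketch {
      uint32_t   metaInfo;     // opcode, mod-nnn-rm and prefix state packed as bitfields
      uint32_t   metaData[3];  // immediate/displacement bookkeeping (16 bytes of data total)
      ExecutePtr resolve;      // memory address resolution function pointer (+4)
      ExecutePtr execute;      // execution function pointer (+4)
  public:
      unsigned opcode() const { return metaInfo & 0x3ff; }  // access goes through accessors
  };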
Kevin Lawton
6723ca9bf4 Moved more separate fields in the bxInstruction_c into bitfields
with accessors.  Had to touch a number of files to update the
access using the new accessors.

Moved rm_addr to the CPU structure, to slim down bxInstruction_c
and to prevent future instruction caching from getting sprayed
with writes to individual rm_addr fields.  There only needs to
be one.  Though need to deal with instructions which have
static non-modrm addresses, but which are using rm_addr since
that will change.

bxInstruction_c is down to about 40 bytes now.  Trying to
get down to 24 bytes.
2002-09-18 05:36:48 +00:00
Kevin Lawton
07b0df2a8a Updated accessing of modrm/sib addressing information to
use accessors.  This lets me work on compressing the
size of fetch-decode structure (now called bxInstruction_c).

I've reduced it down to about 76 bytes.  We should be able
to do much better soon.  I needed the abstraction of the
accessors, so I have a lot of freedom to re-arrange things
without making massive future changes.

Lost a few percent of performance in these mods, but my
main focus was to get the abstraction.
2002-09-17 22:50:53 +00:00
Kevin Lawton
ac7ca2b035 Changed cpu64 calls to macros:
BX_READ_8BIT_REG()  --> BX_READ_8BIT_REGx()
  BX_WRITE_8BIT_REG() --> BX_WRITE_8BIT_REGx()
They use an extra parameter "extended".  I coded this
as the macro without the "x" for cpu32 compiles.  This
allows for ease of merging and code sharing.
2002-09-13 17:04:14 +00:00
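A hedged sketch of the macro pairing described here (the helper read_8bit_reg and the config guard are hypothetical): the "x" variant carries the extra "extended" argument needed on 64-bit builds, where a REX prefix selects SPL/BPL/SIL/DIL instead of AH/CH/DH/BH, while 32-bit compiles drop that argument so the shared call sites still build.

  #if SUPPORT_X86_64
    // 64-bit build: the extra flag picks between the legacy and REX byte registers
    #define READ_8BIT_REGx(index, extended)  read_8bit_reg((index), (extended))
  #else
    // 32-bit build: the flag is ignored, so merged cpu64 code compiles unchanged
    #define READ_8BIT_REG(index)             read_8bit_reg((index), 0)
    #define READ_8BIT_REGx(index, extended)  READ_8BIT_REG(index)
  #endif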
Kevin Lawton
491035fcb2 I extended the guest-to-host TLB acceleration across the
Read-Modify-Write instructions.  The first read phase stores
the host pointer in the "pages" field if a direct use pointer
is available.  The Write phase first checks if a pointer was
issued and uses it for a direct write if available.

I chose the "pages" field since it needs to be checked by the
write_RMW_virtual variants anyways and thus needs to be
cached anyways.

Mostly the mods were to access.cc, but I did also macro-ize
the calls to write_RMW_virtual...() in files which use it
and cpu.h.  Right now, the macro is just a straight pass-through.
I tried expanding it to a quick initial check for pointer
availability to do the write in-place, with a function call
as a fall-back.  That didn't seem to matter at all.

Booting is not helped by this really.  The upper bound of
the gain is 5 or 6%, and that's only if you have a loop that
looks like:

label:
  add [eax], ebx   ;; mega read-modify-write instruction
  jmp label        ;; intensive loop.
2002-09-06 21:54:58 +00:00
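A rough sketch of the read/write split described above (names and signatures are assumptions, not the real access.cc interface): the read phase caches a direct host pointer when the guest-to-host TLB can supply one, and the write phase reuses it for an in-place store, falling back to the normal virtual-write path otherwise.

  #include <cstdint>

  struct RmwSketch {
      uint8_t *hostPtr;   // non-null: the read phase found a direct host mapping
  };

  uint32_t rmw_read_dword(RmwSketch &s, uint8_t *hostPage, uint32_t pageOffset)
  {
      s.hostPtr = hostPage ? hostPage + pageOffset : nullptr;
      if (s.hostPtr)
          return *reinterpret_cast<uint32_t *>(s.hostPtr);   // direct read through the cached pointer
      return 0;  // placeholder for the slow path: full guest-to-host translation
  }

  void rmw_write_dword(RmwSketch &s, uint32_t value)
  {
      if (s.hostPtr)
          *reinterpret_cast<uint32_t *>(s.hostPtr) = value;  // in-place write, no second translation
      // else: fall back to the normal write_RMW_virtual...() path
  }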
Bryce Denney
daf2a9fb55 - add RCS Id to header of every file. This makes it easier to know what's
going on when someone sends in a modified file.
2001-10-03 13:10:38 +00:00
Bryce Denney
49664f7503 - parts of the SMP merge apparently broke the debugger and this revision
tries to fix it.  The shortcuts to register names such as AX and DL are
  #defines in cpu/cpu.h, and they are defined in terms of BX_CPU_THIS_PTR.
  When BX_USE_CPU_SMF=1, this works fine.  (This is what bochs used for
  a long time, and nobody used the SMF=0 mode at all.)  To make SMP bochs
  work, I had to get SMF=0 mode working for the CPU so that there could
  be an array of cpus.

  When SMF=0 for the CPU, BX_CPU_THIS_PTR is defined to be "this->" which
  only works within methods of BX_CPU_C.  Code outside of BX_CPU_C must
  reference BX_CPU(num) instead.
- to try to enforce the correct use of AL/AX/DL/etc. shortcuts, they are
  now only #defined when "NEED_CPU_REG_SHORTCUTS" is #defined.  This is
  only done in the cpu/*.cc code.
2001-05-24 18:46:34 +00:00
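A simplified sketch of the SMF wiring this entry describes (the SMF=1 expansion and the register field layout are illustrative; the SMF=0 expansion to "this->" is taken from the text above); the register shortcuts are additionally fenced behind NEED_CPU_REG_SHORTCUTS so only cpu/*.cc sees them.

  #if USE_CPU_SMF
    // single static CPU object: members can be addressed directly (illustrative)
    #define CPU_THIS_PTR  bx_cpu.
  #else
    // valid only inside BX_CPU_C methods; all other code must go through BX_CPU(num)
    #define CPU_THIS_PTR  this->
  #endif

  #if defined(NEED_CPU_REG_SHORTCUTS)
    #define AX  (CPU_THIS_PTR gen_reg[0].word)   // field layout is illustrative
  #endif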
Todd T.Fries
bdb89cd364 merge in BRANCH-io-cleanup.
To see the commit logs for this use either cvsweb or
cvs update -r BRANCH-io-cleanup and then 'cvs log' the various files.

In general this provides a generic interface for logging.

logfunctions:: is a class that is inherited by some classes, and also
    allocated as a standalone global called 'genlog'.  All logging uses
    one of the ::info(), ::error(), ::ldebug(), ::panic() methods of this
    class through the 'BX_INFO(), BX_ERROR(), BX_DEBUG(), BX_PANIC()' macros
    respectively.

    An example usage:
      BX_INFO(("Hello, World!\n"));

iofunctions:: is a class that is allocated once by default, and assigned
as the iofunction of each logfunctions instance.  It is this class that
maintains the file descriptor and other output related code, at this
point using vfprintf().  At some future point, someone may choose to
write a gui 'console' for bochs to which messages would be redirected
simply by assigning a different iofunction class to the various logfunctions
objects.

More cleanup is coming, but this works for now.  If you want to see a lot
of debugging output, in main.cc, change onoff[LOGLEV_DEBUG]=0 to =1.

Comments, bugs, flames, to me: todd@fries.net
2001-05-15 14:49:57 +00:00
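A compact sketch of the plumbing described above (class and member names are simplified, not the real logfunctions/iofunctions interface); the double parentheses in BX_INFO(("Hello, World!\n")) let a single macro parameter carry a whole printf-style argument list.

  #include <cstdarg>
  #include <cstdio>

  class IoSketch {                        // owns the output descriptor, does the formatting
  public:
      void out(const char *prefix, const char *fmt, va_list ap) {
          std::fprintf(stderr, "%s: ", prefix);
          std::vfprintf(stderr, fmt, ap);
      }
  };

  class LogSketch {                       // inherited by devices; also one global 'genlog'
      IoSketch *io;
  public:
      explicit LogSketch(IoSketch *i) : io(i) {}
      void info(const char *fmt, ...) {
          va_list ap; va_start(ap, fmt);
          io->out("INFO", fmt, ap);
          va_end(ap);
      }
  };

  // #define BX_INFO(x)  genlog->info x
  // so BX_INFO(("Hello, World!\n")) expands to genlog->info("Hello, World!\n")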
Bryce Denney
a6fef54678 - update copyright dates to 2001 for all mandrake headers
- for bochs files with other header, replaced with current mandrake header
2001-04-10 02:20:02 +00:00
cvs
beff63eb32 - entered original Bochs snapshot bochs-2000_0325a.tar.gz from
ftp.bochs.com
2001-04-10 01:04:59 +00:00