Commit Graph

76 Commits

Author SHA1 Message Date
Stanislav Shwartsman
cb0eee9456 disasm fixes 2013-10-07 19:02:53 +00:00
Stanislav Shwartsman
279c61dc67 updated + fixed instrumentation example for instr histogram, code cleanup in the cpu 2012-03-28 21:11:19 +00:00
Stanislav Shwartsman
13feb0772a - 10% emulation speedup with the handlers chaining optimization implemented. The
feature is enabled by default when configuring with the --enable-all-optimizations
option; to disable the handlers chaining speedups, configure with
--disable-handlers-chaining
2011-08-21 14:31:08 +00:00
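A minimal sketch of the handlers chaining idea behind the commit above, under simplifying assumptions (the Instr/Handler types and the add_imm handlers are illustrative, not the real Bochs structures): instead of returning to a central fetch-dispatch loop after every instruction, each handler invokes the handler of the next decoded instruction directly, so the loop overhead and its indirect-branch mispredictions are bypassed.

```cpp
// Hypothetical, simplified model of handlers chaining; names are
// illustrative and do not match the real Bochs sources.
#include <cstdio>

struct Instr;
typedef void (*Handler)(Instr*);

struct Instr {
    Handler execute;  // handler routine for this decoded instruction
    Instr*  next;     // next instruction in the decoded trace (NULL at end)
    long    imm;      // example operand
};

static long acc = 0;

// Classic dispatch: return to a central loop after every instruction.
static void add_imm(Instr* i) { acc += i->imm; }

static void dispatch_loop(Instr* i) {
    for (; i; i = i->next)
        i->execute(i);          // indirect call, then back to the loop
}

// Chained dispatch: the handler itself invokes the next handler, so the
// central loop is skipped; with optimization this becomes a tail jump.
static void add_imm_chained(Instr* i) {
    acc += i->imm;
    if (i->next) i->next->execute(i->next);
}

int main() {
    Instr i2 = { add_imm, nullptr, 2 };
    Instr i1 = { add_imm, &i2,     1 };
    dispatch_loop(&i1);

    Instr c2 = { add_imm_chained, nullptr, 20 };
    Instr c1 = { add_imm_chained, &c2,     10 };
    c1.execute(&c1);            // enter the chain once

    printf("acc = %ld\n", acc); // 33
    return 0;
}
```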
Stanislav Shwartsman
002c86660a rework all the CPU code in preparation for a future CPU speedup implementation.
Bochs emulation can be another 10-15% faster using the technique described in the paper
"Fast Microcode Interpretation with Transactional Commit/Abort"
http://amas-bt.cs.virginia.edu/2011proceedings/amasbt2011-p3.pdf
2011-07-06 20:01:18 +00:00
Stanislav Shwartsman
7d80a6ebe0 Adding Id and Rev property to all files 2011-02-24 21:54:04 +00:00
Stanislav Shwartsman
bd60e0264c change Copyright to Bochs Project 2009-12-04 16:53:12 +00:00
Stanislav Shwartsman
9929e6ed78 - updated FSF address 2009-01-16 18:18:59 +00:00
Stanislav Shwartsman
a2e07ff971 - Removed the --enable-guest2host-tlb configure option. The option will
always be enabled for any Bochs configuration.
2008-12-11 21:19:38 +00:00
Stanislav Shwartsman
d7fa44d270 optimize code access detection 2008-12-05 22:34:42 +00:00
Stanislav Shwartsman
0f5f075e4d - Fixed theoretically possible pageWriteStamp overflow 2008-10-21 19:50:05 +00:00
Stanislav Shwartsman
577c8c7969 another way to do the same optimization 2008-10-08 20:40:26 +00:00
Stanislav Shwartsman
17040303f7 Optimization of repeat string instructions 2008-10-08 20:15:37 +00:00
Stanislav Shwartsman
23933d731c Remove 4G limit optimization that didn't work quite well 2008-09-08 20:47:33 +00:00
Stanislav Shwartsman
6398ebb1d4 First step of access bits cleanup and optimization - no perf gain yet 2008-08-03 19:53:09 +00:00
Stanislav Shwartsman
65275ffc02 Remove repeat speedups from 16-bit address size methods - they're not going to speed things up anyway because of the segment limit issue 2008-06-25 10:34:21 +00:00
Stanislav Shwartsman
92568f7525 Faster 32-bit emulation with 64-bit enabled mode.
~10% speedup by optimization of 32-bit mem access
2008-06-12 19:14:40 +00:00
Stanislav Shwartsman
ec1ff39a5f Split memory access methods for 32- and 64-bit code.
The 64-bit code got a >10% speedup, and the 32-bit code also got about 2% from the laddr calculation optimization
2008-05-10 18:10:53 +00:00
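A rough illustration of why splitting the memory access paths by address size helps, using made-up helper names rather than the actual Bochs methods: a single generic path must test the CPU mode and widen everything to 64 bits on every access, while a dedicated 32-bit path can compute the linear address entirely in 32-bit arithmetic, with the 4G wrap coming for free.

```cpp
// Hypothetical illustration of split 32-bit / 64-bit linear address
// calculation; types and names are illustrative, not from Bochs.
#include <cstdint>

struct Segment { uint64_t base; uint32_t limit; };

// Generic path: has to test the CPU mode on every access.
uint64_t laddr_generic(bool long64, const Segment& s, uint64_t offset) {
    if (long64)
        return s.base + offset;                    // 64-bit, no wrap
    return (uint32_t)(s.base + (uint32_t)offset);  // 32-bit, wraps at 4G
}

// Split paths: the mode check disappears, and the 32-bit version can do
// the whole linear-address calculation in 32-bit arithmetic.
uint32_t laddr32(const Segment& s, uint32_t offset) {
    return (uint32_t)s.base + offset;              // wraps naturally
}

uint64_t laddr64(const Segment& s, uint64_t offset) {
    return s.base + offset;                        // base is usually 0
}
```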
Stanislav Shwartsman
50c9674d2e Small optimization in memory access functions 2008-05-03 17:33:30 +00:00
Stanislav Shwartsman
67e534832b Remove the reference to the MEM object from the CPU - there is only one and it could be static 2008-04-27 19:49:02 +00:00
Stanislav Shwartsman
892fa99c6f - prefetch hint should be NOP when used in register mode
- #GP when trying to set reserved bits of CR4_HI in 64-bit mode
- #GP when trying to set reserved bits of EFER MSR
- clear upper part of RSI/RDI when executing rep instructions with 32-bit asize
  even if no repeat iterations were executed (because of RCX=0 for example)
- write SYSENTER_EIP_MSR and SYSENTER_ESP_MSR as 64-bit when x86_64 supported
- set MSR_FMASK reset value
- MSR_FMASK should be 32-bit only
- check for fetch permissions when doing ITLB lookup
- #GP when trying to write non-canonical address to MSR_CSTAR or MSR_LSTAR
- correct repeat instructions timing
- mark TSS busy in TR after it is loaded
2008-04-16 16:44:06 +00:00
Stanislav Shwartsman
fea49bb270 Fixed linear address wrap in legacy (not long64) mode 2008-04-07 18:39:17 +00:00
Stanislav Shwartsman
167c7075fb Use the fastcall gcc attribute for all cpu execution functions - this pure "compiler helper" optimization brings an additional 2% speedup to the Bochs code 2008-03-22 21:29:41 +00:00
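For reference, a hedged sketch of how such a calling-convention attribute is typically applied; the CPU_FASTCALL macro and the handler declaration below are illustrative only, not the real Bochs definitions. GCC's fastcall convention (32-bit x86) passes the first two integer arguments in ECX and EDX instead of on the stack, which saves memory traffic on every handler call.

```cpp
// Illustrative use of GCC's fastcall calling convention on 32-bit x86;
// the macro name is made up for this sketch, not the Bochs macro.
#if defined(__GNUC__) && defined(__i386__)
  #define CPU_FASTCALL __attribute__((fastcall))  // args in ECX/EDX, not stack
#else
  #define CPU_FASTCALL                            // no-op on other targets
#endif

struct bxInstruction_c;   // forward declaration, as in a CPU header

// An execution handler takes the decoded instruction pointer in a register,
// avoiding a stack store/load on every dispatched instruction.
void CPU_FASTCALL ADD_EdGd(bxInstruction_c* i);
```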
Stanislav Shwartsman
eea5e6eac5 Simplify RepeatSpeedups optimizations - restrict them only to segments which already passed access/limit validation and avoid a mass of heavy checks during the repeat speedup itself. 2008-03-21 20:35:46 +00:00
Stanislav Shwartsman
063d896226 Optimization in 16-bit resolve functions
Fixes for hosts which can't support misaligned memory access
2008-02-07 20:43:13 +00:00
Stanislav Shwartsman
a2897933a3 white space cleanup 2008-02-02 21:46:54 +00:00
Stanislav Shwartsman
37fbb82baa Cleanups. Move bxInstruction_c definition to separate file instr.h 2008-01-29 17:13:10 +00:00
Stanislav Shwartsman
79fc57dec8 Fixed more VCPP2008 warnings 2007-12-26 23:07:44 +00:00
Stanislav Shwartsman
38fb3d78be small cleanup in repeat code 2007-12-23 18:09:34 +00:00
Stanislav Shwartsman
5d4e32b8da Avoid pointer params for every read_virtual_* except 16-byte SSE and 10-byte x87 reads 2007-12-20 20:58:38 +00:00
Stanislav Shwartsman
b516589e4e Changes in write_virtual_* and pop_* functions -> avoid passing parameters by pointer 2007-12-20 18:29:42 +00:00
Stanislav Shwartsman
fe2e0525da More optimization for string instructions 2007-12-17 19:52:01 +00:00
Stanislav Shwartsman
0af87ab63b Split string instructions according to the address size - simpler and faster 2007-12-17 18:48:26 +00:00
Stanislav Shwartsman
46366b5064 Speedup simulation by eliminating CPL==3 check from read/write_virtual* functions 2007-12-16 21:03:46 +00:00
Stanislav Shwartsman
1af7010e50 Optimized memory access for 64-bit mode
Starting convergence to the new lazy flags scheme by Darek Mihocka (www.emulators.com). The new flags code is still being validated and perfected, but I try to minimize the diff between the 2 versions
2007-11-20 17:15:33 +00:00
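A minimal sketch of what a lazy flags scheme looks like, under simplified assumptions (only ADD and three flags; all names are illustrative and the real Bochs/Mihocka scheme differs in detail): arithmetic handlers only record the result and enough context, and each flag is derived on demand when something actually reads it.

```cpp
// Hypothetical lazy-flags sketch, not the actual Bochs code.
#include <cstdint>

struct LazyFlags {
    uint32_t result;    // result of the last flag-setting operation
    uint32_t op1, op2;  // operands, kept for carry/overflow computation
};

static LazyFlags lf;

// ADD handler: do the arithmetic, remember the inputs, don't touch EFLAGS.
uint32_t add32(uint32_t a, uint32_t b) {
    lf.op1 = a; lf.op2 = b;
    lf.result = a + b;
    return lf.result;
}

// Flags are materialized only when read (OF/AF/PF omitted for brevity).
bool get_ZF() { return lf.result == 0; }
bool get_SF() { return (lf.result >> 31) & 1; }
bool get_CF() { return lf.result < lf.op1; }  // unsigned carry out of ADD
```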
Stanislav Shwartsman
83f6eb6945 Changes copyrights for the files I wrote :)
Also split EqId G1 group for x86-64
2007-11-17 23:28:33 +00:00
Stanislav Shwartsman
a83b8ae843 Slight speed improvement in string functions 2007-10-29 15:39:18 +00:00
Stanislav Shwartsman
fbcdfa49c2 Cleanup 2007-10-10 22:20:32 +00:00
Stanislav Shwartsman
deb79e9675 [Bochs-developers] [PATCH] avoid RCX without BX_SUPPORT_X86_64 2007-09-27 16:11:32 +00:00
Stanislav Shwartsman
e812f81e7b Fixes in zeroing upper ECX 2007-09-25 16:11:32 +00:00
Stanislav Shwartsman
38d1f39c77 Converted CR0 bits to one register similar to CR4 - a bit slower but helps with the implementation of other features 2007-07-09 15:16:14 +00:00
Stanislav Shwartsman
5c21f7821f Speed up simulation by 3 to 5% by eliminating several checks from the cpu loop.
The checks were related to repeat instructions - handle them differently
2007-01-05 13:40:47 +00:00
Stanislav Shwartsman
a4129e5341 Handle NULL_SEG_REG (no segment override) case in fetchdecode.cc 2006-05-24 20:57:37 +00:00
Stanislav Shwartsman
d69eba6c07 Split in/out instructions based on operand size 2006-05-07 18:27:36 +00:00
Stanislav Shwartsman
b8be848943 Use access_type param in getHostMemAddr, less efficient but no copy-paste at least 2006-03-26 19:39:37 +00:00
Stanislav Shwartsman
7b6c2587a9 Now devices can be compiled separately from the CPU
Everything that required a cpu.h include now has it explicitly, and there are a lot of files not dependent on the CPU at all which will compile a lot faster now ...
2006-03-06 22:03:16 +00:00
Stanislav Shwartsman
f096a80716 Fix code duplication for check_cs descriptor
The function checks that:
 - the segment is an executable code segment
 - conforming/non-conforming segment privilege checks pass
 - the segment is present
2005-08-01 21:40:17 +00:00
Stanislav Shwartsman
01d8a97613 Try to clean up/rewrite the RepeatSpeedups optimization
This code doesn't add new speedups but makes them much easier to add
After some validation it should be no problem to enable the repeat speedups optimization for REP MOVSx with any address size. And REP STOSx too.
2005-07-04 17:44:08 +00:00
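A hedged sketch of the repeat-speedup idea, assuming a forward (DF=0), non-overlapping copy inside memory that has already passed access and limit validation; the function name and parameters are invented for illustration. Instead of re-entering the CPU loop for every element, a whole run of REP MOVS iterations collapses into one block copy, and the caller adjusts RSI/RDI/RCX once.

```cpp
// Hypothetical fast path for REP MOVSx; simplified, not the Bochs code.
// Assumes DF=0 and non-overlapping, already-validated source/destination.
#include <cstdint>
#include <cstring>

// Copies up to 'count' elements of 'elem_size' bytes; returns how many were
// done in the fast path (the caller falls back to the slow, one-iteration
// path for the remainder, e.g. when a page boundary is reached).
uint32_t rep_movs_fast(uint8_t* dst, const uint8_t* src,
                       uint32_t count, uint32_t elem_size,
                       uint32_t validated_bytes)
{
    uint32_t max_elems = validated_bytes / elem_size;
    uint32_t n = (count < max_elems) ? count : max_elems;
    memcpy(dst, src, (size_t)n * elem_size);  // one block copy for n iterations
    return n;                                 // caller advances RSI/RDI, drops RCX by n
}
```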
Stanislav Shwartsman
ce8f1ade07 Some not really significant speedups 2005-06-21 17:01:21 +00:00
Stanislav Shwartsman
b25088bf2f Merge patch [1153327] ignore segment bases in x86-64 by Avi Kivity 2005-02-28 18:56:05 +00:00
Stanislav Shwartsman
41578589c1 Merge two patches by Avi Kivity (avik) 2005-02-22 18:24:19 +00:00