Stanislav Shwartsman
|
7254ea36a1
|
copyright fixes + small optimization
|
2009-10-14 20:45:29 +00:00 |
|
Stanislav Shwartsman
|
8ae803f930
|
MASKMOVDQU bug fix
|
2009-08-21 13:44:51 +00:00 |
|
Stanislav Shwartsman
|
08de514d9c
|
code cleanup for future optimization
|
2009-03-10 21:43:11 +00:00 |
|
Stanislav Shwartsman
|
9929e6ed78
|
- updated FSF address
|
2009-01-16 18:18:59 +00:00 |
|
Stanislav Shwartsman
|
489447ae57
|
Fixed FPU2MMX state transition - should be done only after all memory faults are already checked
|
2008-10-08 10:51:38 +00:00 |
|
Stanislav Shwartsman
|
8107e7f084
|
Fixed restore of FCS field
|
2008-08-16 12:19:30 +00:00 |
|
Stanislav Shwartsman
|
5dd02b26e3
|
Make the RmAddr calculation even more efficient - a good optimizing compiler can now generate better code than before
|
2008-08-08 09:22:49 +00:00 |
|
Stanislav Shwartsman
|
709d74728d
|
Call the #UD exception directly instead of the UndefinedOpcode function - for future use
|
2008-07-13 15:35:10 +00:00 |
|
Stanislav Shwartsman
|
a0e66d0e4c
|
fixed variable name
|
2008-06-14 16:55:45 +00:00 |
|
Stanislav Shwartsman
|
92568f7525
|
Faster 32-bit emulation with 64-bit mode enabled.
~10% speedup by optimizing 32-bit memory access
|
2008-06-12 19:14:40 +00:00 |
|
Stanislav Shwartsman
|
ec1ff39a5f
|
Split memory access methods for 32-bit and 64-bit code.
The 64-bit code got a >10% speedup; the 32-bit code also got about 2% from the laddr calculation optimization
|
2008-05-10 18:10:53 +00:00 |
|
Stanislav Shwartsman
|
3634c6f892
|
Compress FPU tag word
|
2008-05-10 13:34:47 +00:00 |
|
Stanislav Shwartsman
|
4f3f8608f7
|
Fixed MASKMOVDQU instruction decoding
|
2008-04-16 05:41:43 +00:00 |
|
Stanislav Shwartsman
|
77d91d59aa
|
Inline prepare_SSE and prepare_XSAVE functions
|
2008-04-06 18:00:20 +00:00 |
|
Stanislav Shwartsman
|
420f30816d
|
Inline integer saturation code - speedup for MMX/SSE integer operations
|
2008-04-06 13:56:22 +00:00 |
|
Stanislav Shwartsman
|
e91409704f
|
Convert EFER to val32 register, similar to other control registers
|
2008-03-31 20:56:27 +00:00 |
|
Stanislav Shwartsman
|
94f30955be
|
Fixed compilation error
|
2008-03-25 16:46:39 +00:00 |
|
Stanislav Shwartsman
|
167c7075fb
|
Use the fastcall gcc attribute for all cpu execution functions - this pure "compiler helper" optimization brings an additional 2% speedup to the Bochs code (see the sketch below)
|
2008-03-22 21:29:41 +00:00 |
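A minimal sketch of the kind of calling-convention hint this refers to, assuming GCC on 32-bit x86; BX_FASTCALL and the handler name are placeholders, not the actual Bochs macros:

```cpp
// Sketch only - not the real Bochs code. On 32-bit x86, GCC's fastcall
// attribute passes the first integer arguments in ECX/EDX instead of on
// the stack, saving a store/load per call in a hot dispatch loop.
#if defined(__GNUC__) && defined(__i386__)
  #define BX_FASTCALL __attribute__((fastcall))   // hypothetical wrapper macro
#else
  #define BX_FASTCALL                             // no-op on other compilers
#endif

class bxInstruction_c;  // opaque forward declaration for the sketch

// An execution handler declared with the attribute (illustrative signature).
void BX_FASTCALL ADD_EdGd(bxInstruction_c *i);
```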
|
Stanislav Shwartsman
|
7e490699d4
|
Remove hooks for the unimplemented SSE4A from the Bochs code.
|
2008-03-21 20:04:42 +00:00 |
|
Stanislav Shwartsman
|
933bf018a8
|
Fixed hang in sse_move.cc
|
2008-02-13 23:12:35 +00:00 |
|
Stanislav Shwartsman
|
ae86ad28a0
|
Finalize XSAVE/XRSTOR instructions
|
2008-02-13 22:25:24 +00:00 |
|
Stanislav Shwartsman
|
b929a2b2b8
|
Fixed minor issues - compilation and more
|
2008-02-13 17:06:44 +00:00 |
|
Stanislav Shwartsman
|
457152334e
|
Step 2 of the XSAVE implementation
|
2008-02-13 16:45:21 +00:00 |
|
Stanislav Shwartsman
|
a2897933a3
|
white space cleanup
|
2008-02-02 21:46:54 +00:00 |
|
Stanislav Shwartsman
|
37fbb82baa
|
Cleanups. Move bxInstruction_c definition to separate file instr.h
|
2008-01-29 17:13:10 +00:00 |
|
Stanislav Shwartsman
|
d9984bb3a1
|
Eliminate the BxResolve call from the heart of the cpu loop and move it into the instructions that really require this calculation. Yes, it bloats the code of EVERY CPU method, but it gives a >15% speedup!
|
2008-01-10 19:37:56 +00:00 |
|
Stanislav Shwartsman
|
79fc57dec8
|
Fixed more VCPP2008 warnings
|
2007-12-26 23:07:44 +00:00 |
|
Stanislav Shwartsman
|
e4420d52c6
|
Implement MASKMOVDQU as RMW for efficiency (and correctness)
|
2007-12-23 17:39:10 +00:00 |
|
Stanislav Shwartsman
|
5d4e32b8da
|
Avoid pointer params for every read_virtual_* except 16-byte SSE and 10-byte x87 reads
|
2007-12-20 20:58:38 +00:00 |
|
Stanislav Shwartsman
|
b516589e4e
|
Changes in write_virtual_* and pop_* functions -> avoid passing parameters by pointer
|
2007-12-20 18:29:42 +00:00 |
|
Stanislav Shwartsman
|
c9932e97eb
|
Fixes in resolve.cc -> reduce the number of resolve functions even more
|
2007-12-18 21:41:44 +00:00 |
|
Stanislav Shwartsman
|
1e843cb462
|
Decode SSE4A
Rework immediate byte decoding to make it faster
|
2007-12-15 17:42:24 +00:00 |
|
Stanislav Shwartsman
|
7ca78b88e9
|
configure/compile changes + small optimizations
|
2007-12-01 16:45:17 +00:00 |
|
Stanislav Shwartsman
|
8cfd17202a
|
some simple SSE code optimizations
|
2007-11-27 22:12:45 +00:00 |
|
Stanislav Shwartsman
|
35c3791bb7
|
Correctly implement EFER.FFXSR feature
|
2007-11-25 20:52:40 +00:00 |
|
Stanislav Shwartsman
|
83f6eb6945
|
Change copyrights for the files I wrote :)
Also split the EqId G1 group for x86-64
|
2007-11-17 23:28:33 +00:00 |
|
Stanislav Shwartsman
|
d9e58bd598
|
split11b at the opcode table level - split almost every splittable instruction
to be continued
|
2007-11-17 12:44:10 +00:00 |
|
Stanislav Shwartsman
|
28a5c6741c
|
Fix SSE4 MOVNTDQA instruction - memory access must always be aligned (see the sketch below)
|
2007-10-20 17:03:33 +00:00 |
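For context, a hedged sketch of the alignment rule this fix enforces; the names below are illustrative, not the Bochs API:

```cpp
// Sketch only: MOVNTDQA loads a 16-byte operand and architecturally
// requires the memory address to be 16-byte aligned; a misaligned access
// must fault (#GP) rather than read. This helper just expresses the check.
#include <cstdint>

bool is_16byte_aligned(uint64_t laddr)
{
  return (laddr & 0x0F) == 0;   // low four address bits must be zero
}
```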
|
Stanislav Shwartsman
|
f6ed95785f
|
Added a cpu state param - for future use and for debugger info
Started moving the debugger to the info bx_param interface -> the info sse and info mmx commands were modified
|
2007-10-11 18:12:00 +00:00 |
|
Stanislav Shwartsman
|
016660698e
|
just code cleanup, preparation for future
|
2007-08-31 18:09:34 +00:00 |
|
Stanislav Shwartsman
|
895891b673
|
Implemented #AC check under configure option
Fixes in misaligned SSE support
|
2007-07-31 20:25:52 +00:00 |
|
Stanislav Shwartsman
|
38d1f39c77
|
Converted CR0 bits to one register, similar to CR4 - a bit slower but helps with implementing other features
|
2007-07-09 15:16:14 +00:00 |
|
Stanislav Shwartsman
|
5189cfbf10
|
SSE4 support
|
2007-04-19 16:12:21 +00:00 |
|
Stanislav Shwartsman
|
26f08fdb2c
|
Change my e-mail to #SF one
|
2007-03-23 21:27:13 +00:00 |
|
Stanislav Shwartsman
|
1ec33ec518
|
Correctly #UD on aliased instructions when no SSE2 is configured
|
2007-03-22 22:51:41 +00:00 |
|
Stanislav Shwartsman
|
c24627c00f
|
Implemented CLFLUSH instruction
Set of minor fixes for correctness
|
2007-01-28 21:27:31 +00:00 |
|
Stanislav Shwartsman
|
8221fa6838
|
- Fixed zeroing of the upper 32-bit part of GPRs in x86-64 mode
- CMOV_GdEd should zero the upper 32-bit part of the GPR register even if the
'cmov' condition was false! (see the sketch below)
|
2007-01-26 22:12:05 +00:00 |
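A short sketch of the rule behind the second fix, assuming a 64-bit GPR held in a uint64_t (illustrative only, not the Bochs handler): in 64-bit mode a 32-bit register write always clears bits 63:32, so the destination must be written back even when the CMOV condition is false.

```cpp
#include <cstdint>

// Sketch only: the 32-bit result (new value if the condition holds,
// otherwise the old low dword) is written back unconditionally, and the
// write zero-extends into the upper half - exactly what CMOV with a
// false condition must still do in x86-64.
void cmov_gd_ed_sketch(uint64_t &dst, uint32_t src, bool condition)
{
  uint32_t result = condition ? src : static_cast<uint32_t>(dst);
  dst = result;   // implicit zero-extension clears bits 63:32
}
```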
|
Stanislav Shwartsman
|
fdac9efa9b
|
Fixed a ton of code duplication.
Do not save/restore XMM8-XMM15 when not in 64-bit mode
|
2006-08-31 18:18:17 +00:00 |
|
Stanislav Shwartsman
|
bb1116e569
|
Fixed bx_cpu_c::MOVD_EdVd() always raising #UD,
as reported on the mailing list
|
2006-04-27 06:09:56 +00:00 |
|
Stanislav Shwartsman
|
cc29f3d94b
|
Remove duplicate ';'
|
2006-04-23 16:03:46 +00:00 |
|