/////////////////////////////////////////////////////////////////////////
// $Id: ctrl_xfer32.cc,v 1.29 2004-05-10 21:05:47 sshwarts Exp $
/////////////////////////////////////////////////////////////////////////
//
//  Copyright (C) 2001  MandrakeSoft S.A.
//
//    MandrakeSoft S.A.
//    43, rue d'Aboukir
//    75002 Paris - France
//    http://www.linux-mandrake.com/
//    http://www.mandrakesoft.com/
//
//  This library is free software; you can redistribute it and/or
//  modify it under the terms of the GNU Lesser General Public
//  License as published by the Free Software Foundation; either
//  version 2 of the License, or (at your option) any later version.
//
//  This library is distributed in the hope that it will be useful,
//  but WITHOUT ANY WARRANTY; without even the implied warranty of
//  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
//  Lesser General Public License for more details.
//
//  You should have received a copy of the GNU Lesser General Public
//  License along with this library; if not, write to the Free Software
//  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

#define NEED_CPU_REG_SHORTCUTS 1
#include "bochs.h"
#define LOG_THIS BX_CPU_THIS_PTR
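// RETnear32_Iw: near RET with a 16-bit immediate (32-bit operand size).
// Pops the return EIP from the stack, then releases imm16 additional bytes
// of stack space.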
  void
BX_CPU_C::RETnear32_Iw(bxInstruction_c *i)
{
  BailBigRSP("RETnear32_Iw");
  Bit16u imm16;
  Bit32u temp_ESP;
  Bit32u return_EIP;

  //invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_ret;
#endif

  if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b) /* 32bit stack */
    temp_ESP = ESP;
  else
    temp_ESP = SP;

  imm16 = i->Iw();

  if (protected_mode()) {
    if ( !can_pop(4) ) {
      BX_PANIC(("retnear_iw: can't pop EIP"));
      /* ??? #SS(0) -or #GP(0) */
    }

    access_linear(BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.base + temp_ESP + 0,
      4, CPL==3, BX_READ, &return_EIP);

    if (protected_mode() &&
        (return_EIP > BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled) ) {
      BX_DEBUG(("retnear_iw: EIP > limit"));
      exception(BX_GP_EXCEPTION, 0, 0);
    }

    /* Pentium book says imm16 is number of words ??? */
    if ( !can_pop(4 + imm16) ) {
      BX_PANIC(("retnear_iw: can't release bytes from stack"));
      /* #GP(0) -or #SS(0) ??? */
    }

    EIP = return_EIP;
    if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b) /* 32bit stack */
      ESP += 4 + imm16; /* ??? should it be 2*imm16 ? */
    else
      SP += 4 + imm16;
  }
  else {
    pop_32(&return_EIP);
    EIP = return_EIP;
    if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b) /* 32bit stack */
      ESP += imm16; /* ??? should it be 2*imm16 ? */
    else
      SP += imm16;
  }

  BX_INSTR_UCNEAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_RET, EIP);
}
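// RETnear32: near RET without an immediate. Pops the return EIP from the
// stack.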
  void
BX_CPU_C::RETnear32(bxInstruction_c *i)
{
  BailBigRSP("RETnear32");
  Bit32u temp_ESP;
  Bit32u return_EIP;

  //invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_ret;
#endif

  if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b) /* 32bit stack */
    temp_ESP = ESP;
  else
    temp_ESP = SP;

  if (protected_mode()) {
    if ( !can_pop(4) ) {
      BX_PANIC(("retnear: can't pop EIP"));
      /* ??? #SS(0) -or #GP(0) */
    }

    access_linear(BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.base + temp_ESP + 0,
      4, CPL==3, BX_READ, &return_EIP);

    if ( return_EIP > BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled ) {
      BX_PANIC(("retnear: EIP > limit"));
      //exception(BX_GP_EXCEPTION, 0, 0);
    }
    EIP = return_EIP;
    if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b) /* 32bit stack */
      ESP += 4;
    else
      SP += 4;
  }
  else {
    pop_32(&return_EIP);
    EIP = return_EIP;
  }

  BX_INSTR_UCNEAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_RET, EIP);
}
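// RETfar32_Iw: far RET with a 16-bit immediate. Pops EIP and CS, then
// releases imm16 additional bytes of stack; protected-mode returns are
// handled by return_protected().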
  void
BX_CPU_C::RETfar32_Iw(bxInstruction_c *i)
{
  BailBigRSP("RETfar32_Iw");
  Bit32u eip, ecs_raw;
  Bit16s imm16;

  invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_ret;
#endif
  /* ??? is imm16, number of bytes/words depending on operandsize ? */
  imm16 = i->Iw();

#if BX_CPU_LEVEL >= 2
  if (protected_mode()) {
    BX_CPU_THIS_PTR return_protected(i, imm16);
    goto done;
  }
#endif

  pop_32(&eip);
  pop_32(&ecs_raw);
  EIP = eip;
  load_seg_reg(&BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS], (Bit16u) ecs_raw);
  if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b)
    ESP += imm16;
  else
    SP += imm16;

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_RET,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}
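// RETfar32: far RET without an immediate. Pops EIP and CS; protected-mode
// returns are handled by return_protected().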
  void
BX_CPU_C::RETfar32(bxInstruction_c *i)
{
  BailBigRSP("RETfar32");
  Bit32u eip, ecs_raw;

  invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_ret;
#endif

#if BX_CPU_LEVEL >= 2
  if ( protected_mode() ) {
    BX_CPU_THIS_PTR return_protected(i, 0);
    goto done;
  }
#endif

  pop_32(&eip);
  pop_32(&ecs_raw); /* 32bit pop, MSW discarded */
  EIP = eip;
  load_seg_reg(&BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS], (Bit16u) ecs_raw);

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_RET,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}
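// CALL_Ad: near relative CALL with a 32-bit displacement. Pushes the EIP of
// the next instruction and transfers to EIP + disp32.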
  void
BX_CPU_C::CALL_Ad(bxInstruction_c *i)
{
  BailBigRSP("CALL_Ad");
  Bit32u new_EIP;
  Bit32s disp32;

  //invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_call;
#endif

  disp32 = i->Id();

  new_EIP = EIP + disp32;

  if ( protected_mode() ) {
    if ( new_EIP > BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled ) {
      BX_PANIC(("call_av: offset outside of CS limits"));
      exception(BX_GP_EXCEPTION, 0, 0);
    }
  }

  /* push 32 bit EA of next instruction */
  push_32(EIP);
  EIP = new_EIP;

  BX_INSTR_UCNEAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_CALL, EIP);
}
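// CALL32_Ap: far absolute CALL with an immediate selector:offset pair.
// The real-mode path pushes CS and EIP and loads the new CS:EIP;
// protected-mode calls go through call_protected().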
  void
BX_CPU_C::CALL32_Ap(bxInstruction_c *i)
{
  BailBigRSP("CALL32_Ap");
  Bit16u cs_raw;
  Bit32u disp32;

  invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_call;
#endif

  disp32 = i->Id();
  cs_raw = i->Iw2();

  if (protected_mode()) {
    BX_CPU_THIS_PTR call_protected(i, cs_raw, disp32);
    goto done;
  }
  push_32(BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value);
  push_32(EIP);
  EIP = disp32;
  load_seg_reg(&BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS], cs_raw);

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_CALL,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}
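// CALL_Ed: near indirect CALL. The target EIP comes from a register or a
// 32-bit memory operand; the return EIP is pushed on the stack.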
  void
BX_CPU_C::CALL_Ed(bxInstruction_c *i)
{
  BailBigRSP("CALL_Ed");
  Bit32u temp_ESP;
  Bit32u op1_32;

  //invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_call;
#endif

  if (BX_CPU_THIS_PTR sregs[BX_SEG_REG_SS].cache.u.segment.d_b)
    temp_ESP = ESP;
  else
    temp_ESP = SP;

  if (i->modC0()) {
    op1_32 = BX_READ_32BIT_REG(i->rm());
  }
  else {
    read_virtual_dword(i->seg(), RMAddr(i), &op1_32);
  }

  if (protected_mode()) {
    if (op1_32 > BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled) {
      exception(BX_GP_EXCEPTION, 0, 0);
    }
  }

  push_32(EIP);
  EIP = op1_32;

  BX_INSTR_UCNEAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_CALL, EIP);
}
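// CALL32_Ep: far indirect CALL through a 48-bit memory pointer (32-bit
// offset followed by a 16-bit selector). A register operand is illegal and
// raises #UD.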
  void
BX_CPU_C::CALL32_Ep(bxInstruction_c *i)
{
  BailBigRSP("CALL32_Ep");
  Bit16u cs_raw;
  Bit32u op1_32;

  invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_call;
#endif

  /* op1_32 is a register or memory reference */
  if (i->modC0()) {
    BX_INFO(("CALL_Ep: op1 is a register"));
    exception(BX_UD_EXCEPTION, 0, 0);
  }

  /* pointer, segment address pair */
  read_virtual_dword(i->seg(), RMAddr(i), &op1_32);
  read_virtual_word(i->seg(), RMAddr(i)+4, &cs_raw);

  if ( protected_mode() ) {
    BX_CPU_THIS_PTR call_protected(i, cs_raw, op1_32);
    goto done;
  }

  push_32(BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value);
  push_32(EIP);

  EIP = op1_32;
  load_seg_reg(&BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS], cs_raw);

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_CALL,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}
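// JMP_Jd: near relative JMP with a 32-bit displacement.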
  void
BX_CPU_C::JMP_Jd(bxInstruction_c *i)
{
  BailBigRSP("JMP_Jd");
  Bit32u new_EIP;

  //invalidate_prefetch_q();

  new_EIP = EIP + (Bit32s) i->Id();

#if BX_CPU_LEVEL >= 2
  if (protected_mode()) {
    if ( new_EIP > BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled ) {
      BX_PANIC(("jmp_jv: offset outside of CS limits"));
      exception(BX_GP_EXCEPTION, 0, 0);
    }
  }
#endif

  EIP = new_EIP;
  BX_INSTR_UCNEAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_JMP, new_EIP);
}
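// JCC_Jd: conditional near jump with a 32-bit displacement. The condition
// is decoded from the low nibble of the opcode.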
  void
BX_CPU_C::JCC_Jd(bxInstruction_c *i)
{
  bx_bool condition;

  switch (i->b1() & 0x0f) {
    case 0x00: /* JO */ condition = get_OF(); break;
    case 0x01: /* JNO */ condition = !get_OF(); break;
    case 0x02: /* JB */ condition = get_CF(); break;
    case 0x03: /* JNB */ condition = !get_CF(); break;
    case 0x04: /* JZ */ condition = get_ZF(); break;
    case 0x05: /* JNZ */ condition = !get_ZF(); break;
    case 0x06: /* JBE */ condition = get_CF() || get_ZF(); break;
    case 0x07: /* JNBE */ condition = !get_CF() && !get_ZF(); break;
    case 0x08: /* JS */ condition = get_SF(); break;
    case 0x09: /* JNS */ condition = !get_SF(); break;
    case 0x0A: /* JP */ condition = get_PF(); break;
    case 0x0B: /* JNP */ condition = !get_PF(); break;
    case 0x0C: /* JL */ condition = getB_SF() != getB_OF(); break;
    case 0x0D: /* JNL */ condition = getB_SF() == getB_OF(); break;
    case 0x0E: /* JLE */ condition = get_ZF() || (getB_SF() != getB_OF());
      break;
    case 0x0F: /* JNLE */ condition = (getB_SF() == getB_OF()) &&
                          !get_ZF();
      break;
    default:
      condition = 0; // For compiler...all targets should set condition.
      break;
  }

  if (condition) {
    BailBigRSP("JCC_Jd");
    Bit32u new_EIP;

    new_EIP = EIP + (Bit32s) i->Id();
#if BX_CPU_LEVEL >= 2
    if (protected_mode()) {
      if ( new_EIP >
           BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled )
      {
        BX_PANIC(("jo_routine: offset outside of CS limits"));
        exception(BX_GP_EXCEPTION, 0, 0);
      }
    }
#endif
    EIP = new_EIP;
    BX_INSTR_CNEAR_BRANCH_TAKEN(BX_CPU_ID, new_EIP);
    revalidate_prefetch_q();
  }
#if BX_INSTRUMENTATION
  else {
    BX_INSTR_CNEAR_BRANCH_NOT_TAKEN(BX_CPU_ID);
  }
#endif
}
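// JZ_Jd: jump if ZF is set; same logic as the JZ case of JCC_Jd.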
  void
BX_CPU_C::JZ_Jd(bxInstruction_c *i)
{
  BailBigRSP("JZ_Jd");
  if (get_ZF()) {
    Bit32u new_EIP;

    new_EIP = EIP + (Bit32s) i->Id();
#if BX_CPU_LEVEL >= 2
    if (protected_mode()) {
      if ( new_EIP >
           BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled ) {
        BX_PANIC(("jo_routine: offset outside of CS limits"));
        exception(BX_GP_EXCEPTION, 0, 0);
      }
    }
#endif
    EIP = new_EIP;
    BX_INSTR_CNEAR_BRANCH_TAKEN(BX_CPU_ID, new_EIP);
    revalidate_prefetch_q();
  }
#if BX_INSTRUMENTATION
  else {
    BX_INSTR_CNEAR_BRANCH_NOT_TAKEN(BX_CPU_ID);
  }
#endif
}
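// JNZ_Jd: jump if ZF is clear; same logic as the JNZ case of JCC_Jd.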
  void
BX_CPU_C::JNZ_Jd(bxInstruction_c *i)
{
  BailBigRSP("JNZ_Jd");
  if (!get_ZF()) {
    Bit32u new_EIP;

    new_EIP = EIP + (Bit32s) i->Id();
#if BX_CPU_LEVEL >= 2
    if (protected_mode()) {
      if ( new_EIP >
           BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled ) {
        BX_PANIC(("jo_routine: offset outside of CS limits"));
        exception(BX_GP_EXCEPTION, 0, 0);
      }
    }
#endif
    EIP = new_EIP;
    BX_INSTR_CNEAR_BRANCH_TAKEN(BX_CPU_ID, new_EIP);
    revalidate_prefetch_q();
  }
#if BX_INSTRUMENTATION
  else {
    BX_INSTR_CNEAR_BRANCH_NOT_TAKEN(BX_CPU_ID);
  }
#endif
}
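// JMP_Ap: far absolute JMP with an immediate selector:offset pair;
// protected-mode jumps go through jump_protected().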
  void
BX_CPU_C::JMP_Ap(bxInstruction_c *i)
{
  BailBigRSP("JMP_Ap");
  Bit32u disp32;
  Bit16u cs_raw;

  invalidate_prefetch_q();

  if (i->os32L()) {
    disp32 = i->Id();
  }
  else {
    disp32 = i->Iw();
  }
  cs_raw = i->Iw2();

#if BX_CPU_LEVEL >= 2
  if (protected_mode()) {
    BX_CPU_THIS_PTR jump_protected(i, cs_raw, disp32);
    goto done;
  }
#endif

  load_seg_reg(&BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS], cs_raw);
  EIP = disp32;

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_JMP,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}
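// JMP_Ed: near indirect JMP. The target EIP comes from a register or a
// 32-bit memory operand.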
  void
BX_CPU_C::JMP_Ed(bxInstruction_c *i)
{
  BailBigRSP("JMP_Ed");
  Bit32u new_EIP;
  Bit32u op1_32;

  //invalidate_prefetch_q();

  /* op1_32 is a register or memory reference */
  if (i->modC0()) {
    op1_32 = BX_READ_32BIT_REG(i->rm());
  }
  else {
    /* pointer, segment address pair */
    read_virtual_dword(i->seg(), RMAddr(i), &op1_32);
  }

  new_EIP = op1_32;

#if BX_CPU_LEVEL >= 2
  if (protected_mode()) {
    if (new_EIP > BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].cache.u.segment.limit_scaled) {
      BX_PANIC(("jmp_ev: IP out of CS limits!"));
      exception(BX_GP_EXCEPTION, 0, 0);
    }
  }
#endif

  EIP = new_EIP;

  BX_INSTR_UCNEAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_JMP, new_EIP);
}

/* Far indirect jump */
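
// JMP32_Ep: the operand is a 48-bit far pointer (m16:32) in memory, laid out as
// the 32-bit offset followed by the 16-bit code-segment selector:
//
//   RMAddr(i)+0 .. +3 : new EIP (dword)
//   RMAddr(i)+4 .. +5 : new CS selector (word)
//
// A register operand is invalid for this form and raises #UD.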
void
BX_CPU_C::JMP32_Ep(bxInstruction_c *i)
{
  BailBigRSP("JMP32_Ep");

  Bit16u cs_raw;
  Bit32u op1_32;

  invalidate_prefetch_q();

  /* op1_32 is a register or memory reference */
  if (i->modC0()) {
    /* far indirect must specify a memory address */
    BX_INFO(("JMP_Ep(): op1 is a register"));
    exception(BX_UD_EXCEPTION, 0, 0);
  }

  /* pointer, segment address pair */
  read_virtual_dword(i->seg(), RMAddr(i), &op1_32);
  read_virtual_word(i->seg(), RMAddr(i)+4, &cs_raw);

  if ( protected_mode() ) {
    BX_CPU_THIS_PTR jump_protected(i, cs_raw, op1_32);
    goto done;
  }

  EIP = op1_32;
  load_seg_reg(&BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS], cs_raw);

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_JMP,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}
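
// IRET32: return from interrupt with 32-bit operand size. The work is dispatched
// by CPU mode: virtual-8086 mode uses stack_return_from_v86(), protected mode
// (CR0.PE set) uses iret_protected(), and real mode uses iret32_real(); only an
// unexpected fall-through reaches the BX_PANIC below.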
void
BX_CPU_C::IRET32(bxInstruction_c *i)
{
  BailBigRSP("IRET32");

  invalidate_prefetch_q();

#if BX_DEBUGGER
  BX_CPU_THIS_PTR show_flag |= Flag_iret;
  BX_CPU_THIS_PTR show_eip = EIP;
#endif

  if (v8086_mode()) {
    // IOPL check in stack_return_from_v86()
    stack_return_from_v86(i);
    goto done;
  }

#if BX_CPU_LEVEL >= 2
  if (BX_CPU_THIS_PTR cr0.pe) {
    iret_protected(i);
    goto done;
  }
#endif

#if BX_CPU_LEVEL >= 5
  /* this feature is described in the Pentium docs */
  if (iret32_real(i))
    goto done;
#endif

  BX_ERROR(("IRET32 may not be implemented right."));
  BX_PANIC(("Please report that you have found a test case for BX_CPU_C::IRET32."));

done:
  BX_INSTR_FAR_BRANCH(BX_CPU_ID, BX_INSTR_IS_IRET,
                      BX_CPU_THIS_PTR sregs[BX_SEG_REG_CS].selector.value, EIP);
}