Disallowed cases are now handled in devices.cc and cause a BX_ERROR, as sketched below.
- io_len mask fixed and unnecessary io_len checks removed in
* devices.cc
* extfpuirq.cc
* gameport.cc
* ne2k.cc
* pit.cc
* pit_wrap.cc (I/O register function calls replaced by DEV_* macro calls)
- TODO: implement this in all other devices
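A rough sketch of what the central io_len check in devices.cc amounts to, assuming
a handler table whose mask field holds the allowed access sizes as the bits 1|2|4
(the names here are illustrative, not the exact devices.cc ones):

  Bit32u bx_devices_c::inp(Bit16u addr, unsigned io_len)
  {
    io_handler_t *h = &io_read_handler[read_handler_id[addr]];
    // io_len is 1, 2 or 4, so ANDing it against the size mask tells us
    // whether this access width is allowed for the device
    if (h->mask & io_len) {
      return (*h->funct)(h->this_ptr, (Bit32u) addr, io_len);
    }
    // disallowed widths never reach the device model anymore
    BX_ERROR(("read from port 0x%04x with len %u not allowed", addr, io_len));
    return 0xffffffff;
  }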
* changed all %ll format specifiers to the FMT_LL macro so that
  Microsoft Visual C++ works correctly (it uses %I64); see the sketch below
* missing type conversions added
* cdrom.cc: variable types for win32 fixed
* removed some unused variables in eth_win32.cc and harddrv.cc
* added missing includes in make_cmos_image.c and niclist.c
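The idea behind FMT_LL, roughly (the actual definition lives in the osdep header
and may differ in detail): the macro carries only the length prefix, and the
conversion letter is appended at the call site.

  #ifdef _MSC_VER
  #define FMT_LL "%I64"   /* Microsoft's 64-bit length prefix */
  #else
  #define FMT_LL "%ll"    /* C99 length prefix understood by gcc */
  #endif

  /* before:  BX_INFO(("ticks = %lld", ticks));        */
  /* after :  BX_INFO(("ticks = " FMT_LL "d", ticks)); */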
It's not clear what the correct behavior is in that case (we
clearly don't handle it correctly at the moment), so
simply avoiding it is the easiest thing to do. As such, this
option is ON by default.
- it works only on x86 with gcc2.95+
- uses the GCC function attribute "regparm(n)" to declare that certain
  functions use the register calling convention
- performance improvement is about 6%
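A minimal sketch of the mechanism; the attribute itself is standard gcc, while the
wrapper macro name below is an assumption rather than the exact one used:

  #if defined(__i386__) && defined(__GNUC__)
  #define BX_CPP_AttrRegparmN(n) __attribute__((regparm(n)))
  #else
  #define BX_CPP_AttrRegparmN(n) /* empty on other compilers/platforms */
  #endif

  /* the first two arguments travel in registers instead of on the stack */
  Bit32u BX_CPP_AttrRegparmN(2) fetch_word(Bit32u segment, Bit32u offset);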
- moved ne2k presence check to devices.cc
- added special make rules for the ne2k and the low-level network support
- added macro for the debug feature of the ne2k
For a whole lot of configure options, I put #if...#endif around code that
is specific to the option, even in files which are normally only compiled
when the option is on. This allows me to create an MS Visual C++ 6.0
workspace that supports many of these options. The workspace will basically
compile every file all the time, but the code for disabled options will
be compiled out by the #if...#endif.
This may one day lead to simplification of the Makefiles and configure
scripts, but for the moment I'm leaving Makefiles and configure scripts
alone.
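The pattern itself is trivial, shown here for cpu/apic.cc as an example (a sketch,
not the literal file contents):

  #include "bochs.h"
  #if BX_SUPPORT_APIC

  /* ... the entire APIC implementation ... */

  #endif  /* BX_SUPPORT_APIC */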
Affected options:
BX_SUPPORT_APIC (cpu/apic.cc)
BX_SUPPORT_X86_64 (cpu/*64.cc)
BX_DEBUGGER (debug/*)
BX_DISASM (disasm/*)
BX_WITH_nameofgui (gui/*)
BX_SUPPORT_CDROM (iodev/cdrom.cc)
BX_NE2K_SUPPORT (iodev/eth*.cc, iodev/ne2k.cc)
BX_SUPPORT_APIC (iodev/ioapic.cc)
BX_IODEBUG_SUPPORT (iodev/iodebug.cc)
BX_PCI_SUPPORT (iodev/pci*.cc)
BX_SUPPORT_SB16 (iodev/sb*.cc)
Modified Files:
cpu/apic.cc cpu/arith64.cc cpu/ctrl_xfer64.cc
cpu/data_xfer64.cc cpu/fetchdecode64.cc cpu/logical64.cc
cpu/mult64.cc cpu/resolve64.cc cpu/shift64.cc cpu/stack64.cc
debug/Makefile.in debug/crc.cc debug/dbg_main.cc debug/lexer.l
debug/linux.cc debug/parser.c debug/parser.y
disasm/dis_decode.cc disasm/dis_groups.cc gui/amigaos.cc
gui/beos.cc gui/carbon.cc gui/macintosh.cc gui/rfb.cc
gui/sdl.cc gui/term.cc gui/win32.cc gui/wx.cc gui/wxdialog.cc
gui/wxmain.cc gui/x.cc iodev/cdrom.cc iodev/eth.cc
iodev/eth_arpback.cc iodev/eth_fbsd.cc iodev/eth_linux.cc
iodev/eth_null.cc iodev/eth_packetmaker.cc iodev/eth_tap.cc
iodev/eth_tuntap.cc iodev/eth_win32.cc iodev/ioapic.cc
iodev/iodebug.cc iodev/ne2k.cc iodev/pci.cc iodev/pci2isa.cc
iodev/sb16.cc iodev/soundlnx.cc iodev/soundwin.cc
if init() is called a second time. This allows me to restart a
simulation (wxWindows interface only) without restarting the whole
application.
- modified: iodev/*.cc
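A hypothetical sketch of what "safe to call init() twice" can look like in a device
model; the device name, handler, port and state member are made up:

  void bx_mydevice_c::init(void)
  {
    // one-time actions must not be repeated when init() runs again
    static bx_bool registered = 0;
    if (!registered) {
      DEV_register_ioread_handler(this, read_handler, 0x0240, "mydevice", 1);
      registered = 1;
    }
    // per-run state is rebuilt on every call
    BX_MYDEV_THIS s.enabled = 0;
  }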
requesting source can be registered as well. Otherwise, there
is no way to know which source modules are requesting
suspiciously high frequencies.
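A hedged example of the kind of call this refers to; the parameter order of
register_timer() is from memory and may not be exact:

  // the trailing string names the requesting module, so a warning about a
  // suspiciously high frequency can say where it came from
  timer_index = bx_pc_system.register_timer(this, timer_handler, usec,
                                            1 /*continuous*/, 1 /*active*/,
                                            "mydevice");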
Some devices already had one; for some I had to add an empty one.
I did a little cleaning of init() methods to make them more uniform
but generally I left them alone.
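The empty stub added to devices that had none is essentially just this (the device
name is hypothetical):

  void bx_mydevice_c::reset(unsigned type)
  {
    // nothing to do for this device; 'type' distinguishes hard/soft reset
  }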
- I also put these exact diffs into a patch "patch.iodev-add-reset"
in case I want to revert these changes for some reason, for example
if they break an old patch. It should be deleted after a while.
We should really be using #defines or enums to give these constants
proper names! Thanks to Peter Tattam <peter@jazz-1.trumpet.com.au>
for the bug report.
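Purely illustrative (the names and values below are placeholders, not the constants
this entry refers to), the kind of naming meant here:

  enum {
    BX_DEV_REG_STATUS  = 0x00,
    BX_DEV_REG_COMMAND = 0x01
  };

  /* if (reg == BX_DEV_REG_COMMAND) ...   instead of   if (reg == 0x01) ... */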
- new functions raise_irq() and lower_irq()
- all trigger_irq() / untrigger_irq() calls are replaced by the new functions
- REMARK: timer IRQ handling is not correct but it works
- TODO: IOAPIC IRQ handling needs to be changed
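In the device models the change looks roughly like this (the DEV_pic_* macro names
follow the usual Bochs convention; treat them as an assumption):

  /* before:  trigger_irq(irq);   ...   untrigger_irq(irq);  */

  DEV_pic_raise_irq(irq);   /* assert the interrupt line */
  /* ... later, when the device deasserts it ... */
  DEV_pic_lower_irq(irq);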
I compiled Bochs on Linux and installed Linux
in it, but when I ping a machine on my LAN, I get
packet loss, sometimes as much as 70%.
So I read ne2k.cc, the Linux 8390 driver and the 8390 chip
specification. I found that the 8390 command register's START
bit is misused in ne2k.cc. According to the chip
specification, even if START=0, the chip does not stop
working.
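A hedged sketch of the implied fix, not the actual diff: reception should only be
halted by an explicit STOP, not by START being clear (field names follow the
bx_ne2k_c style but are assumptions):

  void bx_ne2k_c::rx_frame(const void *buf, unsigned io_len)
  {
    // before:  if (!BX_NE2K_THIS s.CR.start) return;   // too strict, drops packets
    if (BX_NE2K_THIS s.CR.stop) {
      return;   // only an explicit STOP halts the receiver
    }
    // normal receive path follows
  }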
appeared in the guest OS. Full description:
> After much grovelling through the 8390 docs, I think this is the
> correct answer to the odd-length packet problem I was having with
> the ne2k driver under Linux.
>
> According to the datasheet, the 8390 always accesses its buffer
> memory in word-size chunks if the WTS bit of the DCR is set. So
> it will always send a word to the host bus interface if WTS==1.
> It's up to the host bus interface to deliver the number of
> requested bytes to the host. So disallowing a byte read when the
> WTS bit is set is wrong (IMO) as the bus interface may allow it,
> as the NE2000 appears to.
>
> The patch to ne2k.h bumps the receive buffer memory size to 32K.
> This fixes the "out-of-bounds chipmem read" errors I was getting.
>
> Can someone with an NE2K datasheet verify these changes? They
> jibe with the Linux ne.c driver, anyway.
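Roughly what the two changes amount to, using the usual Bochs ne2k naming (the
exact original lines may differ):

  /* ne2k.h: receive buffer memory bumped to 32K, which is what stops the
     "out-of-bounds chipmem read" errors */
  #define BX_NE2K_MEMSIZ  (32*1024)

  /* ne2k.cc: a 1-byte chipmem read is no longer rejected when DCR.wordsize
     (WTS) is set; the host bus interface may legitimately ask for one byte */
  Bit32u bx_ne2k_c::chipmem_read(Bit32u address, unsigned int io_len)
  {
    if ((io_len == 2) && (address & 0x1))
      BX_PANIC(("unaligned chipmem word read"));
    if ((address >= BX_NE2K_MEMSTART) && (address < BX_NE2K_MEMEND)) {
      if (io_len == 1)
        return BX_NE2K_THIS s.mem[address - BX_NE2K_MEMSTART];
      else
        return (BX_NE2K_THIS s.mem[address - BX_NE2K_MEMSTART] |
                (BX_NE2K_THIS s.mem[address - BX_NE2K_MEMSTART + 1] << 8));
    }
    BX_DEBUG(("out-of-bounds chipmem read, address %04X", address));
    return 0xff;
  }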