Since ICI arguments are used to send addresses in some places, uint32 is
not sufficient on x86_64. addr_t still refers to the same type as uint32
(unsigned long) on other platforms, so this change only really affects
x86_64.
* x86_64 is using the existing *_ia32 boot platforms.
* Special flags are required when compiling the loader to get GCC to produce
32-bit code. This adds a new set of rules for compiling boot code, rather
than reusing the kernel rules, so that the necessary flags are passed.
* Some x86_64 private headers have been stubbed by #include'ing the x86
versions. These will be replaced later.
* gPeripheralBase keeps track of the device peripheral base
before and after mmu_init
* Add ability to disable mmu for troubleshooting
* Remove static FB_BASE; we don't actually know
where the FB is yet (it depends on the firmware used)
* BCM2708 defines no longer assume the 0x20 base address.
We will be throwing away the blob memory mapping
and using our own.
* Use existing blob mapping to turn GPIO led on pre mmu_init
* Remap MMU hardware addresses from 0x7E. We could map each device,
however the kernel will throw away the mappings again anyway. For
now we just map the whole range and use offsets.
* Serial UART no longer works, however at least
we know why now :). The serial driver now needs
to use the mapped address.
* Use U-Boot mmu code as base
* This will be factored out someday into common arch mmu
code when we can read Flattened Device Trees
* Move mmu_init to after serial_init.
This is a temporary change, as we will want serial_init to use
memory-mapped addresses... for debugging.
* introduce a DebugUART baseclass,
* rework 8250 and PL011 implementations from kallisti5 to inherit DebugUART,
* each arch should override the IO methods to access registers.
* on ARM registers are 32bit-aligned.
* U-Boot still works for the verdex target.
* rPi still compiles, needs testing.
* Still some more consolidation needed to allow runtime choice of the UART type (as read from FDT blobs for ex.).
* serial.cpp should probably mostly be made generic as well.
* didn't touch x86 or ppc yet.
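As an aside, a minimal sketch of the layering described above (not the actual
Haiku source): DebugUART holds the base address and clock and exposes
overridable register accessors, while a concrete driver like the PL011 one
only implements the protocol on top of them. Method names and register
offsets are illustrative; the ARM-specific detail is that register offsets
are scaled to 32-bit alignment.

    #include <stdint.h>

    typedef uint32_t uint32;
    typedef uintptr_t addr_t;

    class DebugUART {
    public:
        DebugUART(addr_t base, int64_t clock) : fBase(base), fClock(clock) {}
        virtual ~DebugUART() {}

        virtual void InitPort(uint32 baud) = 0;
        virtual int PutChar(char c) = 0;
        virtual int GetChar(bool wait) = 0;

    protected:
        // Register access; each arch (or driver) overrides these.
        virtual void Out(int reg, uint32 value)
        {
            // ARM assumption from the notes above: registers are 32-bit aligned.
            *(volatile uint32*)(fBase + reg * sizeof(uint32)) = value;
        }

        virtual uint32 In(int reg)
        {
            return *(volatile uint32*)(fBase + reg * sizeof(uint32));
        }

        addr_t fBase;
        int64_t fClock;
    };

    // A PL011 (or 8250) driver then only implements the protocol, not the
    // bus access. Register index 0 stands in for the data register.
    class ArchUARTPL011 : public DebugUART {
    public:
        ArchUARTPL011(addr_t base, int64_t clock) : DebugUART(base, clock) {}
        virtual void InitPort(uint32 baud) { /* program divisor from fClock */ }
        virtual int PutChar(char c) { Out(0, (uint32)c); return 0; }
        virtual int GetChar(bool wait) { (void)wait; return (int)In(0); }
    };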
* Enable/Disable makes more sense and matches
platform loader serial functions.
* Rework PL011 code after finding a PDF covering
the details of it.
* Rename UART global defines in loader to be more
exact about location
* This makes things a little more flexible and
the interface to use the uarts cleaner.
* May want to make a generic Uart wrapper
class in uart.h / uart.cpp and call drivers
as needed from there.
* Avoid name collisions
* This uart stuff may work better as a class at
some point, however I didn't want to rock the
u-boot boat *too* much as I don't have the
hardware to test.
* Add nested function wrappers to allow usage of other
uart drivers depending on board. We may want to use this
on other platforms at some point (haha, maybe)
* Make the kernel ARM UART slightly more generic
via BOARD_UART_CLOCK, configured per board
* Add initial Raspberry Pi serial code
* Still rough and non-working
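The BOARD_UART_CLOCK point above boils down to the baud divisor: a
16550/8250-class UART divides its input clock by 16 times the baud rate, so a
hardcoded clock only works on one board. A hedged sketch, with
BOARD_UART_CLOCK standing in for the per-board define and the 24 MHz default
being an arbitrary example:

    #include <cstdint>

    #ifndef BOARD_UART_CLOCK
    #define BOARD_UART_CLOCK 24000000  // example value, defined per board
    #endif

    static inline uint32_t uart_baud_divisor(uint32_t baudRate)
    {
        // Standard 16x oversampling divisor for 16550-class UARTs.
        return BOARD_UART_CLOCK / (16 * baudRate);
    }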
AMD C1E is a BIOS-controlled C3 state. Certain processor families
may cut off the TSC and the lapic timer when in a deep C state,
including C1E, so the CPU can't be woken up and the system hangs.
This patch first adds support for selecting the idle routine during boot.
It then implements the amdc1e_noarat_idle() routine, which checks the MSR
containing the C1eOnCmpHalt (bit 28) and SmiOnCmpHalt (bit 27) flags before
executing the halt instruction, and clears them if set.
Intel's C1E doesn't have this problem. The difference between AMD C1E and
C3 is that the transition into C1E is not initiated by the operating
system; the system enters C1E automatically when both cores enter C1.
Intel's C1E, by contrast, merely means "reduce CPU voltage before entering
the corresponding Cx-state".
This patch may fix #8111, #3999, #7562, #7940 and #8060.
Copied from the description of #3999:
>but for some reason I hit the power button instead of the reset one. And
>the boot continued!!
The reason is that the CPUs are woken up once the power button is hit.
Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
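For illustration, a sketch of what an amdc1e_noarat_idle() routine along
these lines looks like. The MSR number (assumed here to be AMD's interrupt
pending message register, 0xC0010055) and the helper functions are stand-ins,
not the patch's actual code:

    #include <stdint.h>

    #define MSR_AMD_INT_PENDING_CMP_HALT 0xC0010055u
    #define AMD_SMI_ON_CMP_HALT  (1ULL << 27)
    #define AMD_C1E_ON_CMP_HALT  (1ULL << 28)

    static inline uint64_t read_msr(uint32_t msr)
    {
        uint32_t low, high;
        asm volatile("rdmsr" : "=a"(low), "=d"(high) : "c"(msr));
        return ((uint64_t)high << 32) | low;
    }

    static inline void write_msr(uint32_t msr, uint64_t value)
    {
        asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)value),
            "d"((uint32_t)(value >> 32)));
    }

    static void amdc1e_noarat_idle()
    {
        uint64_t msr = read_msr(MSR_AMD_INT_PENDING_CMP_HALT);
        // If the BIOS armed C1E/SMI on compare-halt, clear the bits so the
        // following hlt doesn't drop the CPU into a state that stops the
        // lapic timer and TSC.
        if ((msr & (AMD_SMI_ON_CMP_HALT | AMD_C1E_ON_CMP_HALT)) != 0) {
            write_msr(MSR_AMD_INT_PENDING_CMP_HALT,
                msr & ~(AMD_SMI_ON_CMP_HALT | AMD_C1E_ON_CMP_HALT));
        }
        asm volatile("sti; hlt");  // idle with interrupts enabled
    }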
* Prepend x86_ to non-static x86 code
* Add x86_init_fpu function to kernel header
* Don't init fpu multiple times on smp systems
* Verified fpu is still started on smp and non-smp
* SSE code still generates general protection faults
on smp systems though
* Rename init_sse to init_fpu and handle FPU setup.
* Stop trying to set up FPU before VM init.
We tried to set up the FPU before VM init, then
set it up again after VM init with SSE extensions,
this caused SSE and MMX applications to crash.
* Be more logical in FPU setup by detecting the CPU flag prior
to enabling the FPU. (It's unlikely Haiku will run on
a processor without an FPU... but let's be consistent.)
* SSE2 gcc code now runs (faster even) without GPF
* tqh confirms his previously crashing mmx code now works
* The non-SSE FPU enable after VM init still needs to be tested!
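For reference, a simplified sketch of the order of operations the list above
describes: check the feature bits first, enable the FPU via CR0, and only
enable the SSE extensions (after VM init) via CR4. x86_init_fpu is named in
the notes, but this signature and the helper functions are illustrative; the
CR0/CR4 bit values are the architectural ones.

    #include <stdint.h>

    #define CR0_MP (1UL << 1)   // monitor coprocessor
    #define CR0_EM (1UL << 2)   // emulate FPU -- must be cleared
    #define CR0_NE (1UL << 5)   // native FPU exception handling
    #define CR4_OSFXSR     (1UL << 9)
    #define CR4_OSXMMEXCPT (1UL << 10)

    static inline uintptr_t read_cr0() { uintptr_t v; asm volatile("mov %%cr0, %0" : "=r"(v)); return v; }
    static inline void write_cr0(uintptr_t v) { asm volatile("mov %0, %%cr0" : : "r"(v)); }
    static inline uintptr_t read_cr4() { uintptr_t v; asm volatile("mov %%cr4, %0" : "=r"(v)); return v; }
    static inline void write_cr4(uintptr_t v) { asm volatile("mov %0, %%cr4" : : "r"(v)); }

    void x86_init_fpu(bool hasFPU, bool hasSSE)
    {
        if (!hasFPU)
            return;  // unlikely, but check the CPU flag before enabling

        write_cr0((read_cr0() & ~CR0_EM) | CR0_MP | CR0_NE);
        asm volatile("fninit");

        if (hasSSE) {
            // Done after VM init, once per CPU on SMP systems.
            write_cr4(read_cr4() | CR4_OSFXSR | CR4_OSXMMEXCPT);
        }
    }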
This makes it possible to use the debug features of the guarded heap also
on allocations made through the object cache API. This is obviously
horrible for performance and uses up huge amounts of memory, so the
initial and grow sizes are adjusted accordingly.
Note that this is a rather simple hack, using the object_cache pointer
to transport the allocation size. The alignment is neglected completely.
This adds a pair of functions vm_prepare_kernel_area_debug_protection()
and vm_set_kernel_area_debug_protection() to set a kernel area up for
page-wise protection and to actually protect individual pages
respectively.
It was already possible to read and write protect full areas via area
protection flags and not mapping any actual pages. For areas that
actually have mapped pages this doesn't work however as no fault, at
which the permissions could be checked, is generated on access.
These new functions use the debug helpers of the translation map to mark
individual pages as non-present without unmapping them. This allows them
to be "protected", i.e. causing a fault on read and write access. As they
aren't actually unmapped they can later be marked present again.
Note that these are debug helpers with quite a few restrictions, as
described in the comment above the function, and are only useful for some
very specific and constrained use cases.
They can be used to mark pages as present/non-present without actually
unmapping them. Marking pages as non-present causes every access to
fault. We can use that for debugging as it allows us to "read protect"
individual kernel pages.
* The vm86 code or the code running in virtual 8086 mode may clobber the
%fs register that we use for the CPU dependent thread local storage
(TLS). Previously the vm86 code would simply restore %fs on exit, but
this doesn't always work. If the thread got unscheduled while running
in virtual 8086 mode and was then rescheduled on a different CPU, the
vm86 exit code would restore the %fs register with the TLS value of
the old CPU, causing anything using TLS in userland to crash later on.
Instead we skip the %fs register restore on exit (as do the other
interrupt return functions) and explicitly update the potentially
clobbered %fs by calling x86_set_tls_context(). This will repopulate
the %fs register with the TLS value for the right CPU. Fixes #8068.
* Made the static set_tls_context() into x86_set_tls_context() and made
it available to others to facilitate the above.
* Sync the vm86 specific interrupt code with the changes from hrev23370,
using the iframe pop macro to properly return. Previously what was
pushed in int_bottom wasn't popped on return.
* Account for the time update macro resetting the in_kernel flag and
reset it to 1, as we aren't actually returning to userland. This
didn't cause any harm though as only the time tracking is using that
flag so far.
* Some minor cleanup.
* AVLTreeMap::_GetKey(): Change return type from const Key& to Key, so
the strategy can do that as well and doesn't have to have a Key object in
the node.
* Fix the Auto strategy: It was using the undefined _GetKey() instead
of GetKey().
both:
* Add Previous()/Next().
* Add Insert() version that returns a Node* instead of an Iterator.
* Add Remove() version that takes a Node* instead of a key.
TwoKeyAVLTree:
* Add GetIterator() version that takes an additional Node*, i.e.
initializing an iterator to point to the node.
* Add Iterator::CurrentNode().
This is a tree implementation whose elements have a primary and a secondary
key. The code is a cleaned-up version of ramfs's implementation. ramfs
doesn't use this version yet.
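To make the "primary and secondary key" idea concrete, this is the ordering
such a tree uses: compare primary keys first and fall back to the secondary
key only on a tie. Types and names here are illustrative; the real
TwoKeyAVLTree is templatized over key types and compare strategies.

    #include <cstdint>
    #include <string>

    struct TwoKey {
        uint64_t primary;      // e.g. a parent directory ID
        std::string secondary; // e.g. an entry name
    };

    int compare_two_key(const TwoKey& a, const TwoKey& b)
    {
        if (a.primary != b.primary)
            return a.primary < b.primary ? -1 : 1;
        return a.secondary.compare(b.secondary);
    }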
* Add support function vfs_get_mount_point(), so a file system can get
its own mount point (i.e. the node it covers). Re-added
fs_mount::covers_vnode for that purpose -- the root node isn't known to
the VFS before the mount() hook returns.
* Add function vfs_bind_mount_directory() which bind-mounts a directory
to another. The Vnode::covers/covered_by mechanism is used, so this
isn't true bind-mounting, but sufficient for what we need ATM and
cheaper as well. The vnodes connected thus aren't tracked yet, which
is needed for undoing the connection when unmounting.
* get_vnode_name(): Don't use dir_read() to read the directory. Since we
have already resolved vnode to the covered vnode, we don't want the
dirents to be "fixed" to refer to the covering nodes. Such a vnode
simply wouldn't be found.
* Introduce Vnode flags for covered and covering. Can be used as a quick
check when one doesn't already hold sVnodeLock.
* Rename resolve_mount_point_to_volume_root() to
resolve_vnode_to_covering_vnode().
* Adjust all code that deals with transitions between mount points and
volume root vnodes to generally support covered/covering vnodes.
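Rough sketch of what resolve_vnode_to_covering_vnode() boils down to with the
covers/covered_by links described above; locking, reference counting and the
flag shortcuts are omitted, and the struct layout is assumed, not quoted:

    struct Vnode {
        Vnode* covered_by; // vnode mounted (or bind-mounted) on top of this one
        Vnode* covers;     // vnode this one is mounted over
    };

    // Follow covered_by links so lookups end up at the topmost (covering)
    // vnode, e.g. the bind-mounted directory rather than the one it hides.
    Vnode* resolve_to_covering_vnode(Vnode* vnode)
    {
        while (vnode->covered_by != nullptr)
            vnode = vnode->covered_by;
        return vnode;
    }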
CFE is used in the upcoming Amiga X-1000 dualcore PPC board.
* Largely inspired by the OF and U-Boot code.
* Still largely stubbed out.
* The loader builds but I don't have a machine to test it. Anyone interested?
of the slab code. It is generic as it only contains the link to a tracing entry
and not any application specific info.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43188 a95241bf-73f2-0310-859d-f6bbb57e9c96
While structs looked cleaner at first sight, it wasn't really any simpler.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43140 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Turn VMCache::consumers C list into a DoublyLinkedList.
* Use object caches for the different VMCache types and the VMCacheRefs.
The purpose is to reduce slab area fragmentation.
* Requires the introduction of a pure virtual VMCache::DeleteObject()
method, implemented in the derived classes.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43133 a95241bf-73f2-0310-859d-f6bbb57e9c96
template function object_cache_delete() to be used to delete objects
constructed with it.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43132 a95241bf-73f2-0310-859d-f6bbb57e9c96
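A minimal sketch of what an object_cache_delete()-style helper looks like:
run the destructor, then return the memory to the slab cache. The
object_cache_free() declaration below is a stand-in and may not match the
slab API's exact signature.

    #include <cstdint>

    struct object_cache;
    extern void object_cache_free(object_cache* cache, void* object,
        uint32_t flags);

    template<typename Type>
    inline void
    object_cache_delete(object_cache* cache, Type* object, uint32_t flags = 0)
    {
        if (object == nullptr)
            return;
        object->~Type();                         // destruct in place
        object_cache_free(cache, object, flags); // give the slot back to the cache
    }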
* Introduce TracingMetaData::IsInBuffer() to validate that a certain memory
range is within the valid tracing buffer limits.
* Use that when validating in tracing_is_entry_valid() before trying to access
the entry, resolving a TODO.
* Validate the candidate time against the handed in time (if specified) as an
additional check.
* Tiny unrelated text cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43116 a95241bf-73f2-0310-859d-f6bbb57e9c96
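The kind of range check IsInBuffer() performs can be sketched as follows;
member names are illustrative. The essential parts are that both ends of the
range must fall inside the buffer and that the arithmetic must not be fooled
by overflow.

    #include <cstddef>
    #include <cstdint>

    class TracingMetaData {
    public:
        bool IsInBuffer(const void* address, size_t size) const
        {
            uintptr_t start = (uintptr_t)fBuffer;
            uintptr_t addr = (uintptr_t)address;
            if (size > fBufferSize)
                return false;
            return addr >= start && addr + size <= start + fBufferSize
                && addr + size >= addr; // guard against wrap-around
        }

    private:
        void* fBuffer;
        size_t fBufferSize;
    };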
* Add TraceOutput::PrintArgs(), a va_list version of Print().
* Move code of TraceOutput::Print() to new private template function
print_stack_trace().
* Add public tracing_print_stack_trace().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43085 a95241bf-73f2-0310-859d-f6bbb57e9c96
Add helper macros for placing markers in the source, so we can get the
address ranges of code we're interested in.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43071 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Move struct tracing_stack_trace to tracing.h header.
* Add tracing_find_caller_in_stack_trace(). Helper function to get the
first return address of a stack trace that is not in one of the given
address ranges.
* Add AbstractTracingEntryWithStackTrace::StackTrace() getter.
* Add tracing_is_entry_valid(). Checks, based on the additionally given
time, whether a tracing entry is (probably) still in the tracing
buffer.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43070 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Introduce "paranoid" malloc/free into the slab allocator (initializing
allocated memory to 0xcc and setting freed memory to 0xdeadbeef).
* Allow for optional stack traces for slab object cache tracing.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43046 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Add an AbstractTraceEntryWithStackTrace that includes stack trace handling.
* Add a selector macro/template combo to conveniently select the right base
class depending on whether stack traces are enabled or not.
* Minor style cleanups.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43045 a95241bf-73f2-0310-859d-f6bbb57e9c96
Add a DoublyLinkedList::Contains() method to check if a list contains a certain
element.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43043 a95241bf-73f2-0310-859d-f6bbb57e9c96
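What Contains() amounts to is a linear walk comparing element pointers,
sketched here on a tiny stand-in list rather than the templatized
DoublyLinkedList:

    struct ListElement {
        ListElement* next;
        ListElement* previous;
    };

    bool list_contains(const ListElement* head, const ListElement* element)
    {
        for (const ListElement* current = head; current != nullptr;
                current = current->next) {
            if (current == element)
                return true;
        }
        return false;
    }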
be used to mark certain io interrupt vectors as reserved and to allocate from
the still free ones. It is a kernel private API for now though.
* Make the MSI code use that functionality instead of implementing its own which
slims it down considerably and also removes quite a bit of hardcoded knowledge
about the interrupt layout that didn't really belong there.
* Mark the various in-use interrupts as reserved from the components that
actually know about them (PIC, IO-APIC, SMP, APIC timer and interrupt setup).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42832 a95241bf-73f2-0310-859d-f6bbb57e9c96
to a panic at boot.
* Make the panic message more explicit when there is no more room left.
This should hopefully fix #7869.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42715 a95241bf-73f2-0310-859d-f6bbb57e9c96
directory of a file without traversing leaf links (just like lstat()).
* Minor cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42620 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Rename of_support.h/cpp back to support.cpp as per Axel
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42498 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Rename a few variables to make more sense
* OF_FAILED is a signed int.. fix return of of_address_cells
* OF_FAILED is a signed int.. fix return of of_size_cells
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42497 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Add header file to support of_support.cpp
* Add support functions to obtain address and size cell lengths
* Small style cleanups
* Add support for G5 PowerPC cpus...
* Refactor memory region code to be aware of 64-bit OF addresses.
As-is the boot loader wouldn't start on G5 systems because
OpenFirmware memory base addresses are stored as two 32-bit
unsigned int 'cells' vs one 32-bit unsigned int 'cell' on G3/G4.
I removed the static struct and replaced it with a template
and pass uint32 or uint64 depending on the address cell size.
Thanks for the idea DeadYak!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42486 a95241bf-73f2-0310-859d-f6bbb57e9c96
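The template idea mentioned above, sketched with illustrative names: the same
region bookkeeping works for one 32-bit address cell (G3/G4) and two cells
forming a 64-bit address (G5), selected at the call site based on
of_address_cells():

    #include <cstdint>

    template<typename AddressType> // uint32_t on G3/G4, uint64_t on G5
    struct MemoryRegion {
        AddressType base;
        AddressType size;
    };

    // Two 32-bit OpenFirmware cells make up one 64-bit address on G5.
    static inline uint64_t join_cells(uint32_t high, uint32_t low)
    {
        return ((uint64_t)high << 32) | low;
    }

    // Conceptual usage:
    //   if (of_address_cells(root) == 2)
    //       ... work with MemoryRegion<uint64_t>, building bases via join_cells() ...
    //   else
    //       ... work with MemoryRegion<uint32_t> ...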
* Add BOOT_VOLUME_PACKAGED boot volume message field name constant.
* register_boot_file_system():
- Now takes a BootVolume& parameter.
- If the boot volume is packaged, add that info to the boot volume
message.
* Add pread().
* Add Node::ReadLink() to read a symbolic link path.
* Add Directory::LookupDontTraverse() and make Lookup() non-abstract.
Lookup() is implemented via LookupDontTraverse() and Node::ReadLink().
* Adjust all FS implementations accordingly.
* Add a packagefs implementation. Unlike other FS implementations it
isn't a pseudo-module, but provides a function to explicitly mount a
package file (packagefs_mount_file()).
* Finish BootVolume::SetTo() implementation, mounting the package file
and replacing fSystemDirectory.
Now the boot loader can load the kernel and boot modules from a packaged
system. The kernel boots up to the point where the boot volume is
mounted.
BootVolume is initialized from a root directory of a volume. It finds
the system directory, and -- not implemented yet -- mounts the system
package, if the system is packaged, replacing the system directory with
it. Adjusted several pieces of functionality (main(), the loader functions,
user_menu()) to use BootVolume instead of the root directory.
exit info with some generic status.
* team_create_thread_start(), common_thread_entry(): Initializes the team's
exit info (if that's the main thread) before calling thread_exit(). Fixes
#7686.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42183 a95241bf-73f2-0310-859d-f6bbb57e9c96
partition_module_info::uninitialize().
* Implemented the hook for BFS.
* Implemented KFileSystem::Uninitialize().
Fixes failure to initialize a BFS initialized device with an intel partition
map.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42142 a95241bf-73f2-0310-859d-f6bbb57e9c96
destroy the partitioning system's on-disk structure.
* Adjusted the existing partitioning system implementations accordingly.
Actually implemented the hook for the intel partitioning system.
* Added Uninitialize() method to KDiskSystem and KPartitioningSystem. The latter
implements the method calling the new module hook.
* _user_uninitialize_partition(): Also let the disk system uninitialize the
on-disk structure.
This fixes the failure to initialize a disk device with BFS, when it contains a
valid partition map with at least one partition.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42140 a95241bf-73f2-0310-859d-f6bbb57e9c96
its size.
* Added "Display current boot loader log" item to the "Debug Options" boot
loader menu. It displays what the boot loader has logged so far. Might be
interesting for early boot issues when serial debugging is not possible.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42134 a95241bf-73f2-0310-859d-f6bbb57e9c96
* If we detect ACPI 2.0 or higher, the spec says we should use the XSDT rather
than the RSDT. Attempt to do so, falling back to the RSDT if the former fails
to be mapped/validated.
* Refactored acpi_find_table into a templated version to account for the fact
that the XSDT exports different pointer widths for its links to other tables
than the RSDT.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42133 a95241bf-73f2-0310-859d-f6bbb57e9c96
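Sketch of the templated lookup: walking the RSDT and the XSDT differs only in
the width of the table pointers, so the entry type becomes the template
parameter. The header layout below follows the ACPI specification; mapping
each physical pointer before dereferencing it is elided, and the function
name is illustrative.

    #include <cstdint>
    #include <cstring>

    struct acpi_descriptor_header {
        char     signature[4];
        uint32_t length;
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        uint32_t creator_id;
        uint32_t creator_revision;
    } __attribute__((packed));

    template<typename PointerType> // uint32_t for the RSDT, uint64_t for the XSDT
    acpi_descriptor_header*
    acpi_find_table_generic(const char* signature, acpi_descriptor_header* root)
    {
        int32_t count = (root->length - sizeof(acpi_descriptor_header))
            / sizeof(PointerType);
        PointerType* pointers
            = (PointerType*)((uint8_t*)root + sizeof(acpi_descriptor_header));

        for (int32_t i = 0; i < count; i++) {
            // The real loader maps each physical pointer before using it.
            acpi_descriptor_header* table
                = (acpi_descriptor_header*)(uintptr_t)pointers[i];
            if (table != nullptr
                && strncmp(table->signature, signature, 4) == 0)
                return table;
        }
        return nullptr;
    }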
which don't wait for a character, but return -1 when no character is
available ATM. Implemented correctly for x86 only.
* Changed the semantics of the debugger_module_info::debugger_getchar() hook.
It is supposed to return immediately now.
* Adjusted usb_keyboard accordingly. Hacked UHCI's debug_process_transfer() to
achieve that. It does now start, check, or cancel a transfer. Split
UHCI::ProcessDebugTransfer() into StartDebugTransfer(), and
CheckDebugTransfer() accordingly, and also added a CancelDebugTransfer().
The latter seems to have issues. Michael, please have a look. I have no clue
what I'm doing. :-)
* Adjusted kgetc() to poll all possible inputs using the new
functions/semantics. This allows using any input (USB, PS/2, serial) in KDL.
* Removed the no longer needed "serial_input" command.
* read_line(): Also support 0x7f as backspace code. That's what xterm sends.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42126 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Reorganized the kernel locking related to threads and teams.
* We now discriminate correctly between process and thread signals. Signal
handlers have been moved to teams. Fixes #5679.
* Implemented real-time signal support, including signal queuing, SA_SIGINFO
support, sigqueue(), sigwaitinfo(), sigtimedwait(), waitid(), and the addition
of the real-time signal range. Closes #1935 and #2695.
* Gave SIGBUS a separate signal number. Fixes #6704.
* Implemented <time.h> clock and timer support, and fixed/completed alarm() and
[set]itimer(). Closes #5682.
* Implemented support for thread cancellation. Closes #5686.
* Moved send_signal() from <signal.h> to <OS.h>. Fixes #7554.
* Lots of smaller, more or less related changes.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42116 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added an arch_debug_gdb_get_registers() interface that is supposed to provide
the register values in the format expected by gdb and implemented it for x86.
* Reimplemented gdb_regreply() to use that. Also made it buffer overflow safe.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41880 a95241bf-73f2-0310-859d-f6bbb57e9c96
menu item it's associated with rather than an input string. This allows it
to calculate the position to start the input at, as well as the correct
line to place it on. The previous solution always put the input at the
center line, which happened to be the right place by happy coincidence
unless one also had the menu items for viewing/saving the debug syslog
present.
* Implement input buffer scrolling, and consequently lift the previous size
limit on user input (it is now only limited by the size of the passed in
buffer).
* Implement parsing of the input buffer to allow it to handle comma-separated
options. Thus, one can now input things like "disable_smp true, serial_debug_output false"
and it will be handled properly.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41706 a95241bf-73f2-0310-859d-f6bbb57e9c96
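A self-contained sketch of the comma splitting this describes: each
comma-separated chunk is trimmed and handed off as one "name value" option,
the same syntax used in kernel settings files. Names are illustrative and
this runs in userland for demonstration only.

    #include <iostream>
    #include <sstream>
    #include <string>

    static void apply_option(const std::string& option)
    {
        // The boot loader would append this to the safe-mode settings instead.
        std::cout << "option: '" << option << "'\n";
    }

    static void parse_options(const std::string& input)
    {
        std::istringstream stream(input);
        std::string item;
        while (std::getline(stream, item, ',')) {
            size_t begin = item.find_first_not_of(" \t");
            size_t end = item.find_last_not_of(" \t");
            if (begin == std::string::npos)
                continue; // skip empty chunks
            apply_option(item.substr(begin, end - begin + 1));
        }
    }

    // parse_options("disable_smp true, serial_debug_output false");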
aren't otherwise exposed via the safe mode menus. The option can be
found under the debug options menu, where additional settings can be
added one at a time with the same syntax used in kernel settings files
(i.e. disable_acpi on).
Scrolling of the input buffer is not yet supported (will implement that
soon), so currently the input is clamped to the size of one line. This
shouldn't be a problem for our current set of options though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41577 a95241bf-73f2-0310-859d-f6bbb57e9c96
at the override entry to trigger the overridden vector so that we don't need
to configure any additional redirections.
* Also configures the polarity and trigger modes found in the override entry.
* When disabling the legacy PIC, retrieve the enabled interrupts and re-enable
them in the IO-APIC. This will for example make the ACPI SCI work that is
installed prior to switching interrupt models. Through the transparent support
for interrupt source overrides it'll also automatically relay from the old to
the new vector.
This should make ACPI interrupts work and should support relocating the ISA PIT
from irq 0 to a different global system interrupt (usually 2) so that it can
still work when IO-APICs are in use.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41528 a95241bf-73f2-0310-859d-f6bbb57e9c96
at all and, since there can be multiple IO-APICs, we need to do the
enumeration again in the kernel anyway. Also only set ioapic_phys the first
time we encounter an IO-APIC object as it looks cleaner when we arrive at the
first IO-APIC default address.
* Therefore we don't have to worry about already mapped IO-APICs when
enumerating them in the kernel.
* Also remove the mapping function that is now not used anymore.
* We still use the ioapic_phys field of the kernel args to determine whether
there is an IO-APIC at all to avoid needlessly doing the enumeration again.
This fixes multi IO-APIC configurations, because before we would indeed map
the last IO-APIC listed in the MADT, but then in the kernel assumed we mapped
the first one. We'd end up mapping the last listed IO-APIC twice and the
first IO-APIC never, always programming the last one when we actually targeted
the first one.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41476 a95241bf-73f2-0310-859d-f6bbb57e9c96
mark the ISA interrupts as unusable and then use ioapic_is_interrupt_available
to determine if that vector is possibly taken by an IO-APIC. If IO-APICs are
not used, this will simply always return false, leaving all vectors free for
MSI use.
* The msi_init() now has to be done after a potential IO-APIC init, so it is now
done after ioapic_init() instead of inside apic_init().
* Add apic_disable_local_ints() to clear the local ints on the local APIC once
we are in APIC mode (i.e. the IO-APIC is set up and we don't need the external
routing anymore).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41445 a95241bf-73f2-0310-859d-f6bbb57e9c96
functional change intended.
* Use an appropriately sized sLevelTriggeredInterrupts for each controller type.
This also fixes an out-of-bounds access for IO-APICs with more than 32 entries
and also returns the right mode in such cases.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41426 a95241bf-73f2-0310-859d-f6bbb57e9c96
* increase _SYS_NAMELEN defined in sys/utsname.h to 128 to allow long(ish) revisions
* sHaikuRevision is now a static character array (in both libroot and kernel)
* adjust build tool set_haiku_revision to write the revision as string
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41389 a95241bf-73f2-0310-859d-f6bbb57e9c96
a folder to some other place in the filesystem hierarchy
* add helper function to VFS that encapsulates the "conversion" of a
vnode-pointer to a fs_vnode-pointer (used by bindfs)
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40238 a95241bf-73f2-0310-859d-f6bbb57e9c96
used by tarfs anyway) instead of RLE.
While this should allow larger logos/icons, it doesn't remove the
current 300000 byte size limit for haiku_loader, so #6710 is not yet fixed.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40215 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The team and thread kernel structures have been renamed to Team and Thread
respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
private kernel headers are included that are no longer C compatible.
Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
that's the limit of the addr_t domain anyway.
* Defined IS_USER_ADDRESS() to !IS_KERNEL_ADDRESS(), which semantically it was
already, just more verbosely.
Should, in the future, avoid hundreds of useless Coverity tickets where the
macros are used.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40093 a95241bf-73f2-0310-859d-f6bbb57e9c96
the parameter (CID 5329).
* _MergeWithOnlyConsumer(): Removed the somewhat weird consumerLocked
parameter. The caller can unlock itself, if desired. Improves Unlock()
readability.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40084 a95241bf-73f2-0310-859d-f6bbb57e9c96
* inherit umask of calling process to images loaded via exec...()
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40071 a95241bf-73f2-0310-859d-f6bbb57e9c96
only a few events can be watched (team creation/deletion/exec, thread creation/
deletion/name changes). The functions start_system_watching()/
stop_system_watching() start/stop watching events.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39862 a95241bf-73f2-0310-859d-f6bbb57e9c96
automatically cleaned up when the team is deleted: Class AssociatedData is
the base class for a data item, AssociatedDataOwner a container for them
(struct team derives from it). Functions team_associate_data() and
team_dissociate_data() add/remove data.
* Turned sTeamHash into a BOpenHashTable (necessary since struct team is no
longer a POD).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39860 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implemented missing handling of symbolically linked images and of weak
symbols.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39646 a95241bf-73f2-0310-859d-f6bbb57e9c96
well. This is rather ugly, but it was the quickest way to provide O(1) element
removal. This class could really use some love.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39296 a95241bf-73f2-0310-859d-f6bbb57e9c96
- This is mostly a copy of the x86 32-bit paging method and infrastructure; it was copied for two reasons:
1) It is the most complete VM arch
2) The first ARM PAE patches have landed on alkml, so we will have to deal with it in the future as well,
and this infrastructure has proven to be ready ;)
- No protection features, or dirty/accessed tracking yet
- Lots of #if 0
but....
It boots all the way up to init_modules() now, and then dies because of a lack of (ARM) ELF relocation implementation!
Since at this point the VM can be fully initialised, I'm going to focus on CPU exceptions next, so we can get KDL to trigger
when it happens, and I can actually debug from there ;)
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39206 a95241bf-73f2-0310-859d-f6bbb57e9c96
userland single-stepping is enabled for the thread.
* x86_exit_user_debug_at_kernel_entry(): Always store DR6 and DR7 in the CPU
structure, not only when breakpoints are installed.
* x86_handle_debug_exception(): When encountering a syscall single-step, also
set the THREAD_FLAGS_DEBUG_THREAD thread flag. Otherwise the
B_THREAD_DEBUG_STOP would be ignored.
* x86 interrupt handling, DISABLE_BREAKPOINTS():
- Renamed to STOP_USER_DEBUGGING().
- Now it also calls x86_exit_user_debug_at_kernel_entry() when
THREAD_FLAGS_SINGLE_STEP is set, so that the debug registers are saved.
Fixes #6751.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39201 a95241bf-73f2-0310-859d-f6bbb57e9c96
possible mismatch images info between loader and kernel.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38947 a95241bf-73f2-0310-859d-f6bbb57e9c96
quite hidden bug theme.
This also reduces its RLE compression size, which should fix #6710.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38946 a95241bf-73f2-0310-859d-f6bbb57e9c96
The official release one stays the well-known one, just renamed to show it contains the trademarked images.
Fixed #6183.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38891 a95241bf-73f2-0310-859d-f6bbb57e9c96
This might help with ACPI shutdown issues; if not, this change can be reverted. Not verified, as it works on all my machines even without this.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38585 a95241bf-73f2-0310-859d-f6bbb57e9c96
Add support for both discovery and regular iSCSI sessions. Command and status
sequence numbers do differentiate between session and connection but only
one connection per session is currently supported.
Code is Big Endian for now, so compile it for ppc only.
Based on RFC 3720 ff. Tested against OpenSolaris 2009.06.
Resolves most of ticket #5319.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38536 a95241bf-73f2-0310-859d-f6bbb57e9c96
Pass the desired window size from the socket to the service.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38530 a95241bf-73f2-0310-859d-f6bbb57e9c96
Modelled after UDP, add limited TCP support to the boot net stack. The socket
works by queuing received packets as well as sent packets that have not yet
been ACK'ed. Some known issues are documented, especially there's only limited
congestion control. I.e., we send immediately and in unlimited quantity, thus
its use should be restricted to local networks, and due to a fixed window size
there is potential for our socket being overrun with data packets before they
are read. Some corner cases like wrapping sequence numbers may cause a timeout.
The TCP implementation is based on Andrew S. Tanenbaum's "Computer Networks",
4th ed., as well as lecture notes from Prof. W. Effelsberg, the relevant RFCs
and Wikipedia. The pseudo-random number Galois LFSR used for the sequence
number was suggested by Endre Varga.
Since the code is unlikely to get much smaller, better merge it now so that
subsequent changes get easier to review. No platform actively uses TCP sockets
yet, and the receiving code has been reviewed for endianness issues and should
terminate okay after verifying the checksum if no sockets are open.
Based on a version tested with custom code (#5240) as well as with iSCSI.
Compile-tested boot_loader_openfirmware, pxehaiku-loader with gcc4 and
haiku_loader with gcc2. Closes ticket #5240.
Changes from #5240 proposed patch:
* Various bug fixes related to queuing, some memory leaks fixed.
* Never bump the sequence number when dequeuing a packet. It's done afterwards.
* Don't bump the sequence number again when resending the queue or ACK'ing.
* Aggressively ACK while waiting for packets.
* Don't queue sent ACK-only packets.
* More trace output, esp. for queue inspection.
* Adapted use of TCP header flags to r38434.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38472 a95241bf-73f2-0310-859d-f6bbb57e9c96
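A self-contained example of the 32-bit Galois LFSR technique mentioned for
the initial sequence number. The tap mask here is for illustration only; a
production generator would use a verified maximal-length polynomial (and a
less predictable seed).

    #include <cstdint>

    static uint32_t sLFSRState = 0xACE1u; // any non-zero seed

    static uint32_t lfsr_next()
    {
        // Shift right; if the bit that fell out was 1, XOR in the tap mask.
        uint32_t lsb = sLFSRState & 1u;
        sLFSRState >>= 1;
        if (lsb != 0)
            sLFSRState ^= 0x80200003u; // example polynomial taps
        return sLFSRState;
    }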
Fix style issue, pointed out by Axel.
No functional changes.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38456 a95241bf-73f2-0310-859d-f6bbb57e9c96
Add protocol number and struct for TCP header.
First minuscule part of ticket #5240.
Checked that pxehaiku_loader still compiles, too.
Changes from proposed patch:
* Simplify struct by merging flags into one 8-bit field.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38434 a95241bf-73f2-0310-859d-f6bbb57e9c96
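Roughly what the added header struct looks like, with the control flags
merged into a single 8-bit field as noted; field names are illustrative and
all multi-byte fields are big-endian on the wire. The TCP protocol number is 6.

    #include <cstdint>

    #define IPPROTO_TCP 6

    struct tcp_header {
        uint16_t source_port;
        uint16_t destination_port;
        uint32_t sequence_number;
        uint32_t acknowledge_number;
        uint8_t  data_offset;    // upper 4 bits: header length in 32-bit words
        uint8_t  flags;          // FIN, SYN, RST, PSH, ACK, URG, ... in one byte
        uint16_t window;
        uint16_t checksum;
        uint16_t urgent_pointer;
    } __attribute__((packed));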
* _kern_[sg]et_timezone() now accepts/passes out the timezone name, too
* adjust Time preflet and clockconfig to pass the timezone name into the kernel
when calling _kern_set_timezone()
* adjust implementation of tzset() to fetch the timezone name from the kernel
via _kern_get_timezone() instead of reading 'libroot_timezone_info'
* the Time preflet no longer writes 'libroot_timezone_info'
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38164 a95241bf-73f2-0310-859d-f6bbb57e9c96
* made FAT add-on use get_timezone_offset(), this time correctly adjusted for
the difference in units (minutes/seconds).
This makes the times in our FAT-fs agree with Linux again, at least :-)
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37905 a95241bf-73f2-0310-859d-f6bbb57e9c96
* dropped DaylightSavingTime from real_time_clock code in kernel, as it was
never really being used for what it meant (it was just being referred to by
gettimeofday(), which put a different meaning to it)
* adjusted the syscalls get_timezone() & set_timezone() as well as their callers
accordingly
* got rid of get_rtc_info() and rtc_info struct in kernel, as it was only
being referred to by the FAT add-on and that one (like gettimeofday()) put a
different meaning to tz_minuteswest. Added a comment to FAT's util.c
showing a possible solution, should the hardcoded GMT timezone pose a problem.
* fixed declaration of gettimeofday() to match POSIX base specs, issue 7
* changed implementation of gettimeofday() to not bother trying to fill struct
timezone - it was using wrong values before, anyway.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37888 a95241bf-73f2-0310-859d-f6bbb57e9c96
* renamed syscalls _kern_[gs]et_tzfilename
to _kern_[gs]et_real_time_clock_is_gmt, as the filename part is no longer
relevant (and the two corresponding parameters were removed)
* C++-ified and reworked clockconfig to use the info from 'Time settings'
to setup the timezone info during boot
* removed invocation of _kern_get_tzfilename() from tzset(), as the syscall
no longer exists and tzset() is currently broken anyway
* adjusted the Time preflet to use the renamed syscall when getting/setting
the RTC info
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37881 a95241bf-73f2-0310-859d-f6bbb57e9c96
video_splash.cpp and boot_splash.cpp will be updated to utilize it.
Relates to #6183. See #6255 for issues on using 'splash_logo-development.png'
with generate_boot_screen.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37739 a95241bf-73f2-0310-859d-f6bbb57e9c96
See http://clang.llvm.org/compatibility.html#c++ for why it is needed. Note that HashMap.h's Key and Value are typenames as well.
AFAICT this is correctly done, builds and runs on gcc4. This fixes #5892.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37550 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm_page::Init().
* Made vm_page::wired_count private and added accessor methods.
* Added VMCache::fWiredPagesCount (the number of wired pages the cache
contains) and accessor methods.
* Made more use of vm_page::IsMapped().
* vm_copy_on_write_area(): Added vm_page_reservation* parameter that can be
used to request a special handling for wired pages. If given the wired pages
are replaced by copies and the original pages are moved to the upper cache.
* vm_copy_area():
- We don't need to do any wired ranges handling, if the source area is a
B_SHARED_AREA, since we don't touch the area's mappings in this case.
- We no longer wait for wired ranges of the concerned areas to disappear.
Instead we use the new vm_copy_on_write_area() feature and just let it
copy the wired pages. This fixes #6288, an issue introduced with the use
of user mutexes in libroot: When executing multiple concurrent fork()s all
but the first one would wait on the fork mutex, which (being a user mutex)
would wire a page that the vm_copy_area() of the first fork() would wait
for.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37460 a95241bf-73f2-0310-859d-f6bbb57e9c96
Suggested by Ingo in ticket #6139. Code is adapted from x86.
Note that on ppc64 GPR1 needs to be 64-bit, thus the choice of addr_t.
Resolves part of ticket #6160.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37281 a95241bf-73f2-0310-859d-f6bbb57e9c96
In r33670 the svn:eol-style property was dropped, which took care of
locally converting the line endings to the user's native style.
While most files use Unix-style LF line endings, some files have
Windows-style CR LF line endings.
Assure that the following r37262 directories use Unix-style line endings:
src/system/boot/
src/system/boot/arch/
src/system/boot/arch/ppc/
src/system/boot/loader/
src/system/boot/loader/net/
src/system/boot/platform/
src/system/boot/platform/openfirmware/
src/system/boot/platform/openfirmware/arch/
src/system/boot/platform/openfirmware/arch/ppc/
src/system/kernel/
src/system/kernel/arch/
src/system/kernel/arch/ppc/
src/system/kernel/platform/
src/system/kernel/platform/openfirmware/
headers/private/kernel/
headers/private/kernel/arch/
headers/private/kernel/arch/ppc/
headers/private/kernel/platform/
headers/private/kernel/platform/openfirmware/
headers/private/kernel/boot/
headers/private/kernel/boot/net/
headers/private/kernel/boot/platform/
headers/private/kernel/boot/platform/openfirmware/
This avoids patches containing irrelevant lines unintentionally converted.
No functional changes.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37265 a95241bf-73f2-0310-859d-f6bbb57e9c96
headers and respectively added includes in source files.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37259 a95241bf-73f2-0310-859d-f6bbb57e9c96
ClearAccessedAndModified() implementations into helper methods PageUnmapped()
and UnaccessedPageUnmapped() in the base class.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37187 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm_available_not_needed_memory() version that can be called from within the
kernel debugger.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37167 a95241bf-73f2-0310-859d-f6bbb57e9c96
kernel private.
* Moved dumping code from dump_cache() to new VMCache::Dump().
* Override VMCache::Dump() in VMVnodeCache to also print the vnode.
* Removed no longer needed VMCache::GetLock().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37138 a95241bf-73f2-0310-859d-f6bbb57e9c96
restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
- Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not
taken into account.
- Takes a physical_address_restrictions instead of base/limit and also
supports alignment and boundary restrictions, now.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/
ReserveAddressRange() take a virtual_address_restrictions parameter, now. They
also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Take
{virtual,physical}_address_restrictions parameters, now.
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
- Fixed potential overflows of uint32 when initializing from device node
attributes.
- Fixed bounce buffer creation TODOs: By using create_area_etc() with the
new restrictions parameters we can directly support physical high address,
boundary, and alignment.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
Kernel doesn't use it, and it could be regenerated in the kernel if it did need it.
This also unlocks the apic range the bios can use. Previously the apic ids would have
to fit within 0..MAX_CPUS or it'd reject the cpu. Some boxes (mine in particular)
seem to sparsely populate the apic id so that the range is pretty large.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37108 a95241bf-73f2-0310-859d-f6bbb57e9c96
was used.
* Renamed X86VMTranslationMap to X86VMTranslationMap32Bit and pulled the paging
method agnostic part into new base class X86VMTranslationMap.
* Moved X86PagingStructures into its own header/source pair.
* Moved pgdir_virt from X86PagingStructures to X86PagingStructures32Bit where
it is actually used.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37055 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed i386_context_switch() to x86_context_switch().
* x86_context_switch() no longer sets the page directory.
arch_thread_context_switch() does that explicitly, now. This allows solving
the TODO by reordering releasing the previous paging structures reference and
setting the new page directory.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37024 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed vm_translation_map_arch_info to X86PagingStructures, and all
members and local variables of that type accordingly.
* arch_thread_context_switch(): Added TODO: The still active paging structures
can indeed be deleted before we stop using them.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37022 a95241bf-73f2-0310-859d-f6bbb57e9c96
that are wide enough for both virtual and physical addresses.
* DMABuffer, IORequest, IOScheduler,... and code using them: Use
generic_io_vec and generic_{addr,size}_t where necessary.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36997 a95241bf-73f2-0310-859d-f6bbb57e9c96
address ranges, and a set of support functions working with it.
* Changed the type of the kernel_args physical address range arrays to
phys_addr_range and adjusted the code working with those.
* Removed a bunch of duplicated address range code in the PPC's mmu.cpp.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36947 a95241bf-73f2-0310-859d-f6bbb57e9c96
where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in
vm_page.{h,cpp}.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96
created, and moved the heap's grow and VIP heap initialization to it. Should
fix #5956.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36855 a95241bf-73f2-0310-859d-f6bbb57e9c96
map_backing_store() doesn't commit memory when this flag is given.
* Used the new flag in vm_copy_area(): We no longer commit memory for read-only
areas. This prevents read-only mapped files from suddenly requiring memory
after fork(). Might improve the situation on machines with very little RAM
a bit.
We should probably mark writable copies over-committing, since the usual
case is fork() + exec() where the child normally doesn't need more than a
few pages until calling exec(). That would significantly reduce the memory
requirement for jamming the Haiku tree.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36651 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Separated the other stuff previously done in debug_init_post_vm() to the new
debug_init_post_settings().
* Removed superfluous status_t return codes - they are ignored, anyway, and if
there really is a show stopper in the init process, panicking would be the
thing one should do.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36623 a95241bf-73f2-0310-859d-f6bbb57e9c96
implemented for any architecture yet.
* vm_set_area_memory_type(): Call VMTranslationMap::ProtectArea() to change the
memory type for the already mapped pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36574 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Don't set the VMArea's memory type in arch_vm_set_memory_type(), but let the
callers do that.
* vm_set_area_memory_type(): Does nothing, if the memory type doesn't change.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36573 a95241bf-73f2-0310-859d-f6bbb57e9c96
of consistency.
* Moved the B_OVERCOMMITTING_AREA flag from B_KERNEL_AREA_FLAGS to
B_USER_AREA_FLAGS, since we really allow it to be passed from userland.
* Most VM syscalls check the provided protection against B_USER_AREA_FLAGS
instead of B_USER_PROTECTION, now. This way they allow for
B_OVERCOMMITTING_AREA as well.
* _user_map_file(), _user_set_memory_protection(): Check the protection like
the other syscalls do and use fix_protection() instead of doing that
manually.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36572 a95241bf-73f2-0310-859d-f6bbb57e9c96
boot CPU wait until all other CPUs are ready to wait. This solves a
theoretical problem in main(): The boot CPU could run fully through the early
initialization and reset sCpuRendezvous2 before the other CPUs left
smp_cpu_rendezvous(). It's very unlikely on real hardware that the non-boot
CPUs are so much slower, but it might be a concern in emulation.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36558 a95241bf-73f2-0310-859d-f6bbb57e9c96
scheduler. This avoids the need to use the send_signal_etc() work-around for
resume_thread() during the early kernel initialization. Might fix #5851.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36530 a95241bf-73f2-0310-859d-f6bbb57e9c96
free respective free and cached pages.
* Removed the unused vm_page_allocate_page_run_no_base().
* vm_page_allocate_page_run() (and allocate_page_run()):
- Use vm_page_reserve_pages() instead of vm_page_try_reserve_pages(), i.e.
wait until the reservation succeeds.
- Now it iterates twice through the pages to find a suitable page run. In
the first iteration it only looks for free/clear pages, in the second
iteration it also considers cached pages. This increases the chance of the
function succeeding when a lot of caching is going on.
This reduces the amount of memory required to use the IOCache when booting
off the anyboot Live CD to around 160 MB in qemu. It also seems to work with
128 MB, but the syslog indicates that some memory allocations fail, which
is not exactly inspiring confidence.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36489 a95241bf-73f2-0310-859d-f6bbb57e9c96
swap space when the cache shrinks. Currently the implementation still leaks
swap space of busy pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36373 a95241bf-73f2-0310-859d-f6bbb57e9c96
can safely be used.
* Since using the I/O APIC is disabled by default, I've removed the "return"
that prevented its use when enabled. Let's see if it already does anything.
* Adapted other arch_int.cpp with a bit of cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36290 a95241bf-73f2-0310-859d-f6bbb57e9c96
mapped page.
* debug_{mem,strl}cpy():
- Added "team" parameter for specifying the address space the address are
to be interpreted in.
- When the standard memcpy() (with fault handler) fails, fall back to
vm_debug_copy_page_memory().
* Added debug_is_debugged_team(): Predicate returning true, if the supplied
team_id refers to the same team debug_get_debugged_thread() belongs to.
* Added DebuggedThreadSetter class for scope-based debug_set_debugged_thread().
Made use of it in several debugger functions.
* print_demangled_call() (x86): Fixed unsafe memory access.
Allows KDL stack traces to work correctly again, even if the page daemon has
already unmapped the concerned pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36230 a95241bf-73f2-0310-859d-f6bbb57e9c96
interrupts (MSI).
* Add the remaining IDT entries and redirection functions in the interrupt code.
* Make the PIC end_of_interrupt() return a result to indicate whether the vector
was handled by this PIC. If it isn't, we now issue an apic_end_of_interrupt()
on the assumption of an APIC local interrupt, MSI or IPI. This also removes
the need for the gUsingIOAPIC global and doing manual apic_end_of_interrupt()
calls in the SMP and timer code.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36221 a95241bf-73f2-0310-859d-f6bbb57e9c96
other places where previously the same functionality was duplicated. Also
separated the header which was originally arch_smp.h into apic.h and arch_smp.h
again as some of it is MP and not actually APIC.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36182 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implemented a tiny bit more sophisticated version of
estimate_max_scheduling_latency() that uses a syscall that lets the scheduler
decide.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36170 a95241bf-73f2-0310-859d-f6bbb57e9c96
locks.
* Added syscalls for a new kind of mutex. A mutex consists only of an int32 and
doesn't require any kernel resources. So its initialization cannot fail
(it consists only of setting the mutex value to 0). An uncontended lock or
unlock operation can basically consist of an atomic_*() in userland. The
syscalls (when the mutex is contended) are a bit more expensive than semaphore
operations, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36158 a95241bf-73f2-0310-859d-f6bbb57e9c96
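A userland fast-path sketch of such an int32-only mutex: the uncontended lock
and unlock are a single atomic operation each, and only the contended case
would enter the kernel (stubbed out below). The bit layout and helper names
are illustrative, not the actual syscall interface.

    #include <atomic>
    #include <cstdint>

    static const int32_t kMutexLocked = 1;
    static const int32_t kMutexWaiting = 2;

    struct user_mutex {
        std::atomic<int32_t> value{0}; // "initialization cannot fail"
    };

    static void mutex_lock_contended(user_mutex* m)
    {
        (void)m; // real code would set kMutexWaiting and block in a syscall
    }

    static void mutex_unlock_contended(user_mutex* m)
    {
        (void)m; // real code would ask the kernel to wake one waiter
    }

    void mutex_lock(user_mutex* m)
    {
        int32_t expected = 0;
        if (m->value.compare_exchange_strong(expected, kMutexLocked))
            return; // fast path: no kernel involvement
        mutex_lock_contended(m);
    }

    void mutex_unlock(user_mutex* m)
    {
        if ((m->value.exchange(0) & kMutexWaiting) != 0)
            mutex_unlock_contended(m); // someone is blocked in the kernel
    }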
* Added new thread flag THREAD_FLAGS_ALWAYS_RESTART_SYSCALL. If set, it forces
syscall restart even when a signal handler without SA_RESTART was invoked.
* Fixed sigwait(): If one of the requested signals wasn't already pending, it would
never wake up. Also, the syscall always needs to be restarted, if interrupted
by another signal.
* Renamed a bunch of the POSIX signal function implementations which did return
an error code directly (instead via errno). Added correct POSIX functions
where needed.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36054 a95241bf-73f2-0310-859d-f6bbb57e9c96
return it.
* lock_memory_etc(): On error the VMAreaWiredRange object could be leaked.
* [un]lock_memory_etc(): Call VMArea::Unwire() with the cache locked and
explicitly delete the range object after unlocking the cache to avoid
potential deadlocks.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36035 a95241bf-73f2-0310-859d-f6bbb57e9c96
Since the requirement is that the area's top cache is locked, allocating
memory isn't allowed.
* lock_memory_etc(): Create the VMAreaWiredRange object explicitly before
locking the area's top cache.
Fixes #5680 (deadlocks when using the slab as malloc() backend).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36033 a95241bf-73f2-0310-859d-f6bbb57e9c96
* _user_map_file(), _user_unmap_memory(): Verify that the address (if given) is
page aligned.
* Reworked memory locking (wiring):
- VMArea does now have a list of wired memory ranges and supports waiting for
a range to be removed.
- vm_soft_fault():
- Added "wirePage" parameter that, if given, makes the function wire the
page and return it.
- Added "wiredRange" parameter (for calls from lock_memory_etc()) and made
sure we never unmap wired pages. This could e.g. happen when a page from a
lower cache was read-mapped and a write fault occurred. Now in such a
situation the function waits for the page to be unwired and restarts.
- All functions that manipulate areas in a way that could affect wired ranges
do now either require the caller to make sure there are no wired ranges in
the way or do that themselves. Added a few wait_if_*_is_wired() helper
functions for that purpose.
- lock_memory_etc():
- Does now also work correctly when the range spans more than one area.
- Adds VMAreaWiredRanges to the affected VMAreas and retains an address
space reference (so that the address space won't be deleted as long as a
wired range exists).
- Resolved TODO: The area's caches are now locked when
increment_page_wired_count() is called.
- Resolved TODO: The race condition due to missing locking after looking up
the page mapping is now prevented. We hold the cache locks (in case the
page is already mapped) and the new vm_soft_fault() parameter allows us
to get the page wired.
- unlock_memory_etc(): Changes symmetrical to those in lock_memory_etc() and
resolved all TODOs.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36030 a95241bf-73f2-0310-859d-f6bbb57e9c96
Added fields for temporary storage of the debug registers dr6 and dr7 to the
arch_cpu_info structure. The actual registers are stored at the beginning of
x86_exit_user_debug_at_kernel_entry() and read in
x86_handle_debug_exception().
The problem was that x86_exit_user_debug_at_kernel_entry() itself overwrote
dr7 and, if kernel breakpoints were enabled, dr6 could be overwritten anytime
after. So x86_handle_debug_exception() would find incorrect values in the
registers (definitely in dr7) and thus interpret the detected debug condition
incorrectly. Usually watchpoints were recognized as breakpoints.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35951 a95241bf-73f2-0310-859d-f6bbb57e9c96
arch_debug_registers instead.
* Call arch_debug_save_registers() on all CPUs when entering the kernel
debugger.
* Added debug_get_debug_registers() to return a specified CPU's saved
registers.
* x86:
- Replaced the previous arch_debug_save_registers() implementation. Disabled
getting the registers via the gdb interface for the time being.
- Fixed the "sc", "call", and "calling" commands to also work for threads
running on another CPU.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35907 a95241bf-73f2-0310-859d-f6bbb57e9c96
support file creation.
* Extended open() and open_from() to support O_CREAT to create files.
open_from() has got an optional "permissions" parameter for that purpose.
* Fixed errno. It would crash when being used. Also changed the POSIX functions
to return their error code via errno as expected.
* Added writev().
* FAT file system:
- Added support for reading long file names.
- Added support for creating files (8.3 name only) and writing to them.
- Enabled scanning partitions with it.
* Boot loader menu:
- Enabled the "Reboot" menu item unconditionally.
- Added "Save syslog from previous session" menu item to the debug menu.
Currently saving the syslog to FAT32 volumes is supported.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35882 a95241bf-73f2-0310-859d-f6bbb57e9c96
received, but if the signal was in the thread's signal block mask, it would
not be handled. Added thread::sig_temp_enabled, an additional mask of not
blocked signals, which is set by sigsuspend() and evaluated and reset by
handle_signals(). Fixes #5567.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35836 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The kernel syslog ring buffer is no longer emptied by the syslog sender
thread. Instead we only drop the oldest data from the buffer when we're
writing to it and there's not enough free space in it.
Advantages: We drop old data rather than the most recent data when the buffer
is full. The "syslog" KDL command has more data available now. So the odds
are that kernel syslog messages not written to disk yet are at least still
in the kernel buffer.
* Changed dprintf_no_syslog() semantics: Now it writes to the syslog, but
doesn't notify the syslog sender thread.
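A conceptual sketch of the drop-oldest-on-write policy described above (not
the actual ring_buffer implementation; types and names are simplified):

    #include <stddef.h>
    #include <stdint.h>

    struct SketchRing {
        uint8_t* data;
        size_t capacity;
        size_t readPos;    // index of the oldest byte
        size_t size;       // number of bytes currently stored
    };

    static void
    ring_write_drop_oldest(SketchRing& ring, const uint8_t* bytes, size_t length)
    {
        if (length > ring.capacity) {
            // only the newest ring.capacity bytes can survive anyway
            bytes += length - ring.capacity;
            length = ring.capacity;
        }
        if (ring.size + length > ring.capacity) {
            // drop the oldest data instead of refusing the newest
            size_t drop = ring.size + length - ring.capacity;
            ring.readPos = (ring.readPos + drop) % ring.capacity;
            ring.size -= drop;
        }
        for (size_t i = 0; i < length; i++)
            ring.data[(ring.readPos + ring.size + i) % ring.capacity] = bytes[i];
        ring.size += length;
    }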
boot loader:
* Added the ring_buffer implementation and a dummy user_memcpy().
* bios_x86: Moved the syslog stuff from serial.{cpp,h} to debug.{cpp,h}.
* Moved the debug options from the "Select safe mode options" menu to a new
"Select debug options" menu.
* Added option "Enable debug syslog" to the new menu (ATM available on x86
only). It allocates a 1 MB in-memory buffer for the syslog for this session
in such a way that it can be accessed by the boot loader after a reset.
* Added item "Display syslog from previous session" to the new menu, doing
what its name suggests.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35816 a95241bf-73f2-0310-859d-f6bbb57e9c96
a given flat buffer.
* Added ring_buffer_peek() for random position reading from the ring buffer
without changing its state.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35813 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Made the page table allocation more flexible. Got rid of sMaxVirtualAddress
and added new virtual_end address to the architecture specific kernel args.
* Increased the virtual space we reserve for the kernel to 16 MB. That
should suffice for quite a while. The previous 2 MB were too tight when
building the kernel with debug info.
* mmu_init(): The way we were translating the BIOS' extended memory map to
our physical ranges arrays was broken. Small gaps between usable memory
ranges would be ignored and instead marked allocated. This worked fine for
the boot loader and during the early kernel initialization, but after the
VM has been fully set up it frees all physical ranges that have not been
claimed otherwise. So those ranges could be entered into the free pages
list and would be used later. This could possibly cause all kinds of weird
problems, probably including ACPI issues. Now we add only the actually
usable ranges to our list.
Kernel:
* vm_page_init(): The pages of the ranges between the usable physical memory
ranges are now marked PAGE_STATE_UNUSED, the allocated ranges
PAGE_STATE_WIRED.
* unmap_and_free_physical_pages(): Don't free pages marked as unused.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35726 a95241bf-73f2-0310-859d-f6bbb57e9c96
happen on syscalls or "int" instructions. The debug exception handler sets
the thread debug flags B_THREAD_DEBUG_STOP and
B_THREAD_DEBUG_NOTIFY_SINGLE_STEP (new) and lets the thread continue. Before
leaving the kernel the thread is stopped and a single-step notification is
sent. Fixes #3487.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35620 a95241bf-73f2-0310-859d-f6bbb57e9c96
over ownership of the object. Fixes double free introduced in r35605.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35608 a95241bf-73f2-0310-859d-f6bbb57e9c96
they were never freed unless the cache was destroyed (I just wondered why
my system would bury >1G in the magazines).
* Made the magazine capacity variable per cache, i.e. for larger objects it's
not a good idea to have 64 * CPU count buffers lying around in the worst case.
* Furthermore, create_object_cache_etc()/object_depot_init() now take
arguments for the magazine capacity as well as the maximum number of full
unused magazines.
* By default, you might want to initialize both to zero, as then some hopefully
usable defaults are computed. Otherwise (the only current example is the
vm_page_mapping cache) you can just put in the values you'd want there.
The page mapping cache uses larger values, as its objects are usually
allocated and deleted in larger chunks.
* Beware, though: I couldn't test these changes yet, as QEMU didn't like to
run today. I'll test them on another machine now.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35601 a95241bf-73f2-0310-859d-f6bbb57e9c96
needs to be or'ed to the address specification), "uncached" is assumed.
* Set the memory type for the "BIOS" and "DMA" areas to write-back. Not sure if
that's correct, but that's what was effectively used on my machines before.
* Changed x86_set_mtrrs() and the CPU module hook to also set the default memory
type.
* Rewrote the MTRR computation once more:
- Now we know all used memory ranges, so we are free to extend used ranges
into unused ones in order to simplify them for MTRR setup.
- Leverage the subtractive properties of uncached and write-through ranges:
uncached ranges can be cut out of ranges of any other type, write-through
ranges out of write-back ranges, which simplifies the setup.
- Set the default memory type to write-back, so we don't need MTRRs for the
RAM ranges.
- If a new range intersects with an existing one, we no longer just fail.
Instead we use the strictest requirements implied by the ranges. This fixes
#5383.
Overall the new algorithm should get by with far fewer MTRRs than before
(on my desktop machine 4 are used at maximum, while 8 didn't quite suffice
before). A drawback of the current implementation is that it doesn't deal with
the case of running out of MTRRs at all, which might result in some ranges
having weaker caching/memory ordering properties than requested.
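A minimal sketch of the "strictest requirement wins" rule for intersecting
ranges (the enum and its ordering are illustrative only; the real code knows
more memory types and the exact strictness relations):

    enum memory_type_sketch {
        MEMORY_TYPE_WRITE_BACK = 0,    // weakest requirements
        MEMORY_TYPE_WRITE_THROUGH,
        MEMORY_TYPE_UNCACHED           // strictest requirements
    };

    static memory_type_sketch
    combine_memory_types(memory_type_sketch a, memory_type_sketch b)
    {
        // where two requested ranges overlap, honor the stricter one
        return a > b ? a : b;
    }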
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35515 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added vm_clear_page_mapping_accessed_flags() and
vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality
of vm_test_map_activation(), vm_clear_map_flags(), and
vm_remove_all_page_mappings(), thus saving lots of calls to translation map
methods. The backend is the new method
VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the
other non-free queues slightly:
- Active queue: Contains mapped pages that have been used recently.
- Inactive queue: Contains mapped pages that have not been used recently. Also
contains unmapped temporary pages.
- Modified queue: Contains unmapped modified pages.
- Cached queue: Contains unmapped unmodified pages (LRU sorted).
Unless we're actually low on memory and actively do paging, modified and
cached queues only contain non-temporary pages. Cached pages are considered
quasi free. They still belong to a cache, but since they are unmodified and
unmapped, they can be freed immediately. And this is what
vm_page_[try_]reserve_pages() do now when there are no more actually free
pages at hand. Essentially this means that pages storing cached file data,
unless mmap()ped, no longer are considered used and don't contribute to page
pressure. Paging will not happen as long as there are enough free + cached pages
available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works
the page queues. As long as the free pages situation is harmless, it only
iterates through the active queue and deactivates pages that have not been
used recently. When paging occurs it additionally scans the inactive queue and
frees pages that have not been used recently.
* Changed the page reservation/allocation interface:
vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and
vm_page_allocate_page() now take a vm_page_reservation structure pointer.
The reservation functions initialize the structure -- currently consisting
only of a count member for the number of still reserved pages.
vm_page_allocate_page() decrements the count and vm_page_unreserve_pages()
unreserves the remaining pages (if any). Advantages are that reservation/
unreservation mismatches cannot occur anymore, that vm_page_allocate_page()
can verify that the caller has indeed a reserved page left, and that there's
no unnecessary pressure on the free page pool anymore. The only disadvantage
is that the vm_page_reservation object needs to be passed around a bit.
* Reworked the page reservation implementation:
- Got rid of sSystemReservedPages and sPageDeficit. Instead
sUnreservedFreePages now actually contains the number of free pages that
have not yet been reserved (it cannot become negative anymore) and the new
sUnsatisfiedPageReservations contains the number of pages that are still
needed for reservation.
- Threads waiting for reservations do now add themselves to a waiter queue,
which is ordered by descending priority (VM priority and thread priority).
High priority waiters are served first when pages become available.
Fixes #5328.
* cache_prefetch_vnode(): Would reserve one less page than allocated later, if
the size wasn't page aligned.
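A usage sketch of the reservation interface described above (signatures and
constants are approximated from the description; error handling omitted):

    static void
    allocate_reserved_pages(uint32 count)
    {
        vm_page_reservation reservation;
        vm_page_reserve_pages(&reservation, count, VM_PRIORITY_USER);

        for (uint32 i = 0; i < count; i++) {
            // each allocation decrements reservation.count
            vm_page* page = vm_page_allocate_page(&reservation,
                PAGE_STATE_ACTIVE);
            // ... map/use the page ...
            (void)page;
        }

        // returns whatever is still accounted in reservation.count (here: 0)
        vm_page_unreserve_pages(&reservation);
    }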
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
general "flags" parameter. It encodes the target state of the page -- so
that the page isn't unnecessarily put in the wrong page queue first -- a
flag whether the page should be cleared, and one to indicate whether the
page should be marked busy.
* Added page state PAGE_STATE_CACHED. Not used yet.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35333 a95241bf-73f2-0310-859d-f6bbb57e9c96
flag. The obvious advantage is that one can still see what state a page is in
and even move it between states while being marked busy.
* Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which
in all cases has the same effect. Introduced a vm_page_is_dummy() that can
still check whether a given page is a dummy page.
* vm_page_unreserve_pages(): Before adding to the system reserve make sure
sUnreservedFreePages is non-negative. Otherwise we'd make nonexistent pages
available for allocation. steal_pages() still has the same problem and it
can't be solved that easily.
* map_page(): No longer changes the page state or marks the page unbusy. That's
the caller's responsibility.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
argument. They replace the previous special-purpose allocation functions
(malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It was ignoring the VIP flag and always
allocated on the normal heap.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96
memory and page reservation functions have a new "priority" parameter that
indicates how deep the function may tap into that reserve. The currently
existing priority levels are "user", "system", and "VIP". The idea is that
user programs should never be able to cause a state that gets the kernel into
trouble due to heavy battling for memory. The "VIP" level (not really used
yet) is intended for allocations that are required to free memory eventually
(in the page writer). More levels are conceivable in the future, like "user
real time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter by a "flags"
parameter.
* Added area creation flag CREATE_AREA_PRIORITY_VIP and slab allocator flag
CACHE_PRIORITY_VIP indicating the importance of the request.
* Changed most code to pass the right priorities/flags.
These changes already significantly improve the behavior in low memory
situations. I've tested a bit with 64 MB (virtual) RAM and, while not
particularly fast and responsive, the system remains at least usable under high
memory pressure.
As a side effect the slab allocator can now be used as general memory allocator.
Not done by default yet, though.
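A conceptual sketch of how a priority selects how deeply an allocation may tap
into the reserve (names and thresholds are made up for illustration):

    #include <stddef.h>

    enum reserve_priority_sketch {
        PRIORITY_USER = 0,
        PRIORITY_SYSTEM,
        PRIORITY_VIP
    };

    static bool
    may_tap_reserve(size_t freePages, reserve_priority_sketch priority)
    {
        // user allocations must leave the whole reserve alone, system
        // allocations may use part of it, VIP allocations nearly all of it
        static const size_t kReserveLimit[] = { 1024, 256, 32 };
        return freePages > kReserveLimit[priority];
    }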
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added support for larger raw allocations (up to one large chunk (128 pages))
in the slab areas. For even larger allocations an area is created (haven't
seen that happen yet, though).
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Copy and paste bug: The meta chunks of the to-be-freed area
would be added to the free lists instead of being removed from them. This
would corrupt the lists and also lead to all kinds of misuse of meta chunks.
object caches:
* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object
caches, but the block allocator sets it on all power-of-two sized caches.
* object_cache_reserve_internal(): Detect recursion and don't wait in such a
case. The function could deadlock itself, since
HashedObjectCache::CreateSlab() does allocate memory, thus potentially
reentering.
* object_cache_low_memory():
- I missed some returns when reworking that one in r35254, so the function
might stop early and also leave the cache in maintenance mode, which would
cause it to be ignored by the object cache resizer and the low memory handler
from that point on.
- Since ReturnSlab() potentially unlocks, the conditions weren't quite correct
and too many slabs could be freed.
- Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() does potentially
unlock the cache, the situation might have changed and there might not be an
empty slab available, but a partial one. The function would crash.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed debugger command "cache_info" to "slab_cache" to avoid confusion with
the VMCache commands.
* ObjectCache::usage was no longer maintained after I introduced the
MemoryManager. object_cache_get_usage() would thus always return 0 and the
block cache would not be considered cached memory. This was only of
informational relevance, though.
slab allocator misc.:
* Disable the object depots of block allocator caches for object sizes > 2 KB.
Allocations of those sizes aren't so common that the object depots yield any
benefit.
* The slab allocator is now fully self-sufficient. It allocates its bootstrap
memory from the MemoryManager, and the hash tables for HashedObjectCaches use
the block allocator instead of the heap, now.
* Added option to use the slab allocator for malloc() and friends
(USE_SLAB_ALLOCATOR_FOR_MALLOC). Currently disabled. Works in principle and
has virtually no lock contention. Handling for low memory situations is still
missing, though.
* Improved the output of some debugger commands.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added optional parameter "void** oldTable" to Resize(). If given, the old
allocation for the table is returned instead of being freed.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35278 a95241bf-73f2-0310-859d-f6bbb57e9c96
CACHE_DONT_LOCK_KERNEL_SPACE. If the former is given, the slab memory manager
does not wait when reserving memory or pages. The latter prevents area
operations. The new flags add a bit of flexibility. E.g. when allocating page
mapping objects for userland areas CACHE_DONT_WAIT_FOR_MEMORY is sufficient,
i.e. the allocation will succeed as long as pages are available.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35246 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implemented a more elaborate raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks which form the basic units of allocation. We
have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But also in pure freeing functions it
might eventually become useful to have those flags, since they could end up
deleting an area, which might not be allowable in all situations. We should
introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache a to-be-freed allocation belongs
to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.
other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the
semantics is the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.
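A sketch of the chunk size class idea (the small/medium values are invented;
only the 128 page large chunk is taken from the description above; B_PAGE_SIZE
is the 4 KB kernel page size):

    static size_t
    chunk_size_for(size_t allocationSize)
    {
        const size_t kSmallChunk = B_PAGE_SIZE;          // e.g. 1 page
        const size_t kMediumChunk = 16 * B_PAGE_SIZE;    // e.g. 16 pages
        const size_t kLargeChunk = 128 * B_PAGE_SIZE;    // one large chunk

        if (allocationSize <= kSmallChunk)
            return kSmallChunk;
        if (allocationSize <= kMediumChunk)
            return kMediumChunk;
        if (allocationSize <= kLargeChunk)
            return kLargeChunk;
        return 0;    // larger than a large chunk: a dedicated area is created
    }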
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
one for each per CPU store):
* The depot is now protected by an R/W lock combined with a spinlock. It is
required to hold either the read lock + spinlock or just the write lock.
* When accessing the per-CPU stores we only need to acquire the read lock
and disable interrupts. When switching magazines with the depot we
additionally get the spinlock.
* When allocating a new magazine we unlock completely.
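A simplified sketch of this locking protocol (member names and the magazine
handling are elided; the locking primitives are the kernel's):

    static void*
    depot_obtain_object(rw_lock& depotLock, spinlock& depotSpinlock)
    {
        rw_lock_read_lock(&depotLock);
        cpu_status state = disable_interrupts();

        void* object = NULL;    // ... try the current CPU's loaded magazine ...

        if (object == NULL) {
            // exchanging magazines with the depot needs the spinlock, too
            acquire_spinlock(&depotSpinlock);
            // ... swap an empty magazine for a full one ...
            release_spinlock(&depotSpinlock);
        }

        restore_interrupts(state);
        rw_lock_read_unlock(&depotLock);
        return object;
    }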
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35200 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The threads besides the main thread are killed earlier now (in the new
team_shutdown_team()), before removing the team from the team hash and from
its process group. This fixes #5296.
* Use a condition variable instead of a semaphore to wait for the non-main
threads to die. We notify the condition right after a thread has left the
team. The semaphore was released by the undertaker.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35196 a95241bf-73f2-0310-859d-f6bbb57e9c96
things a bit.
* Some style cleanup.
* The object depot does now have a cookie that will be passed to the return
hook.
* Fixed object_cache_return_object_wrapper() using the new cookie.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35174 a95241bf-73f2-0310-859d-f6bbb57e9c96
VMCacheRef object which points to the cache. This allows optimizing
VMCache::MoveAllPages(), since it no longer needs to iterate over all pages
to adjust their cache pointer. It can simply swap the cache refs of the two
caches instead.
Reduces the total -j8 Haiku image build time only marginally. The kernel time
drops almost 10%, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35155 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added "bool consumerLocked" parameter to VMCache::Unlock() and
ReleaseRefAndUnlock(). Since Unlock() may cause the cache to be merged with
a consumer cache, the flag is needed to prevent a deadlock in case the
caller still holds a lock to the consumer. Hasn't been a problem yet, since
that situation never occurred.
* VMCacheChainLocker: Reversed unlocking order to bottom-up. The other
direction could cause a deadlock in case caches would be merged, since the
locking order would be reversed. The way VMCacheChainLocker was used this
didn't happen, though.
* fault_get_page(): While copying a page from a lower cache to the top cache,
we do now unlock all caches but the top one, so we don't unnecessarily
kill concurrency.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35153 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Reorganized the code for [un]mapping pages:
- Added new VMTranslationMap::Unmap{Area,Page[s]}() which essentially do what
vm_unmap_page[s]() did before, just in the architecture specific code, which
allows for specific optimizations. UnmapArea() is for the special case that
the complete area is unmapped. Particularly in case the address space is
deleted, some work can be saved. Several TODOs could be slain.
- Since they are only used within vm.cpp, vm_map_page() and vm_unmap_page[s]()
are now static and have lost their prefix (and the "preserveModified"
parameter).
* Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers
for Protect().
* X86VMTranslationMap::Protect(): Make sure not to accidentally clear the
accessed/dirty flags.
* X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually
work. It was only skipping to the next page.
* Adjusted the PPC code to at least compile.
No measurable effect for the -j8 Haiku image build time, though the kernel time
drops minimally.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Pulled the physical page mapping functions out of vm_translation_map into
a new interface VMPhysicalPageMapper.
* Renamed vm_translation_map to VMTranslationMap and made it a proper C++
class. The functions in the operations vector have become methods.
* Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper
as far as possible (without actually writing new code).
* Adjusted the x86 and the PPC specifics accordingly (untested for the
latter). For the other architectures the build is, I'm afraid, seriously
broken.
The next steps will modify and extend the VMTranslationMap interface, so that
it will be possible to fix the bugs in vm_unmap_page[s]() and employ
architecture specific optimizations.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
* ioapic_init(): map_physical_memory() was called for already mapped
addresses. This worked fine, but only because the x86 page mapping code
didn't mind.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35059 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Use atomic_{and,or}() instead of atomic_set(), as there are no built-ins
for the latter.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35021 a95241bf-73f2-0310-859d-f6bbb57e9c96
This makes appending the pages to the active queue more efficient and we
don't need the vm_page::is_cleared bit anymore.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35011 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added Debug{First,Next}() methods to allow easy iteration through the
address spaces in kernel debugger commands.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34978 a95241bf-73f2-0310-859d-f6bbb57e9c96
device path + child partition name. When a "raw" device is unpublished, the node
removal notification triggers the partition and child partitions to be
unpublished/removed. Since in that case the "raw" node is already unpublished,
trying to resolve it in devfs_unpublish_partition() again to unpublish the child
partitions would fail, leaving the child partition nodes behind. When a new raw
device would then become available publishing its partitions would fail because
of these left-behind nodes, causing bug #4587. Seeing that this code is more
compact and straightforward anyway, I don't quite see why it was changed in the
first place.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34967 a95241bf-73f2-0310-859d-f6bbb57e9c96
table. It is now inline and uses double-checked locking. This reduces the
contention on the lock to an insignificant level. Total -j8 Haiku image build speedup
is marginal, but the total kernel time drops 12%.
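A generic double-checked locking sketch (using standard C++ primitives rather
than the kernel's, just to show the pattern: the fast path is a plain read,
the lock is only taken when the entry actually has to be created):

    #include <atomic>
    #include <mutex>

    struct Entry { /* ... */ };

    static std::atomic<Entry*> sEntry(nullptr);
    static std::mutex sEntryLock;

    static Entry*
    get_or_create_entry()
    {
        // fast path: no lock, just an acquire load
        Entry* entry = sEntry.load(std::memory_order_acquire);
        if (entry != nullptr)
            return entry;

        // slow path: lock and check again -- another thread may have been
        // faster than us
        std::lock_guard<std::mutex> locker(sEntryLock);
        entry = sEntry.load(std::memory_order_relaxed);
        if (entry == nullptr) {
            entry = new Entry;
            sEntry.store(entry, std::memory_order_release);
        }
        return entry;
    }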
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34934 a95241bf-73f2-0310-859d-f6bbb57e9c96
access to a vm_page. It is basically an atomically accessed thread ID field
in the vm_page structure, which is explicitly set by macros marking the
critical sections. As a first positive effect I had to review quite a bit of
code and found several issues.
* Added several TODOs and comments. Some harmless ones, but also a few
troublesome ones in vm.cpp regarding page unmapping.
* file_cache: PrecacheIO::Prepare()/read_into_cache: Removed superfluous
vm_page_allocate_page() return value checks. It cannot fail anymore.
* Removed the heavily contended "pages" lock. We use different policies now:
- sModifiedTemporaryPages is accessed atomically.
- sPageDeficitLock and sFreePageCondition are protected by a new mutex.
- The page queues have individual locks (mutexes).
- Renamed set_page_state_nolock() to set_page_state(). Unless the caller says
otherwise, it does now lock the affected page queues itself. Also changed
the return value to void -- we panic() anyway.
* set_page_state(): Add free/clear pages to the beginning of their respective
queues as this is more cache-friendly.
* Pages with the states PAGE_STATE_WIRED or PAGE_STATE_UNUSED are no longer
in any queue. They were in the "active" queue, but there's no good reason
to have them there. In case we decide to let the page daemon work the queues
(like FreeBSD) they would just be in the way.
* Pulled the common part of vm_page_allocate_page_run[_no_base]() into a helper
function. Also fixed a bug I introduced previously: The functions must not
call vm_page_unreserve_pages() on success, since they remove the pages from the
free/clear queue without decrementing sUnreservedFreePages.
* vm_page_set_state(): Changed return type to void. The function cannot really
fail and no-one was checking it anyway.
* vm_page_free(), vm_page_set_state(): Added assertion: The page must not be
free/clear before. This is implied by the policy that no-one is allowed to
access free/clear pages without holding the respective queue's lock, which is
not the case at this point. This found the bug fixed in r34912.
* vm_page_requeue(): Added general assertions. panic() when requeuing of
free/clear pages is requested. Same reason as above.
* vm_clone_area(), B_FULL_LOCK case: Don't map busy pages. The implementation is
still not correct, though.
My usual -j8 Haiku build test runs another 10% faster, now. The total kernel
time drops about 18%. As hoped the new locks have only a fraction of the old
"pages" lock contention. Other locks lead the "most wanted list" now.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34933 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed page_queue to VMPageQueue and made it a proper C++ class. Use
DoublyLinkedList instead of own list code.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34874 a95241bf-73f2-0310-859d-f6bbb57e9c96
have one anymore anyway.
* Removed the unnecessary setting of the list links to NULL after removing a
node.
* Replaced the "element == NULL" check in Insert() with an assert; the check
just hid potential errors.
* Added Insert{Before,After}() methods and declared the Insert() version
with the InsertBefore() semantics obsolete.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34873 a95241bf-73f2-0310-859d-f6bbb57e9c96
sure that the kernel's frame buffer console points to the right data.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34835 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Changed the rw_lock_{read,write}_unlock() return values to void. They
returned a value != B_OK only in case of user error and no-one checked them
anyway.
* Optimized rw_lock_read_[un]lock(). They are inline now and as long as
there's no contending write locker, they will only perform an atomic_add().
* Changed the semantics of nested locking after acquiring a write lock: Read
and write locks are counted separately, so read locks no longer implicitly
become write locks. This does e.g. make degrading a write lock to a read
lock by way of read_lock + write_unlock (as used in the VM) actually work.
These changes speed up the -j8 Haiku image build on my machine by a few
percent, but more interestingly they reduce the total kernel time by 25%.
Apparently we get more contention on other locks, now.
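A sketch of the read lock fast path idea (heavily simplified: the real
implementation also handles nested locking and the waiter queue; the bias
value is made up):

    #include <atomic>
    #include <stdint.h>

    struct rw_lock_sketch {
        std::atomic<int32_t> count{0};    // readers, plus a writer bias
    };

    static const int32_t kWriterBias = 0x10000;

    static void
    read_lock_slow(rw_lock_sketch& lock)
    {
        // block until the writer is gone (elided)
        (void)lock;
    }

    static inline void
    read_lock_fast(rw_lock_sketch& lock)
    {
        // uncontended case: nothing but a single atomic add
        int32_t old = lock.count.fetch_add(1, std::memory_order_acquire);
        if (old >= kWriterBias)
            read_lock_slow(lock);    // a writer holds or waits for the lock
    }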
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34830 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added VMCache::MovePage() and MoveAllPages() to move pages between caches.
* VMAnonymousCache:
- _MergeSwapPages(): Avoid doing anything if neither cache has swapped out
pages.
- _MergeSwapPages() does now also remove source cache pages that are
shadowed by consumer swap pages. This allows us to call _MergeSwapPages()
before _MergePagesSmallerSource(), save the swap page shadowing check
there and get rid of the vm_page::merge_swap flag. This is an
optimization based on the assumption that usually none or only few pages
are swapped out, so we save a lot of checks.
- Implemented _MergePagesSmallerConsumer() as an alternative to
_MergePagesSmallerSource(). The former is used when the source cache has
more pages than the consumer cache. It iterates over the consumer cache's
pages, moves them to the source and finally moves all pages back to the
consumer. The final move is relatively cheap (though unfortunately we
still have to update all pages' vm_page::cache field), so that overall we
save iterations of the main loop with the more expensive checks.
The optimizations particularly improve the common fork()+exec*() situations.
fork() uses CoW, which is implemented by putting two new empty caches between
the to be copied area and its cache. exec*() destroys one copy of the area,
its cache and thus causes merging of the other new cache with the old cache.
Since this usually happens in a very short time, the old cache does still
contain many pages and the new cache only few. Previously the many pages were
all checked and moved individually. Now we do that for the few pages instead.
A very extreme example of this situation is the Haiku image build. jam has a
huge heap (> 200 MB) and it fork()s+exec*()s for every action to be executed.
Since during the cache merging the cache is locked, any write access to a
heap page causes jam to block until the cache merging is done. Formerly that
took so long that it killed a lot of parallelism in multi-job builds. That
could be observed particularly well when lots of small actions were executed
(like the Link, XRes, Mimeset, SetType, SetVersion combos when building
executables/libraries/add-ons). Those look dramatically better now.
The overall speed improvement for a -j8 image build on my machine is only
about 15%, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34784 a95241bf-73f2-0310-859d-f6bbb57e9c96
another. The code originates from vm_copy_on_write_area(). We now generate
the VM cache tracing entries, though.
* count_writable_areas() -> VMCache::CountWritableAreas()
* Added debugger command "cache_stack" which is enabled when VM cache tracing
is enabled. It prints the source caches of a given cache or area at the
time of a specified tracing entry.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34751 a95241bf-73f2-0310-859d-f6bbb57e9c96