The return value of Inode::WaitForRequest() is status_t, not bool, so the
method would always appear to fail when it actually succeeded. This affected
reads from pipes that didn't have any data yet. The bug was hidden since the
VFS code mostly checks error codes only against < B_OK, so such a read would
be treated as a 0 byte read.
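As an illustration of the status_t/bool pitfall (a sketch with made-up names,
not the actual fifo code):

    // B_OK is 0, so converting a status_t to bool turns success into "false":
    bool ok = inode->WaitForRequest(request);   // succeeded -> B_OK -> ok == false
    if (!ok)
        return 0;                               // the caller then sees a 0 byte read

    // correct: keep the status_t and compare against B_OK explicitly
    status_t status = inode->WaitForRequest(request);
    if (status != B_OK)
        return status;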
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23976 a95241bf-73f2-0310-859d-f6bbb57e9c96
total count of allocations and bytes.
* Also add a few more bin sizes (for 8, 24 and 48 bytes); it turns out that
especially allocations of 20-24 bytes are pretty common. And as this only
wastes a few bytes per page, it doesn't hurt at all.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23961 a95241bf-73f2-0310-859d-f6bbb57e9c96
heap leak check info would otherwise be overwritten for allocations that still
fit the 16 byte bin (i.e. allocations of 0-4 bytes).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23956 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Tracing of allocations, reallocations and frees
* Leak checking infrastructure to dump allocations
The leak checking code records the team and thread id when an allocation is
made, as well as the originally requested size. It also adds the
"allocations" debugger command that can dump all current allocations (usually
a huge list) or filter by either a team or a thread id. This way it's easy
to find leftover allocations of no longer active teams/threads.
Combined with the tracing support one might be able to track down the time and
reason for an allocation and possibly find the corresponding leak, if it is one.
Note that kernel heap leak checking has to be enabled manually by setting the
KERNEL_HEAP_LEAK_CHECK define to 1.
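Conceptually, the bookkeeping stored for each allocation looks like this (a
sketch; the exact layout in the heap code may differ):

    #if KERNEL_HEAP_LEAK_CHECK
    // stored with every allocation so "allocations" can attribute it later
    struct heap_leak_check_info {
        size_t      size;       // originally requested size
        thread_id   thread;     // thread that made the allocation
        team_id     team;       // team that made the allocation
    };
    #endif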
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23953 a95241bf-73f2-0310-859d-f6bbb57e9c96
and pages are now kept in lists as well. This allows returning free pages once
a bin does not need them anymore. Partially filled pages are kept in a sorted
linked list so that allocation will always happen on the fullest page - this
favours having full pages and makes it more likely that lightly used pages get
completely empty so they can be returned. Generally this now goes more in the
direction of a slab allocator.
The allocation logic has been extracted, so a heap is now simply attachable to
a region of memory. This allows for multiple heaps and for dynamic growing. In
case the allocator runs out of free pages, an asynchronous growing thread is
notified to create a new area and attach a new heap to it.
By default the kernel heap is now set to 16MB and grows by 8MB each time all
heaps run full.
This should solve quite a few issues, like certain bins just claiming all pages
so that even if there is free space nothing can be allocated. It also obviously
does away with filling the heap page by page until it overflows.
I think this is now a well performing and scalable allocator we can live with
for quite some time. It is well tested under emulation and on real hardware and
performs as expected. If problems come up, there is an extensive sanity checker
(enabled by PARANOID_VALIDATION) that covers most aspects of the allocator.
It is not necessary for normal operation though and is therefore disabled by
default.
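To illustrate the "fullest page first" policy: keeping the partial pages
sorted by their free count makes the choice trivial, roughly like this (a
sketch, not the actual heap code; heap_page is reduced to the two fields
needed here):

    struct heap_page {
        heap_page*  next;
        uint16      free_count;     // free element slots left on this page
        // ...
    };

    // partial pages are kept sorted by ascending free_count, so the head of
    // the list is always the fullest page and is used for the next allocation
    static void
    insert_partial_page(heap_page** list, heap_page* page)
    {
        while (*list != NULL && (*list)->free_count < page->free_count)
            list = &(*list)->next;

        page->next = *list;
        *list = page;
    }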
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23939 a95241bf-73f2-0310-859d-f6bbb57e9c96
not use a single static variable to synchronize CPUs at two points. In an
environment where CPUs do not really run concurrently (in emulation or with
logical processors) it would be possible for CPUs to get trapped in the first
synchronization while another CPU might just do its thing and change the
sync variable again. These CPUs would then never leave the first loop, as the
exit condition had already passed again. The key is to use two different sync
variables, as is done in early kernel initialization. As I didn't manage to
trigger this code though, I am not sure whether this is going to work.
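The two-variable pattern boils down to something like this (a sketch with
made-up names, not the actual SMP init code):

    static vint32 sSyncPhase1 = 0;
    static vint32 sSyncPhase2 = 0;

    static void
    sync_all_cpus(int32 cpuCount)
    {
        // each rendezvous point has its own counter, so a CPU that already
        // moved on to the second barrier can no longer invalidate the exit
        // condition of CPUs still spinning in the first loop
        atomic_add(&sSyncPhase1, 1);
        while (sSyncPhase1 < cpuCount)
            ;

        atomic_add(&sSyncPhase2, 1);
        while (sSyncPhase2 < cpuCount)
            ;
    }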
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23926 a95241bf-73f2-0310-859d-f6bbb57e9c96
using the int 99 syscall method. Otherwise it would remain set e.g.
after _kern_restore_signal_frame() and the next syscall would look like
one returning a 64 bit value.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23918 a95241bf-73f2-0310-859d-f6bbb57e9c96
function has the old behavior. When false, it just calls the scheduler
without any priority adjustment or other stuff.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23906 a95241bf-73f2-0310-859d-f6bbb57e9c96
Not sure if this is the right place, Ingo might want to review that one.
* This fixes unmounting sessions of a multi-session CD, i.e. the BeOS CD (it currently panics
when trying to access a device that's not there anymore - for debugging only, of course :-)
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23894 a95241bf-73f2-0310-859d-f6bbb57e9c96
already knows this driver.
* This should also allow a driver in home/config/add-ons/... to overlay
a driver with the same name in system/add-ons/...
* This should also fix bug #1750.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23892 a95241bf-73f2-0310-859d-f6bbb57e9c96
This is not safe when already freed memory is overwritten. But since we also
store the next pointer of the freelist in there, overwriting would break the
freelist and cause a crash in that case. This gives a drastic performance
boost when freelists grow during use and especially when opening and closing
a lot of programs.
* Optimize filling the freed element with 0xdeadbeef by writing 4 bytes at a
time instead of using single byte writes. This works as all our bins have an
element size that is a multiple of four. A panic was put in there just in case
this assumption isn't met for some reason.
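The word-wise fill is essentially the following (a sketch; variable names are
illustrative):

    // all bin element sizes are a multiple of four, so 32 bit stores are
    // safe; panic in case that assumption is ever violated
    if ((elementSize % 4) != 0)
        panic("heap: bin element size is not a multiple of four");

    uint32* dead = (uint32*)element;
    for (size_t i = 0; i < elementSize / sizeof(uint32); i++)
        dead[i] = 0xdeadbeef;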
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23879 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Stumbled upon a possible bug while trying to understand the reuse of large
allocations. The "first" variable was always set to the current index at the
end of the loop, even if it was already set. This should have caused the
success condition to never be reached.
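The pattern in question, sketched (illustrative only, not the actual code;
the fix is to remember the start of a free run only once):

    // find "neededPages" consecutive free pages; the bug was that "first"
    // was overwritten with the current index on every iteration, so the run
    // length below could never grow beyond a single page
    int32 first = -1;
    for (uint32 i = 0; i < pageCount; i++) {
        if (pages[i].in_use) {
            first = -1;
            continue;
        }

        if (first < 0)
            first = i;  // only remember the start of a free run once

        if (i - (uint32)first + 1 == neededPages) {
            // found a run of neededPages free pages starting at "first"
            break;
        }
    }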
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23866 a95241bf-73f2-0310-859d-f6bbb57e9c96
TraceOutput for output options instead.
* Added "traced" option --difftime. Instead of the absolute system time
it prints the difference time to the previously printed entry.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23864 a95241bf-73f2-0310-859d-f6bbb57e9c96
When specified it designates that the interrupt handler should not lock the
vector with a spinlock when executing the installed interrupt handlers. This
is necessary to allow the same interrupt vector to be handled in parallel on
different CPUs. And it is required for the CPU halt to work synchronously when
there is more than one AP CPU. Though the acquire_spinlock() should cause IPIs
to be processed, only this fixed the SMP_MSG_FLAG_SYNC problem for me.
Not locking is safe as long as it is guaranteed that no interrupt handler is
registered or removed while the interrupt handler is running. We can guarantee
this for the SMP interrupt handlers we install in arch_smp_init() as they are
never uninstalled. Probably this flag should be made private though.
Restored the SMP_MSG_FLAG_SYNC when entering the kernel debugger.
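On the dispatch side the change amounts to conditional locking around the
handler loop, roughly (a sketch; the field and helper names are made up):

    if (!vector->no_lock)
        acquire_spinlock(&vector->vector_lock);

    invoke_io_handlers(vector);
        // without the lock the handlers of this vector may now run on
        // several CPUs in parallel

    if (!vector->no_lock)
        release_spinlock(&vector->vector_lock);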
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23838 a95241bf-73f2-0310-859d-f6bbb57e9c96
(must also compare to BSD; I've looked at their sources, but I might have
missed something).
* Added sys/file.h and the flock() system call.
* common_fcntl() could forget to put back the file descriptor on some error
conditions (I guess we should introduce and use a DescriptorGetter class).
* Cleaned up fcntl.h, moved the BSD extensions S_IREAD and S_IWRITE to
sys/stat.h where they belong, and added the missing S_IEXEC to them.
* Added some more comments.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23836 a95241bf-73f2-0310-859d-f6bbb57e9c96
file position in case an offset was specified.
* Reverted r23828-r23830 in File.cpp: don't fix the symptoms but the cause
of the problem (hey, that has to be in the kernel, right? :))
* Cleanup of File.cpp, removed OpenBeOS namespace.
* Moved user_fd_kernel_ioctl() to the section where it belongs (that
function should be renamed, though).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23832 a95241bf-73f2-0310-859d-f6bbb57e9c96
to fix bug #1731.
* However, it turns out that depot destruction obviously doesn't work
correctly; at least we keep partial or full slabs around when we're
still using them (which causes the code to panic).
* Therefore, I've now disabled depots completely, until I find the time
to really work on that code.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23825 a95241bf-73f2-0310-859d-f6bbb57e9c96
* While this is not a really good idea for a lock with supposedly little
contention, it'll fix bug #1731. I haven't tested it yet, but will
do so in a minute :-)
* I will need to rework the slab anyway so that it's possible to use it
as a replacement for our heap, and then I'll switch back to a benaphore
again.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23822 a95241bf-73f2-0310-859d-f6bbb57e9c96
_kern_ktrace_output() syscall, since it will be stored in a separate
entry anyway.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23807 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The kernel now opens up to 8 debugger modules (and puts them into an array;
maybe we'll want to switch to a doubly linked list if the need arises).
* Implemented an example debugger module that prints a stack trace of the
current thread when the kernel debugger is entered (not included in the
image).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23794 a95241bf-73f2-0310-859d-f6bbb57e9c96
was a race condition when more than one CPU would enter the debugger at the
same time (or rather before one CPU could stop all the others). We now use the
inDebugger variable to tell if someone is already in the debugger and then
only process inter CPU messages and retry entering the debugger.
Since sending the synchronous broadcast hung most of the time over here with
SMP enabled, I removed the synchronous flag and added a simple spin to give the
other CPUs a chance to process the halt request. Added comments that explain
the reasons and a ToDo to revert to synchronous delivery once we fixed the
problem. The kernel debugger is now usable on my quad core.
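The retry loop at debugger entry roughly looks like this (a sketch; the actual
handling of inDebugger and the helper name differ in the real code):

    // only one CPU may own the debugger; the others back off, handle
    // pending inter CPU messages (such as the halt request) and retry
    while (atomic_add(&sInDebugger, 1) > 0) {
        atomic_add(&sInDebugger, -1);
        handle_pending_ici_messages();  // made-up helper name
    }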
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23751 a95241bf-73f2-0310-859d-f6bbb57e9c96
systems it can easily happen that the thread gets removed from the queue (when
it times out for example) during the time we don't hold the sem lock.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23749 a95241bf-73f2-0310-859d-f6bbb57e9c96
at the end of that loop we guaranteed a crash when this special handling was
triggered.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23748 a95241bf-73f2-0310-859d-f6bbb57e9c96
"teams".
* "team" without arg prints info about the current team.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23746 a95241bf-73f2-0310-859d-f6bbb57e9c96
path. Makes the "teams" output prettier and "team <name>" becomes
usable.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23745 a95241bf-73f2-0310-859d-f6bbb57e9c96
This should be in line with all uses of scheduler_enqueue_in_run_queue() and
it simplifies a few places where it is used.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23738 a95241bf-73f2-0310-859d-f6bbb57e9c96
enqueued into the run_queue again. Changed the workaround into a panic in the
scheduler so we notice when something else does the same.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23731 a95241bf-73f2-0310-859d-f6bbb57e9c96