utilizes the THREAD_FLAG_DONT_RESTART_SYSCALL (but only in SIGCONT
for now).
* resume_thread() is now using that flag to be compatible with BeOS.
* This fixes the Terminal hanging on close.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24045 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added team::flags. Currently only used for setting a flag when a team
has exec()ed.
* Some improvements of _user_setpgid():
- It failed incorrectly when the target process was a process group
leader. According to the standard it shall fail when the process is
a session leader. Moving a process group leader to another process
group is fine, even if that leaves the group leaderless.
- Fixed race conditions. We need to recheck the error conditions when
we hold the team spinlock. Otherwise the situation could change
while we allocated the new process group. This was one of the
reasons for bug #1799 -- after the shell fork()s, both parent and
child invoke setpgid() for the child.
- Fixed behavior for pid == pgid. It doesn't necessarily mean that a
new group has to be created.
- Fixed update of target process group orphaned state.
- Squashed TODO: setpgid() on a child is supposed to fail after the
child has exec()ed.
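A minimal sketch of the checks described above; all names here are illustrative
rather than the actual kernel structures, and the checks have to be repeated
once the team spinlock is held:

    // Illustrative only: the setpgid() preconditions discussed above.
    // They must be re-evaluated under the team spinlock, since the situation
    // can change while the new process group is being allocated.
    static status_t
    check_setpgid_preconditions(Team* target, pid_t groupID, Team* caller)
    {
        if (target->isSessionLeader)
            return EPERM;    // session leaders must not be moved
                             // (group leaders are fine)
        if (target != caller && (target->flags & TEAM_FLAG_EXEC_DONE) != 0)
            return EACCES;   // a child that has exec()ed may not be moved
        if (groupID == target->id && target->groupID == target->id)
            return B_OK;     // already leads its own group - nothing to create
        return B_OK;
    }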
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24041 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Right now, only already known loaded drivers will be monitored for changes;
their devices aren't republished, though, since that would cause a deadlock
in the node notification mechanism (listeners are called synchronously);
need to offload the event handling to another thread.
* On changes of (known) driver directories, the device manager will now print
some info to the syslog.
* Fixed republish_driver(), which I broke recently (it would skip every other
node), and moved it to the driver functions section of devfs.cpp.
* Implemented currently unused unpublish_driver() function that would have to
be called before reloading a driver.
* If a driver is in use when it's updated, we mark it as such, but we don't yet
make use of that info where we could.
* Minor cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24036 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Our flock::l_len was treated as inclusive, while it's actually exclusive (the
last byte locked is (l_start - 1 + l_len), not (l_start + l_len)); see the
conversion sketch after this list.
* F_UNLCK removes all locks of the calling process that are within the specified
region - existing locks might also be cut or divided.
* Apparently, a single team can lock the same region as often as it wants.
* advisory_locking is now using a DoublyLinkedList instead of its C counterpart.
* advisory_lock now has start + end fields instead of offset + len; it's
handier this way.
* This fixes bug #1791.
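The l_len conversion mentioned in the first item, as a small illustrative
snippet (OFF_MAX stands in for whatever "to end of file" value is actually
used):

    // Illustrative: turning struct flock's offset + length into the inclusive
    // start/end range now kept in advisory_lock.
    static void
    flock_to_range(const struct flock* fl, off_t& start, off_t& end)
    {
        start = fl->l_start;
        if (fl->l_len == 0)
            end = OFF_MAX;                      // lock to end of file
        else
            end = fl->l_start + fl->l_len - 1;  // last byte locked, inclusive
    }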
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24020 a95241bf-73f2-0310-859d-f6bbb57e9c96
name of the drivers.
* Allow driver::publish_devices() to return NULL to hint that it has no devices
to publish anymore (ie. existing devices will be unpublished in this case); a
sketch follows this list.
* republish_driver() now also calls load_driver() in case the driver is not
loaded.
* publish_device() and unpublish_node() now maintain the new
driver_entry::devices_published field, so we always know how many devices
a driver currently has published.
* Minor cleanup.
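A hedged sketch of what the publish_devices() change above allows a legacy
driver to do (the device path and the gHardwarePresent flag are made up for
illustration):

    // Returning NULL now tells devfs that there is nothing left to publish,
    // so existing device entries of this driver get unpublished.
    const char**
    publish_devices(void)
    {
        static const char* kDevices[] = { "misc/example_device", NULL };

        if (!gHardwarePresent)
            return NULL;    // no devices to publish anymore

        return kDevices;
    }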
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24016 a95241bf-73f2-0310-859d-f6bbb57e9c96
size of a reallocated block. If you had kernel heap leak checking on, this
could have caused the first four bytes of the next block to be overwritten
with the size of the reallocation of the previous block.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24011 a95241bf-73f2-0310-859d-f6bbb57e9c96
allocated on the stack, condition variable related structures would be
trashed, causing all kinds of problems. Fixes #1811 and #1812.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24009 a95241bf-73f2-0310-859d-f6bbb57e9c96
This eliminates the edge case where the grow thread would not be able to create
a new area because the memory needed for creating that area could not itself be
allocated.
As this case cannot happen anymore, it is also not possible to deadlock in
memalign. Therefore the timeout (which would only have prevented the deadlock
but wouldn't have solved the edge case anyway) has been removed too.
Add options to the "heap" debugger command to dump the dedicated grow heap and
to only print the current heap count.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23994 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implemented automatic syscall restarts (a rough sketch of the pattern follows
this list):
- A syscall can indicate that it has been interrupted and can be
restarted by setting a respective bit in thread::flags. It can
store parameters it wants to be preserved for the restart in
thread::syscall_restart::parameters. Another thread::flags bit
indicates whether it has been restarted.
- handle_signals() clears the restart flag, if the handled signal
has a handler function installed and SA_RESTART is not set. Another
thread flag (THREAD_FLAGS_DONT_RESTART_SYSCALL) can prevent syscalls
from being restarted, even if they could be (not used yet, but we
might want to use it in resume_thread(), so that we stay
behaviorally compatible with BeOS).
- The architecture specific syscall handler restarts the syscall, if
the restart flag is set. Implemented for x86 only.
- Added some support functions in the private <syscall_restart.h> to
simplify the syscall restart code in the syscalls.
- Adjusted all syscalls that can potentially be restarted accordingly.
- _user_ioctl() sets new thread flag THREAD_FLAGS_IOCTL_SYSCALL while
calling the underlying FS's/driver's hook, so that syscall restarts
can also be supported there.
* thread_at_kernel_exit() invokes handle_signals() in a loop now, as
long as the latter indicates that the thread shall be suspended, so
that after waking up signals received in the meantime will be handled
before the thread returns to userland. Adjusted handle_signals()
accordingly -- when encountering a suspending signal we don't check
for further signals.
* Fixed sigsuspend(): Suspending the thread and rescheduling doesn't
result in the correct behavior. Instead we employ a temporary
condition variable and interruptibly wait on it. The POSIX test
suite test passes, now.
* Made the switch_sem[_etc]() behavior on interruption consistent.
Depending on when the signal arrived (before the call or when already
waiting) the first semaphore would or wouldn't be released. Now we
consistently release it.
* Refactored _user_{read,write}[v]() syscalls. Use a common function for
either pair. The iovec version doesn't fail anymore, if anything could
be read/written at all. It also checks whether a complete vector
could be read/written, so that we won't skip data, if the underlying
FS/driver couldn't read/write more ATM.
* Some refactoring in the x86 syscall handler: The int 99 and sysenter
handlers use a common subroutine to avoid code duplication.
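The restart idiom referenced in the first item, roughly sketched; the parameter
storing and flag handling are simplified here, and the real syscalls use the
helpers from the private <syscall_restart.h> instead:

    // Illustrative only: a blocking syscall marking itself as restartable.
    status_t
    _user_example_wait(bigtime_t timeout)
    {
        struct thread* thread = thread_get_current_thread();

        status_t status = wait_for_something(timeout);
            // wait_for_something() is a placeholder for the actual operation

        if (status == B_INTERRUPTED
            && (thread->flags & THREAD_FLAGS_DONT_RESTART_SYSCALL) == 0) {
            // preserve what the restart needs and flag the syscall as
            // restartable; the arch syscall handler will re-invoke it
            *(bigtime_t*)thread->syscall_restart.parameters = timeout;
            atomic_or(&thread->flags, THREAD_FLAGS_RESTART_SYSCALL);
        }
        return status;
    }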
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23983 a95241bf-73f2-0310-859d-f6bbb57e9c96
Before starting to wait on a condition variable, check for pending
signals first if the call is interruptible.
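A minimal sketch of that check (has_signals_pending() and the flag name are
assumptions about the surrounding code):

    // Don't go to sleep at all if a signal is already pending and the caller
    // asked for an interruptible wait.
    if ((flags & B_CAN_INTERRUPT) != 0
        && has_signals_pending(thread_get_current_thread()))
        return B_INTERRUPTED;
    // ... otherwise queue the entry on the condition variable and wait ...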
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23980 a95241bf-73f2-0310-859d-f6bbb57e9c96
The return value of Inode::WaitForRequest() is status_t not bool. So the
method would always fail when it actually succeeded. This affected reads
from pipes which didn't have data. The bug was hidden since VFS code
mostly checks error codes only against < B_OK, so that such a read would
be treated as 0 byte read.
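The gist of the fix, illustrated (the exact call signature is assumed):

    // Buggy pattern: the status_t was implicitly used as a bool, so B_OK
    // (== 0) read as failure while negative error codes read as success.
    //     if (!WaitForRequest(request))
    //         return B_ERROR;

    // Fixed pattern: keep the status_t and compare it explicitly.
    status_t status = WaitForRequest(request);
    if (status != B_OK)
        return status;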
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23976 a95241bf-73f2-0310-859d-f6bbb57e9c96
total count of allocations and bytes.
* Also add a few more bin sizes (for 8, 24 and 48 bytes); it turns out that
allocations of 20-24 bytes are especially common. And as it only wastes a few
bytes per page this doesn't hurt at all.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23961 a95241bf-73f2-0310-859d-f6bbb57e9c96
heap leak check info would otherwise be overwritten for allocations that still
fit the 16 byte bin (i.e. allocations of 0-4 bytes).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23956 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Tracing of allocations, reallocations and frees
* Leak checking infrastructure to dump allocations
The leak checking code records the team and thread id when an allocation is
made as well as stores the originally requested size. It also adds the
"allocations" debugger command that can dump all current allocations (usually
a huge list) or filter by either a team or thread id. This way it's easily
possible to find leftover allocations of teams/threads that are no longer active.
Combined with the tracing support one might be able to track down the time and
reason of an allocation and possibly find the corresponding leak if it is one.
Note that kernel heap leak checking has to be enabled manually by setting the
KERNEL_HEAP_LEAK_CHECK define to 1.
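A hedged sketch of the per-allocation bookkeeping such leak checking records
(names and layout are illustrative):

    // Stored alongside each allocation when KERNEL_HEAP_LEAK_CHECK is enabled.
    struct heap_leak_check_info {
        size_t      size;       // originally requested size
        thread_id   thread;     // thread that made the allocation
        team_id     team;       // team that made the allocation
    };

    // Filled in on every allocation, e.g.:
    //   info->size = requestedSize;
    //   info->thread = thread_get_current_thread_id();
    //   info->team = team_get_current_team_id();
    // The "allocations" KDL command then walks all heap pages and filters
    // the recorded info by team or thread id.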
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23953 a95241bf-73f2-0310-859d-f6bbb57e9c96
and pages are now kept in lists as well. This allows returning free pages once
a bin does not need them anymore. Partially filled pages are kept in a sorted
linked list so that allocation will always happen on the fullest page - this
favours having full pages and makes it more likely that lightly used pages will
become completely empty so they can be returned. Generally this now goes more in the
direction of a slab allocator.
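A sketch of the "fullest page first" policy described above (structure and
field names are assumptions):

    // Keep a bin's partially filled pages sorted by free count, so that
    // allocations always come from the fullest page and lightly used pages
    // get the best chance to drain completely and be returned.
    static void
    insert_page_sorted(heap_page** list, heap_page* page)
    {
        heap_page** link = list;
        while (*link != NULL && (*link)->free_count < page->free_count)
            link = &(*link)->next;

        page->next = *link;
        *link = page;
    }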
The allocation logic has been extracted, so a heap is now simply attachable to
a region of memory. This allows for multiple heaps and for dynamic growing. In
case the allocator runs out of free pages, an asynchronous growing thread is
notified to create a new area and attach a new heap to it.
By default the kernel heap is now set to 16MB and grows by 8MB each time all
heaps run full.
This should solve quite a few issues, like certain bins just claiming all pages
so that even if there is free space nothing can be allocated. Also it obviously
does away with filling the heap page by page until it overgrows.
I think this is now a well performing and scalable allocator we can live with
for quite some time. It is well tested under emulation and real hardware and
performs as expected. If problems come up, there is an extensive sanity checker,
enabled via PARANOID_VALIDATION, that covers most aspects of the allocator. For
normal operation this is not necessary though and is therefore
disabled by default.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23939 a95241bf-73f2-0310-859d-f6bbb57e9c96
not use a single static variable to synchronize CPUs at two points. In an
environment where CPUs do not really run concurrently (in emulation or with
logical processors) it would be possible for CPUs to get trapped in the first
synchronization while another CPU might just do its thing and change the
sync variable again. These CPUs would then never leave the first loop as the
exit condition has already passed again. The key is to use two different sync
variables, like it is done in early kernel initialization. As I didn't manage
to trigger this code though, I am not sure whether this is going to work.
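The two-variable rendezvous looks roughly like this (an illustrative sketch,
not the actual code):

    // Two separate counters keep a fast CPU from racing ahead and changing
    // the state of the first barrier while a slow CPU is still spinning on it.
    static vint32 sSync1 = 0;
    static vint32 sSync2 = 0;

    static void
    rendezvous(int32 cpuCount)
    {
        atomic_add(&sSync1, 1);
        while (sSync1 < cpuCount)
            ;    // first barrier

        atomic_add(&sSync2, 1);
        while (sSync2 < cpuCount)
            ;    // second barrier uses its own variable
    }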
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23926 a95241bf-73f2-0310-859d-f6bbb57e9c96
using the int 99 syscall method. Otherwise it would remain set e.g.
after _kern_restore_signal_frame() and the next syscall would look like
one returning a 64 bit value.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23918 a95241bf-73f2-0310-859d-f6bbb57e9c96
function has the old behavior. When false, it just calls the scheduler
without any priority adjustment or other stuff.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23906 a95241bf-73f2-0310-859d-f6bbb57e9c96
Not sure if this is the right place, Ingo might want to review that one.
* This fixes unmounting sessions of a multi-session CD, ie. the BeOS CD (it currently panics
when trying to access a device that's not there anymore - for debugging only, of course :-)
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23894 a95241bf-73f2-0310-859d-f6bbb57e9c96
already knows this driver.
* This should also allow a driver in home/config/add-ons/... to overlay
a driver with the same name in system/add-ons/...
* This should also fix bug #1750.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23892 a95241bf-73f2-0310-859d-f6bbb57e9c96
This is not safe when already freed memory is overwritten. But since we also
store the next pointer of the freelist in there, overwriting would break the
freelist and cause a crash in that case. This gives a drastic performance
boost when freelists grow during use and especially when opening and closing
a lot of programs.
* Optimize filling the freed element with 0xdeadbeef by writing 4 bytes at a
time instead of using single byte writes. Works as all our bins have an
element size that is a multiple of four. Put a panic in there just in case
this assumption isn't met for some reason.
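A sketch of the optimized fill with the guarding panic mentioned above
(variable names are illustrative):

    // Fill a freed bin element with 0xdeadbeef four bytes at a time; all bin
    // sizes are a multiple of four, so no tail handling is needed.
    static void
    fill_freed_element(void* element, size_t elementSize)
    {
        if ((elementSize & 3) != 0)
            panic("heap: bin element size %lu is not a multiple of 4",
                (unsigned long)elementSize);

        uint32* dest = (uint32*)element;
        for (size_t i = 0; i < elementSize / sizeof(uint32); i++)
            dest[i] = 0xdeadbeef;
    }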
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23879 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Stumbled upon a possible bug while trying to understand the reuse of large
allocations. The "first" variable was always set to the current index at the
end of the loop, even if it was already set. This would have caused
the success condition to never be reached.
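The issue boils down to something like this (illustrative, not the exact
code):

    // Searching for a run of free pages large enough for the allocation.
    // Buggy version: "first" was reset to the current index on every
    // iteration, so a long enough run could never accumulate:
    //     first = i;
    // Fixed version: only remember the start of the current run once.
    if (first < 0)
        first = i;
    if (i - first + 1 >= pagesNeeded) {
        // found a sufficiently large run starting at "first"
    }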
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23866 a95241bf-73f2-0310-859d-f6bbb57e9c96
TraceOutput for output options instead.
* Added "traced" option --difftime. Instead of the absolute system time
it prints the difference time to the previously printed entry.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23864 a95241bf-73f2-0310-859d-f6bbb57e9c96
When specified it designates that the interrupt handler should not lock the
vector with a spinlock when executing the installed interrupt handlers. This
is necessary to allow the same interrupt vector to be handled in parallel on
different CPUs. And it is required for the CPU halt to work synchronously when
there is more than one AP CPU. Though the acquire_spinlock() should cause IPIs
to be processed, only this fixed the SMP_MSG_FLAG_SYNC problem for me.
Not locking is safe as long as it is guaranteed that no interrupt handler is
registered or removed while the interrupt handler is running. We can guarantee
this for the SMP interrupt handlers we install in arch_smp_init() as they are
never uninstalled. Probably this flag should be made private though.
Restored the SMP_MSG_FLAG_SYNC when entering the kernel debugger.
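The dispatch-side decision described above, sketched with illustrative names
for the flag and the vector structure:

    // Skip the per-vector spinlock for vectors marked as not needing it, so
    // the same vector can be handled on several CPUs in parallel.
    bool lock = (vector->flags & VECTOR_FLAG_NO_LOCK) == 0;
    if (lock)
        acquire_spinlock(&vector->vector_lock);

    // ... invoke the installed interrupt handlers ...

    if (lock)
        release_spinlock(&vector->vector_lock);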
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23838 a95241bf-73f2-0310-859d-f6bbb57e9c96
(must also compare to BSD; I've looked at their sources, but I might have
missed something).
* Added sys/file.h and the flock() system call (usage example after this list).
* common_fcntl() could forget to put back the file descriptor on some error
conditions (I guess we should introduce and use a DescriptorGetter class).
* Cleaned up fcntl.h, moved the BSD extensions S_IREAD and S_IWRITE to
sys/stat.h where they belong, and added the missing S_IEXEC to them.
* Added some more comments.
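For reference, userland usage of the newly added call follows the usual BSD
semantics (the file path is just an example):

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    int
    main()
    {
        int fd = open("/tmp/some-file", O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return 1;

        if (flock(fd, LOCK_EX) == 0) {   // blocks until we own the lock
            // ... exclusive access ...
            flock(fd, LOCK_UN);          // release again
        }

        close(fd);
        return 0;
    }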
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23836 a95241bf-73f2-0310-859d-f6bbb57e9c96
file position in case an offset was specified.
* Reverted r23828-r23830 in File.cpp: don't fix the symptoms but the cause
of the problem (hey, that has to be in the kernel, right? :))
* Cleanup of File.cpp, removed OpenBeOS namespace.
* Moved user_fd_kernel_ioctl() to the section where it belongs (that
function should be renamed, though).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23832 a95241bf-73f2-0310-859d-f6bbb57e9c96
to fix bug #1731.
* However, it turns out that depot destruction obviously doesn't work
correctly; at least we keep partial or full slabs around when we're
using them (which causes the code to panic).
* Therefore, I've now disabled depots completely, until I find the time
to really work on that code.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23825 a95241bf-73f2-0310-859d-f6bbb57e9c96
* While this is not a really good idea for a lock with supposedly little
contention, it'll fix bug #1731. I haven't tested it yet, but will
do so in a minute :-)
* I will need to rework the slab anyway so that it's possible to use it
as a replacement for our heap, and then I'll switch back to a benaphore
again.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23822 a95241bf-73f2-0310-859d-f6bbb57e9c96