Commit Graph

10 Commits

Author SHA1 Message Date
Augustin Cavalier
2c588b031f kernel: Properly separate and handle THREAD_BLOCK_TYPE_USER.
Consider this scenario:
 * A userland thread puts its ID into some structure so that it
   can be woken up later, sets its wait_status to initiate
   the wait, and then calls _user_block_thread.
 * A second thread finishes whatever task the first thread
   intended to wait for, reads the ID almost immediately
   after it was written, and calls _user_unblock_thread.
 * _user_unblock_thread was called so soon that the first
   thread is not yet blocked on the _user_block_thread block,
   but is instead blocked on e.g. the thread's main mutex.
 * The first thread's thread_block() call returns B_OK.
   Since, in this example, it was inside mutex_lock, it
   thinks that it now owns the mutex.
 * But it doesn't own the mutex, and so (until yesterday)
   all sorts of mayhem ensue, ending in a random crash, or
   (after yesterday) an assertion failure is tripped because
   the thread does not own the mutex it expected to.

The above scenario is not a hypothetical, but is in fact the
exact scenario behind the strange panics in #15211.

The solution is to only have _user_unblock_thread actually
unblock threads that were blocked by _user_block_thread,
so I've introduced a new BLOCK_TYPE to differentiate these.
While I'm at it, remove the BLOCK_TYPE_USER_BASE, which was
never used (and now never will be.) If we want to differentiate
different consumers of _user_block_thread for debugging
purposes, we should use the currently-unused "object"
argument to thread_block, instead of cluttering the
relatively-clean block type debugging code with special
types.

One final note: The race condition which was the cause of
this bug does not, in fact, imply a deadlock on the part
of the rw_lock here. The wait_status is protected by the
thread's mutex, which is acquired by both _user_block_thread
and _user_unblock_thread, and so if _user_unblock_thread
completes before _user_block_thread can initiate the block,
_user_block_thread will just see that wait_status is already
<= 0 and return immediately.
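
A minimal, self-contained sketch of the shape of the fix follows;
the type and field names here are hypothetical simplifications,
not the actual kernel code:

    // Hypothetical, simplified model of the fix -- not the real kernel code.
    #include <cstdint>

    enum block_type {
        BLOCK_TYPE_MUTEX,
        BLOCK_TYPE_USER     // set only while blocked in _user_block_thread
    };

    struct ThreadState {
        block_type  waitType;       // what the thread is currently blocked on
        int32_t     waitStatus;     // > 0 while a user-level wait is pending
    };

    // Unblock side (thread lock held): record the status, but only wake the
    // thread if it is actually blocked in _user_block_thread; a thread that
    // is still blocked on, say, a mutex is left alone.
    void
    user_unblock(ThreadState& thread, int32_t status)
    {
        if (thread.waitStatus > 0) {
            thread.waitStatus = status;
            if (thread.waitType == BLOCK_TYPE_USER) {
                // wake the thread up here
            }
        }
    }

    // Block side (thread lock held): if the wait was already satisfied
    // before we got here, return immediately instead of blocking.
    int32_t
    user_block(ThreadState& thread)
    {
        if (thread.waitStatus <= 0)
            return thread.waitStatus;

        thread.waitType = BLOCK_TYPE_USER;
        // actually block here, then:
        return thread.waitStatus;
    }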

Fixes #15211.
2019-08-05 22:31:02 -04:00
Augustin Cavalier
513403d420 Revert team and thread changes for COMPAT_MODE (hrev52010 & hrev52011).
This reverts commit c558f9c8fe.
This reverts commit 44f24718b1.
This reverts commit a69cb33030.
This reverts commit 951182620e.

There have been multiple reports that these changes break mounting NTFS partitions
(on all systems, see #14204), and shutting down (on certain systems, see #12405.)
Until they can be fixed, they are being backed out.
2018-06-14 22:25:06 -04:00
Jérôme Duval
951182620e kernel/x86_64: set up a new team in compatibility mode.
* in load_image_internal(), elf32_load_user_image checks whether the binary
format requires compatibility mode.
* we then set the THREAD_FLAGS_COMPAT_MODE flag and the address space size.
* the compatibility mode runtime_loader is hardcoded as x86/runtime_loader.
* if needed, the 64-bit flat_args structure is converted in-place to its
32-bit layout (see the sketch below).
* a 32-bit flat_args (the case where a 32-bit team execs a 64-bit binary)
isn't handled yet.
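
As a rough illustration of the in-place narrowing idea (a generic sketch
with hypothetical names; the real flat_args layout and conversion are more
involved):

    // Generic sketch: narrow a table of 64-bit user pointers to a packed
    // 32-bit layout in-place. Hypothetical code, not the actual flat_args
    // conversion.
    #include <cstdint>
    #include <cstring>

    void
    narrow_pointer_table(void* table, uint32_t count)
    {
        char* base = static_cast<char*>(table);

        // Walking forward is safe: each 4-byte destination slot lies at or
        // before the 8-byte source slot it is read from, and every source
        // value is read before an overlapping destination is written.
        for (uint32_t i = 0; i < count; i++) {
            uint64_t wide;
            memcpy(&wide, base + i * sizeof(uint64_t), sizeof(wide));
            uint32_t narrow = static_cast<uint32_t>(wide);
            memcpy(base + i * sizeof(uint32_t), &narrow, sizeof(narrow));
        }
    }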

Change-Id: Ia6a066bde8d1774d85de29b48dc500e27ae9668f
2018-06-12 17:56:55 +02:00
Pawel Dziepak
f06af2e2f8 system: Use B_PAGE_SIZE to define stack sizes
As korli suggested, use B_PAGE_SIZE for stack size related definitions,
which seems more natural for them and may also help if we ever support
an architecture with a page size different from 4 kB.
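
For example (illustrative macro name and value only, not an actual Haiku
definition):

    // Illustrative only -- not one of the real stack size macros.
    #include <OS.h>    // provides B_PAGE_SIZE on Haiku

    // Expressing the size in pages instead of a hard-coded byte count keeps
    // it meaningful on an architecture with a different page size:
    #define EXAMPLE_STACK_GUARD_SIZE    (4 * B_PAGE_SIZE)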
2013-09-16 23:23:29 +02:00
Pawel Dziepak
3b4269ecf5 arch: randomize initial user stack pointer
In-page randomization of the initial user stack pointer is not only part
of the ASLR implementation but also a performance improvement that helps
eliminate 64 kB-aligned data accesses.

The minimal user stack size is increased to 8 kB in order to ensure that,
regardless of the initial stack pointer value, there is still enough space
on the stack.
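
A minimal sketch of the in-page randomization (hypothetical function and
entropy source, not the kernel's actual code):

    // Sketch only: randomize the initial stack pointer within one page
    // while keeping 16-byte alignment for the ABI.
    #include <cstdint>
    #include <cstdlib>

    uint64_t
    randomize_stack_top(uint64_t stackTop, uint64_t pageSize)
    {
        uint64_t offset = static_cast<uint64_t>(rand()) % pageSize;
        return (stackTop - offset) & ~uint64_t(15);
    }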
2013-04-04 15:16:20 +02:00
Hamish Morrison
d1f280c805 Add support for pthread_attr_get/setguardsize()
* Added the aforementioned functions (usage sketch after this list).
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range
  only.
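
Usage sketch of the new functions (standard POSIX API; the worker thread
is just a placeholder):

    // Request and query a custom guard size for a new thread's stack.
    #include <pthread.h>
    #include <stdio.h>

    static void*
    worker(void* arg)
    {
        (void)arg;
        return NULL;
    }

    int
    main()
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);

        // Ask for a two-page guard region at the overflow end of the stack
        // (the implementation rounds the value up to whole pages).
        pthread_attr_setguardsize(&attr, 2 * 4096);

        size_t guard = 0;
        pthread_attr_getguardsize(&attr, &guard);
        printf("guard size: %zu\n", guard);

        pthread_t thread;
        pthread_create(&thread, &attr, worker, NULL);
        pthread_join(thread, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }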
2012-12-28 18:02:58 +00:00
Ingo Weinhold
24df65921b Merged signals-merge branch into trunk with the following changes:
* Reorganized the kernel locking related to threads and teams.
* We now discriminate correctly between process and thread signals. Signal
  handlers have been moved to teams. Fixes #5679.
* Implemented real-time signal support, including signal queuing, SA_SIGINFO
  support, sigqueue(), sigwaitinfo(), sigtimedwait(), waitid(), and the
  addition of the real-time signal range (usage example after this list).
  Closes #1935 and #2695.
* Gave SIGBUS a separate signal number. Fixes #6704.
* Implemented <time.h> clock and timer support, and fixed/completed alarm() and
  [set]itimer(). Closes #5682.
* Implemented support for thread cancellation. Closes #5686.
* Moved send_signal() from <signal.h> to <OS.h>. Fixes #7554.
* Lots of smaller, more or less related changes.
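
A small usage example of the newly supported real-time signal APIs
(standard POSIX usage; the payload value is arbitrary):

    // Queue a real-time signal carrying a payload to ourselves and receive
    // it synchronously with sigwaitinfo().
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main()
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGRTMIN);
        // Keep the signal blocked so it stays queued until we wait for it.
        sigprocmask(SIG_BLOCK, &set, NULL);

        union sigval value;
        value.sival_int = 42;
        sigqueue(getpid(), SIGRTMIN, value);

        siginfo_t info;
        if (sigwaitinfo(&set, &info) == SIGRTMIN)
            printf("got SIGRTMIN with payload %d\n", info.si_value.sival_int);
        return 0;
    }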


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42116 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-06-12 00:00:23 +00:00
Ingo Weinhold
232fd3bae3 Moved the wait type definitions to <thread_defs.h>. We're going to use
them in userland, too.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27301 a95241bf-73f2-0310-859d-f6bbb57e9c96
2008-09-03 14:48:47 +00:00
Ingo Weinhold
57f2b5a013 * Changed the meaning of the {KERNEL,USER}_STACK_SIZE macros to not
include the guard pages. Adjusted the kernel and boot loader code
  accordingly -- the guard pages size is added (kernel) or not removed
  (boot loader), respectively. The stack size passed to _kern_spawn_thread()
  is now the actually usable size, and it is no longer possible to specify
  a size smaller than or equal to the guard pages size (see the sketch
  after this list).
* vm_create_anonymous_area(): Precommit two pages maximum -- a stack with
  only one page usable size obviously doesn't need two pages.
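
A hypothetical sketch of the changed convention (not the actual allocation
code): the requested size is now fully usable, and the guard is added on
top when the area is created.

    // Hypothetical helper: the caller's requested stack size is the usable
    // size; the guard pages are added on top of it.
    #include <cstddef>

    const size_t kExampleGuardSize = 4 * 4096;    // example value only

    size_t
    stack_area_size(size_t usableStackSize)
    {
        // Previously the guard was carved out of the requested size; now it
        // is added, so the full requested size remains usable.
        return usableStackSize + kExampleGuardSize;
    }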


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26819 a95241bf-73f2-0310-859d-f6bbb57e9c96
2008-08-05 17:19:46 +00:00
Ingo Weinhold
6b202f4e3d * Introduced a new header directory, headers/private/system, which is
supposed to contain headers shared by the kernel and userland (mainly libroot).
* Moved quite a few private kernel headers to the new location. Split
  several kernel headers into a shared part and one that is still kernel
  private. Adjusted all affected Jamfiles and source in the standard x86
  build accordingly. The build for other architectures and for test code
  may be broken.
* Quite a bit of userland code still includes private kernel headers.
  Mostly those are <util/*> headers. The ones that aren't strictly
  kernel-only should be moved to some other place (maybe
  headers/private/shared/util).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25486 a95241bf-73f2-0310-859d-f6bbb57e9c96
2008-05-14 03:55:16 +00:00