Commit Graph

126 Commits

Author SHA1 Message Date
joerg
bfcb2008c8 Fix the stack base pointer for the initial thread on !HPPA.
AT_STACKBASE points to the start of the stack, which is the
upper limit on platforms where the stack grows down.
2012-03-08 16:33:45 +00:00
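A minimal sketch (invented helper and type names) of the convention the fix above relies on: on a platform where the stack grows down, a stack-base value of this kind names the high end of the stack, so the usable range has to be computed downward from it.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical illustration: when the reported stack base is the
 * *start* of a downward-growing stack, it is the upper limit, so
 * the usable range is [base - size, base).
 */
struct stack_bounds {
	uintptr_t lo;	/* lowest valid stack address */
	uintptr_t hi;	/* one past the last usable byte */
};

static struct stack_bounds
initial_stack_bounds(uintptr_t stackbase, size_t stacksize)
{
	struct stack_bounds b;

	b.hi = stackbase;
	b.lo = stackbase - stacksize;
	return b;
}
```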
joerg
a56440951d Separate pthread_t from thread stack. Drop additional alignment
restrictions on the thread stack. Remove remaining parts of stackid.
2012-03-02 18:06:05 +00:00
christos
0f48379f18 put back the pthread__dbg variable; this is set to non-zero by td_open()
when debugging, to avoid the mess of multiple td_open() calls.
2011-10-02 18:18:56 +00:00
joerg
67f518f496 Use __dead 2011-09-16 16:05:58 +00:00
joerg
928f301be9 Rework TLS initialisation:
- Update TCB for the initial thread in pthread__initthread, not
  pthread__init to get it valid as soon as possible.
- Don't overwrite the pt_tls field in pthread__initthread.
- Don't deallocate pt_tls in pthread__scrubthread. This worked more by
  chance than by design.
- Handle freeing the TLS area in pthread_create after removing the
  thread instance from the dead queue.
2011-03-30 00:03:26 +00:00
matt
71fdb89287 Use __lwp_gettcb_fast if present. 2011-03-12 07:46:29 +00:00
joerg
aad599979d Add TLS support infrastructure. For dynamic binaries, ld.elf_so exports
_rtld_tls_allocate and _rtld_tls_free. libpthread uses these functions to
set up the thread-private area of all new threads. ld.elf_so is
responsible for setting up the private area for the initial thread.
Similar functions are called from _libc_init for static binaries, using
dl_iterate_phdr to access the ELF Program Header.

Add test cases to exercise the different TLS storage models. Test cases
are compiled and installed on all platforms, but are skipped on
platforms not marked for TLS support.

This material is based upon work partially supported by
The NetBSD Foundation under a contract with Joerg Sonnenberger.

It is inspired by the TLS support in FreeBSD by Doug Rabson and the
clean ups of the DragonFly port of the original FreeBSD modifications.
2011-03-09 23:10:05 +00:00
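For the static-binary path mentioned above, here is a small standalone sketch of walking the program headers with dl_iterate_phdr() to find a PT_TLS segment. It uses the glibc-style ElfW() macro; the exact type names differ between systems.

```c
#include <link.h>
#include <stdio.h>

/* Print any PT_TLS segment of each loaded object. */
static int
find_tls(struct dl_phdr_info *info, size_t size, void *data)
{
	for (unsigned i = 0; i < info->dlpi_phnum; i++) {
		const ElfW(Phdr) *ph = &info->dlpi_phdr[i];

		if (ph->p_type == PT_TLS)
			printf("%s: TLS segment, %lu bytes, align %lu\n",
			    info->dlpi_name, (unsigned long)ph->p_memsz,
			    (unsigned long)ph->p_align);
	}
	return 0;	/* 0 means: keep iterating */
}

int
main(void)
{
	dl_iterate_phdr(find_tls, NULL);
	return 0;
}
```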
christos
736ce759b3 use pthread__stacksize since size has not been initialized yet. 2010-12-22 22:41:45 +00:00
christos
5cea6e6df8 only mprotect base if we moved it. 2010-12-22 22:19:46 +00:00
christos
66f16a1fa6 I've had this patch in my tree for a while and since it only improves
the situation, I decided to commit it. There is an inherent problem
with ASLR and the way the pthread library is using the thread stack.

Our pthread library places each thread's stack strategically so
that it can locate the pthread struct for each thread by masking
the stack pointer and looking just below the red zone it creates.
Unfortunately, with ASLR you get many random values for the initial
stack, and there are situations where the masked stack base ends up
below the base of the stack (this happens on x86 when, for example,
the stack base is 0x???02000 and your stack mask is
0xffe00000). To fix this, we detect the
pathological cases (this happens only in the main thread), allocate
more stack, and mprotect it appropriately. Then we stash the main
base and the main struct, so that when we look for the pthread
struct in pthread__id, we can special case the main thread.

Another way to work around the problem is to unlimit the stack
size, but the proper fix is to use TLS to find the thread structure
instead of playing games with the thread stacks.
2010-12-18 15:54:27 +00:00
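A self-contained sketch of the pathological case described above, with all addresses hypothetical: rounding the stack pointer down to a stack-size boundary lands below the real stack base whenever ASLR places that base just above such a boundary.

```c
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uintptr_t stackmask = 0xffe00000;	/* 2 MB stacks, mask from the example */
	uintptr_t stackbase = 0x7f502000;	/* hypothetical ASLR'd low end */
	uintptr_t sp = stackbase + 0x1000;	/* a frame near the bottom */

	uintptr_t masked = sp & stackmask;	/* where the library would look */

	printf("masked %#lx vs. stack base %#lx: %s\n",
	    (unsigned long)masked, (unsigned long)stackbase,
	    masked < stackbase ? "below the stack!" : "ok");
	return 0;
}
```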
tron
103af04b49 Don't use internal libc function __findenv(). 2010-11-14 22:25:23 +00:00
rmind
0dcf29f92e pthread_create: simplify error path slightly. 2010-07-08 15:13:35 +00:00
explorer
fc70b598c4 fix the pthread pt_lid in the fork callback function that runs in the child, instead of in a function that may be going away. KNFify 2010-03-25 01:15:00 +00:00
explorer
3f82e012db Correctly set pt_lid in the child, after a fork 2010-03-24 07:27:22 +00:00
christos
85ddadbfdc Don't look only at the first element in the deadqueue to find LWPs
to reuse, because if we lose the race with the kernel we are never going
to reuse any elements. Look through the whole list instead.
XXX: should be pulled up to 5.x
2009-10-03 23:49:50 +00:00
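A simplified sketch of the idea, with invented types standing in for libpthread's own: walk the entire dead queue and take the first thread whose LWP is really gone, rather than giving up when the head entry is still being torn down by the kernel.

```c
#include <stddef.h>
#include <sys/queue.h>

struct thread {
	int lwp_gone;			/* stand-in for the real "LWP exited" check */
	TAILQ_ENTRY(thread) t_deadq;
};
TAILQ_HEAD(deadqueue, thread);

static struct thread *
reuse_dead_thread(struct deadqueue *dq)
{
	struct thread *t;

	TAILQ_FOREACH(t, dq, t_deadq) {
		if (t->lwp_gone) {	/* kernel is done with this one */
			TAILQ_REMOVE(dq, t, t_deadq);
			return t;
		}
		/* lost the race on this entry; keep looking */
	}
	return NULL;			/* nothing reusable yet */
}
```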
joerg
cdb510a7bb Restore use of _lwp_makecontext, the AMD64 bug has been fixed. 2009-07-02 09:59:00 +00:00
joerg
35173b1fce Partially revert 1.110: Use makecontext again until the problems with
_lwp_makecontext are solved.
2009-06-25 13:38:43 +00:00
ad
61cac435e4 - Convert from makecontext() -> _lwp_makecontext().
- Rely on _lwp_makecontext() to set up the thread identity register.
  This is not currently done (a bug), nor does libpthread use the
  threadreg yet. I'm doing this so the code can be used by the
  person working on TLS to verify that their threadreg code is working.
2009-05-17 14:49:00 +00:00
drochner
f1c955a1b2 Fix the comparison function used by the red-black tree global thread list
implementation:
-don't return a difference, as this can overflow
-don't subtract typed pointers which don't belong to the
 same object; this gives undefined results

This fixes instabilities in programs which use more than a handful
of threads, e.g. spuriously failing pthread_join().
2009-04-01 10:13:24 +00:00
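A sketch of the safe shape for such a comparator: compare the addresses as unsigned integers and return -1/0/1, instead of returning a pointer difference.

```c
#include <stdint.h>

struct pthread_st;	/* opaque handle; the details don't matter here */

static int
thread_cmp(const struct pthread_st *a, const struct pthread_st *b)
{
	const uintptr_t ua = (uintptr_t)a;
	const uintptr_t ub = (uintptr_t)b;

	/*
	 * Never "return ua - ub": the difference can overflow the int
	 * result, and subtracting pointers into different objects is
	 * undefined behaviour in C.
	 */
	if (ua < ub)
		return -1;
	if (ua > ub)
		return 1;
	return 0;
}
```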
ad
7de9da978b Revert the _lwp_ctl change, which is causing problems. I did test this
locally; I guess not well enough.
2009-03-30 21:32:51 +00:00
ad
5c670ea686 - Make the threadreg code use _lwp_setprivate() instead of MD hooks.
XXX This must not be enabled by default because the LWP private mechanism
  is reserved for TLS. It is provided only as a test/demo.

  XXX Since ucontext_t does not contain the thread private variable, for a
  short time after threads are created their thread specific data is unset.
  If a signal arrives during that time we are screwed.

- No longer need pthread__osrev.

- Rearrange _lwp_ctl() calls slightly.
2009-03-29 09:30:05 +00:00
ad
beaa63b638 Disable diagnostic assertions by default and just return error codes like
other systems do. This allows poorly written applications to appear to work. If
you are developing pthread apps, please turn the assertions on manually by
setting the environment variable.
2008-10-08 10:03:28 +00:00
matt
c0038aadef Change some types to eliminate some lint warnings. 2008-08-02 16:02:26 +00:00
ad
0e006eeb6f Minor correction to previous. 2008-06-28 10:36:12 +00:00
ad
cbd43ffa55 Now that we have all the scheduling gunk, make these do something useful:
pthread_attr_get_np
pthread_attr_setschedparam
pthread_attr_getschedparam
pthread_attr_setschedpolicy
pthread_attr_getschedpolicy
2008-06-28 10:29:37 +00:00
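Since these are the standard POSIX attribute calls, a usage example is straightforward. The policy and priority below are arbitrary, and real-time policies typically require privileges.

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *
worker(void *arg)
{
	return arg;
}

int
main(void)
{
	pthread_attr_t attr;
	struct sched_param sp = { .sched_priority = 5 };	/* arbitrary */
	pthread_t t;
	int error;

	pthread_attr_init(&attr);
	/* Without EXPLICIT_SCHED, the policy/param below would be ignored
	 * in favour of attributes inherited from the creating thread. */
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_RR);
	pthread_attr_setschedparam(&attr, &sp);

	error = pthread_create(&t, &attr, worker, NULL);
	if (error != 0)
		fprintf(stderr, "pthread_create: %s\n", strerror(error));
	else
		pthread_join(t, NULL);
	pthread_attr_destroy(&attr);
	return 0;
}
```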
ad
39a9e71121 pthread_join: explicitly test for cancellation. 2008-06-25 11:06:34 +00:00
ad
2bcb8bf1c4 PR lib/38741 priority inversion in libpthread breaks apps that use
SCHED_FIFO threads

- Change condvar sync so that we never take the condvar's spinlock without
  first holding the caller-provided mutex. Previously, the spinlock was only
  taken without the mutex in an error path, but it was enough to trigger the
  problem described in the PR.

- Even with this change, applications calling pthread_cond_signal/broadcast
  without holding the interlocking mutex are still subject to the problem
  described in the PR. POSIX discourages this saying that it leads to
  undefined scheduling behaviour, which seems good enough for the time being.

- Elsewhere, use a hash of mutexes instead of per-object spinlocks to
  synchronize entry/exit from sleep queues.

- Simplify how sleep queues are maintained.
2008-05-25 17:05:28 +00:00
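The locking discipline the change leans on is the usual POSIX pattern: signal while holding the same mutex the waiters use, and re-check the predicate around the wait.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool ready;

void
produce(void)
{
	pthread_mutex_lock(&lock);
	ready = true;
	pthread_cond_signal(&cond);	/* signalled with the mutex held */
	pthread_mutex_unlock(&lock);
}

void
consume(void)
{
	pthread_mutex_lock(&lock);
	while (!ready)			/* re-check: wakeups can be spurious */
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}
```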
martin
ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad
783e2f6db5 Back out previous. It seems to expose another bug in libpthread/libc,
potentially errno being used before threading is up and running.
2008-03-22 14:19:27 +00:00
ad
159f554369 Move pthread__errno() into pthread_specific.c so it gets the "no stack
frame" treatment.
2008-03-21 21:35:43 +00:00
ad
eceac52f08 Complain if _lwp_ctl() fails. 2008-03-08 13:23:13 +00:00
christos
c6409540ef add missing static decls. 2008-01-08 20:56:08 +00:00
ad
622bbc505a - Use pthread__cancelled() in more places.
- pthread_join(): assert that pthread_cond_wait() returns zero.
2007-12-24 16:04:20 +00:00
ad
989565f81d - Fix pthread_rwlock_trywrlock() which was broken.
- Add new functions: pthread_mutex_held_np, mutex_owner_np, rwlock_held_np,
  rwlock_wrheld_np, rwlock_rdheld_np. These match the kernel's locking
  primitives and can be used when porting kernel code to userspace.

- Always create LWPs detached. Do join/exit sync mostly in userland. When
  looped on a dual core box this seems ~30% quicker than using lwp_wait().
  Reduce number of lock acquire/release ops during thread exit.
2007-12-24 14:46:28 +00:00
ad
5a5d5865cd Remove test of pthread__osrev that is no longer needed. 2007-12-11 03:21:30 +00:00
yamt
fc51c23a2d remove unnecessary assignments. 2007-12-04 16:08:28 +00:00
ad
b565a56cfb - On 64-bit platforms 1/2 the default tsd values were garbage. Fix it.
- The lwpctl block is now needed on uniprocessors, for pthread_curcpu_np().
2007-12-01 01:07:34 +00:00
ad
8077340e63 Remove the debuglog stuff. ktrace is more useful now. 2007-11-19 15:14:11 +00:00
drochner
095b25e7dd Add pthread_equal() to the libc stubs; this makes a lot of sense for
thread-safe libraries implementing their own locking functions.
Ride on yesterday's minor version bumps.
2007-11-14 19:28:23 +00:00
ad
66ac2ffaf2 Mutexes:
- Play scrooge again and chop more cycles off acquire/release.
- Spin while the lock holder is running on another CPU (adaptive mutexes).
- Do non-atomic release.

Threadreg:

- Add the necessary hooks to use a thread register.
- Add the code for i386, using %gs.
- Leave i386 code disabled until xen and COMPAT_NETBSD32 have the changes.
2007-11-13 17:20:08 +00:00
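A rough sketch of the adaptive-mutex idea only, not NetBSD's actual code; thr_running_on_cpu() and block_on() are invented stand-ins for the real owner-running check and sleep path.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct thr;					/* opaque thread handle */

extern bool thr_running_on_cpu(const struct thr *);	/* invented */
extern void block_on(void *lock);			/* invented */

struct amutex {
	_Atomic(struct thr *) owner;		/* NULL when the lock is free */
};

void
amutex_lock(struct amutex *m, struct thr *self)
{
	for (;;) {
		struct thr *o = NULL;

		/* Fast path: take a free lock with a single CAS. */
		if (atomic_compare_exchange_weak(&m->owner, &o, self))
			return;
		if (o == NULL || thr_running_on_cpu(o))
			continue;		/* owner is on-CPU: keep spinning */
		block_on(m);			/* owner is off-CPU: go to sleep */
	}
}
```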
ad
15e9cec117 For PR bin/37347:
- Override __libc_thr_init() instead of using our own constructor.
- Add pthread__getenv() and use it instead of getenv(). This is used before
  we are up and running, and unfortunately getenv() takes locks.

Other changes:

- Cache the spinlock vectors in pthread__st. Internal spinlock operations
  now take 1 function call instead of 3 (i386).
- Use pthread__self() internally, not pthread_self().
- Use __attribute__ ((visibility("hidden"))) in some places.
- Kill PTHREAD_MAIN_DEBUG.
2007-11-13 15:57:10 +00:00
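A minimal sketch of what a getenv() stand-in like pthread__getenv() might look like; the details are assumed, not NetBSD's code. Scanning environ directly takes no locks, so it is safe before the library is fully up.

```c
#include <string.h>

extern char **environ;

static const char *
env_lookup(const char *name)
{
	size_t len = strlen(name);

	/* Linear scan over "NAME=value" strings; no allocation, no locks. */
	for (char **ep = environ; *ep != NULL; ep++) {
		if (strncmp(*ep, name, len) == 0 && (*ep)[len] == '=')
			return *ep + len + 1;
	}
	return NULL;
}
```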
ad
f63239c2a0 Use _lwp_setname() to pass thread names to the kernel. 2007-11-07 00:55:22 +00:00
ad
f1b2c1c4c9 ... but preserve the linked list, for the debugger only. 2007-10-16 15:07:02 +00:00
ad
9583eeb248 Replace the global thread list with a red-black tree. From joerg@. 2007-10-16 13:41:18 +00:00
ad
4042b7d22a Put new threads on the tail of pthread__allqueue, for the debugger. 2007-09-11 18:08:10 +00:00
ad
f4fd6b797e - Get rid of self->pt_mutexhint and use pthread__mutex_owned() instead.
- Update some comments and fix minor bugs. Minor cosmetic changes.
- Replace some spinlocks with mutexes and rwlocks.
- Change the process private semaphores to use mutexes and condition
  variables instead of doing the synchronization directly. Spinlocks
  are no longer used by the semaphore code.
2007-09-08 22:49:50 +00:00
ad
8ccc6e060d - Don't take the mutex's spinlock (ptr_interlock) in pthread_cond_wait().
Instead, make the deferred wakeup list a per-thread array and pass down
  the lwpid_t's that way.

- In pthread_cond_wait(), take the mutex before dealing with early wakeup.
  In this way there should never be contention on the CV's spinlock if
  the app follows POSIX rules (there should only be contention on the
  user-provided mutex).

- Add a port of the kernel's rwlocks. The rwlock's spinlock is only taken if
  there is contention. This is enabled where atomic ops are available. Right
  now that is only i386 and amd64 because I don't have other hardware to
  test with. It's trivial to add stubs for other architectures as long as
  they have compare-and-swap. When we have proper atomic ops the old rwlock
  code can be removed.

- Add a new mutex implementation that's similar to the kernel's mutexes, but
  uses compare-and-swap to maintain the waiters list, so no spinlocks are
  involved. Same caveats apply as for the rwlocks.
2007-09-07 14:09:27 +00:00
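A sketch of the lock-free list maintenance the last point describes, with invented types: a blocking thread pushes itself onto the mutex's waiter list with compare-and-swap, so no spinlock ever guards the list.

```c
#include <stdatomic.h>

struct waiter {
	struct waiter *next;
};

struct cmutex {
	_Atomic(struct waiter *) waiters;	/* LIFO list of blocked threads */
};

static void
push_waiter(struct cmutex *m, struct waiter *w)
{
	struct waiter *head = atomic_load(&m->waiters);

	/* Retry until our CAS installs w as the new list head. */
	do {
		w->next = head;
	} while (!atomic_compare_exchange_weak(&m->waiters, &head, w));
}
```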
ad
0225b043d2 Acquire the correct lock in pthread_detach(). Spotted by Jan Kryl. 2007-08-23 19:13:23 +00:00
ad
40724da2ba pthread_suspend_np, pthread_resume_np, pthread_detach: return correct code
on error.
2007-08-17 14:28:31 +00:00
ad
d9adedd764 Trim fat off libpthread internal spinlock operations. Makes a measurable
improvement across the board.
2007-08-16 13:54:16 +00:00