Commit Graph

537 Commits

yamt
48a1e4cf46 pthread_curcpu_np: map LWPCTL_CPU_NONE to 0 so that this works in the case
of _lwp_ctl failure.
2008-01-07 11:51:43 +00:00
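
A rough sketch of the mapping described above, assuming the struct lwpctl layout and LWPCTL_CPU_NONE constant from <sys/lwpctl.h>; curcpu_sketch() is illustrative, not the actual libpthread code:

	#include <stddef.h>
	#include <sys/lwpctl.h>

	/* If _lwp_ctl(2) failed there is no per-LWP lwpctl block, and the
	 * kernel's "no CPU recorded" sentinel must not leak to callers. */
	static unsigned int
	curcpu_sketch(volatile struct lwpctl *lc)
	{
		int cpu = (lc != NULL) ? lc->lc_curcpu : LWPCTL_CPU_NONE;

		return (cpu == LWPCTL_CPU_NONE) ? 0 : (unsigned int)cpu;
	}
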
ad
b43749fde1 machine/lock.h, not sys/lock.h 2008-01-05 01:37:35 +00:00
ad
622bbc505a - Use pthread__cancelled() in more places.
- pthread_join(): assert that pthread_cond_wait() returns zero.
2007-12-24 16:04:20 +00:00
ad
989565f81d - Fix pthread_rwlock_trywrlock(), which was broken.
- Add new functions: pthread_mutex_held_np, mutex_owner_np, rwlock_held_np,
  rwlock_wrheld_np, rwlock_rdheld_np. These match the kernel's locking
  primitives and can be used when porting kernel code to userspace.

- Always create LWPs detached. Do join/exit sync mostly in userland. When
  looped on a dual-core box this seems ~30% quicker than using lwp_wait().
  Reduce the number of lock acquire/release ops during thread exit.
2007-12-24 14:46:28 +00:00
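
The new *_np predicates mirror kernel-style assertions such as KASSERT(mutex_owned(...)); a minimal usage sketch, assuming the pthread.h prototypes (cache_lock and cache_insert are made-up names):

	#include <assert.h>
	#include <pthread.h>

	static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

	static void
	cache_insert(void)
	{
		/* Kernel code would write KASSERT(mutex_owned(&cache_lock)). */
		assert(pthread_mutex_held_np(&cache_lock));
		/* ... touch data protected by cache_lock ... */
	}
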
ad
8f05f9cc26 Update. 2007-12-24 14:30:09 +00:00
yamt
45cbede9e5 Document the following functions:
	pthread_attr_getname_np
	pthread_attr_setname_np
	pthread_getname_np
	pthread_setname_np
2007-12-14 21:51:21 +00:00
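
A short usage sketch for the newly documented calls, assuming NetBSD's three-argument pthread_setname_np(thread, name, arg), where the name may be a printf-like format consuming arg; label_thread() is illustrative:

	#include <pthread.h>

	static void
	label_thread(pthread_t t)
	{
		char buf[32];	/* ample for NetBSD's short thread names */

		(void)pthread_setname_np(t, "worker", NULL);
		(void)pthread_getname_np(t, buf, sizeof(buf));
	}
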
ad
5a5d5865cd Remove test of pthread__osrev that is no longer needed. 2007-12-11 03:21:30 +00:00
ad
37132d5d2f Back out previous now that libc/libpthread are initialized first. 2007-12-07 20:36:52 +00:00
ad
a9718d7115 pthread__mutex_lock_slow: avoid entering the waiters list if a race to
acquire the mutex is lost. Removing the current thread from the waiters
list requires at least one syscall.
2007-12-07 01:38:38 +00:00
yamt
f078e05288 pthread__mutex_wakeup: ignore ESRCH from _lwp_unpark.
Once we clear pt_sleeponq, the target thread can proceed further
and even call pthread_exit().
2007-12-04 16:56:11 +00:00
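
A sketch of the tolerant error handling described above; unpark_sketch() is illustrative, and only the _lwp_unpark(2) call is real:

	#include <sys/types.h>
	#include <errno.h>
	#include <lwp.h>

	static int
	unpark_sketch(lwpid_t target)
	{
		/* The target may already have exited once pt_sleeponq is
		 * clear, so ESRCH is expected and harmless. */
		if (_lwp_unpark(target, NULL) == -1 && errno != ESRCH)
			return errno;		/* genuine failure */
		return 0;
	}
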
yamt
fc51c23a2d remove unnecessary assignments. 2007-12-04 16:08:28 +00:00
wiz
a6e62b1ef7 Use more markup. New sentence, new line. 2007-12-01 19:03:26 +00:00
ad
64ebe1397e Hack around ld.so initializing pthread users before libpthread/libc. 2007-12-01 01:19:31 +00:00
ad
b565a56cfb - On 64-bit platforms, half of the default TSD values were garbage. Fix it.
- The lwpctl block is now needed on uniprocessors, for pthread_curcpu_np().
2007-12-01 01:07:34 +00:00
ad
ae87f94d1d Bump libc/libpthread minor for thr_curcpu()/pthread_curcpu_np(). 2007-11-27 21:06:41 +00:00
ad
4084ca7f3f Add thr_curcpu(), pthread_curcpu_np(). 2007-11-27 20:58:26 +00:00
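
A minimal usage sketch for the new interface, assuming pthread_curcpu_np() returns an unsigned CPU index; worker() is illustrative:

	#include <pthread.h>
	#include <stdio.h>

	static void *
	worker(void *arg)
	{
		/* Advisory only: the thread may migrate right after the call. */
		printf("running on CPU %u\n", pthread_curcpu_np());
		return NULL;
	}
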
ad
d5e9b90716 Fix a warning with _LP64. 2007-11-27 20:55:03 +00:00
ad
a4c99db9fe Sync with reality, and note that programs must link against the dynamic
libpthread in order to remain compatible with future releases of NetBSD.
2007-11-19 15:53:20 +00:00
ad
8077340e63 Remove the debuglog stuff. ktrace is more useful now. 2007-11-19 15:14:11 +00:00
ad
a448c4f214 int -> ssize_t in a couple of places. 2007-11-19 15:12:18 +00:00
drochner
095b25e7dd Add pthread_equal() to libc stubs; this makes a lot of sense for
thread-safe libraries implementing their own locking functions.
Ride on yesterday's minor version bumps.
2007-11-14 19:28:23 +00:00
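
The kind of code the stub helps: a thread-safe library that tracks lock ownership itself can now call pthread_equal() even in programs linked only against libc. A sketch with made-up names:

	#include <pthread.h>

	static pthread_t lock_owner;	/* set by the library's lock routine */

	static int
	lock_owned_by_me(void)
	{
		return pthread_equal(lock_owner, pthread_self());
	}
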
ad
4be57c5368 Don't try to block if there are already waiters; it doesn't make sense. 2007-11-14 17:20:57 +00:00
ad
b919eb8116 Crank libpthread to 0.8. It now uses _lwp_ctl(), and it's handy to keep
0.7 hanging around for old kernels.
2007-11-13 17:22:51 +00:00
ad
66ac2ffaf2 Mutexes:
- Play scrooge again and chop more cycles off acquire/release.
- Spin while the lock holder is running on another CPU (adaptive mutexes).
- Do non-atomic release.

Threadreg:

- Add the necessary hooks to use a thread register.
- Add the code for i386, using %gs.
- Leave i386 code disabled until xen and COMPAT_NETBSD32 have the changes.
2007-11-13 17:20:08 +00:00
ad
15e9cec117 For PR bin/37347:
- Override __libc_thr_init() instead of using our own constructor.
- Add pthread__getenv() and use instead of getenv(). This is used before
  we are up and running, and unfortunately getenv() takes locks.

Other changes:

- Cache the spinlock vectors in pthread__st. Internal spinlock operations
  now take 1 function call instead of 3 (i386).
- Use pthread__self() internally, not pthread_self().
- Use __attribute__ ((visibility("hidden"))) in some places.
- Kill PTHREAD_MAIN_DEBUG.
2007-11-13 15:57:10 +00:00
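
A plausible shape for a lock-free environment lookup of the kind pthread__getenv() provides (the real implementation may differ); getenv_sketch() is illustrative:

	#include <string.h>

	extern char **environ;

	static char *
	getenv_sketch(const char *name)
	{
		size_t namelen = strlen(name);
		char **ep;

		/* Scan environ directly: no allocation, no locks. */
		for (ep = environ; *ep != NULL; ep++) {
			if (strncmp(*ep, name, namelen) == 0 &&
			    (*ep)[namelen] == '=')
				return *ep + namelen + 1;
		}
		return NULL;
	}
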
ad
9202b10ca9 Cosmetic change. 2007-11-13 01:21:32 +00:00
ad
f63239c2a0 Use _lwp_setname() to pass thread names to the kernel. 2007-11-07 00:55:22 +00:00
ad
84a6749ef2 Note that libpthread_dbg needs to be checked after making changes to
libpthread.
2007-10-16 15:21:54 +00:00
ad
f1b2c1c4c9 ... but preserve the linked list, for the debugger only. 2007-10-16 15:07:02 +00:00
ad
9583eeb248 Replace the global thread list with a red-black tree. From joerg@. 2007-10-16 13:41:18 +00:00
jnemeth
66687b0cb5 SSP doesn't like alloca... 2007-10-13 20:36:43 +00:00
rmind
25e540085b Add cancellation stubs in libpthread for POSIX message queues and
asynchronous I/O.

OK by <ad>.
2007-10-09 18:18:33 +00:00
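
With the stubs in place, a thread blocked in a message-queue call can be cancelled like any other cancellation point. A sketch using standard POSIX interfaces; consumer() is illustrative:

	#include <mqueue.h>
	#include <pthread.h>

	static void *
	consumer(void *arg)
	{
		mqd_t q = *(mqd_t *)arg;
		char buf[128];

		/* Deferred cancellation (the default) can now act here
		 * while the thread is blocked in mq_receive(). */
		for (;;)
			(void)mq_receive(q, buf, sizeof(buf), NULL);
		return NULL;
	}
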
skrll
c6deb42c81 Provide PTHREAD__ASM_RASOPS for alpha.
The gcc-generated lock-try RAS is broken, as the store needs to be the last
instruction.
2007-10-08 16:04:43 +00:00
ad
507f1ca139 Compile pthread_getspecific / pthread_setspecific with -fomit-frame-pointer
-falign-functions=32, since these two really get hammered on. Making them
faster needs a threadreg or TLS, unless there is a way to tell gcc that a
library-local variable (pthread__threadmask) does not need to be PIC.
2007-10-04 21:08:35 +00:00
ad
5059087834 Drop PTHREAD__NSPINS back from 1000 to 64. Setting the waiters bits and
preparing to sleep on a mutex are slow in relative terms, so this allows
us to recover from short lock holds without blocking, while not wasting
too much time on longer holds.
2007-10-04 21:04:32 +00:00
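
The tunable controls a spin-then-block pattern along these lines; try_acquire() and the sched_yield() fallback are stand-ins, since the library uses its own atomics and parks the thread in the kernel:

	#include <sched.h>

	#define PTHREAD__NSPINS	64

	static int
	try_acquire(volatile unsigned int *lockword)
	{
		return __sync_bool_compare_and_swap(lockword, 0, 1);
	}

	static void
	spin_then_block(volatile unsigned int *lockword)
	{
		int i;

		for (;;) {
			/* Short holds: recover without a syscall. */
			for (i = 0; i < PTHREAD__NSPINS; i++) {
				if (try_acquire(lockword))
					return;
			}
			/* Longer holds: stop burning CPU. */
			sched_yield();
		}
	}
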
ad
a4b475cd22 Fix a thinko. 2007-10-04 01:46:49 +00:00
ad
af8ed8ad89 Kill PTHREAD_SPIN_DEBUG - it's not much use with 1:1. 2007-09-24 13:56:42 +00:00
skrll
d32ed98975 Resurrect the function pointers for lock operations and allow each
architecture to provide asm versions of the RAS operations.

We do this because relying on the compiler to get the RAS right is not
sensible. (It gets alpha wrong, and hppa is suboptimal.)

Provide asm RAS ops for hppa.

(A slightly different version) reviewed by Andrew Doran.
2007-09-24 12:19:39 +00:00
ad
598943b712 Adjust previous for clarity. 2007-09-21 21:28:11 +00:00
ad
92f04f1f80 pthread__mutex_unlock_slow: always catch up with deferred wakeups, because
pthread_mutex_unlock clears the per-mutex indicator.
2007-09-21 21:09:25 +00:00
ad
cb10afa05c pthread_rwlock_unlock: return EPERM if the caller tries to release a
rwlock that is write held, but not by the caller.
2007-09-21 16:24:45 +00:00
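
An example of the behaviour described above; interloper() runs in a thread that does not hold the write lock, and the names are made up:

	#include <assert.h>
	#include <errno.h>
	#include <pthread.h>

	static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

	static void *
	interloper(void *arg)
	{
		int error = pthread_rwlock_unlock(&rw);

		assert(error == EPERM);	/* write-held, but not by us */
		return NULL;
	}
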
ad
9212747aea pthread_rwlock_unlock
- Allow callers to try and release an unheld rwlock. Just return EPERM
  as mandated by IEEE Std 1003.1.
- Use pthread__atomic_swap_ptr() to set in the new lock value. At this
  point the lock word can't have changed.

pthread__rwlock_wrlock, pthread__rwlock_rdlock:

- Mask out the waiter bits in the lock word before checking to see if
  the current thread is about to lock against itself.
2007-09-21 16:21:54 +00:00
skrll
f9577d3ada Mostly fix the restartable atomic sequences by reversing the sense of the
lock check and test for return value.

At least alpha looks broken to me, and some are sub-optimal.

"Looks good to me" from ad.
2007-09-17 13:25:59 +00:00
tnn
28d03ee594 sem_post(): pthread__self() is no longer used here. 2007-09-14 09:15:41 +00:00
ad
20e3392edc Add a per-mutex deferred wakeup flag so that threads doing something like
the following do not wake other threads early:

	pthread_mutex_lock(&mutex);
	pthread_cond_broadcast(&cond);
	foo = malloc(100);		/* takes libc mutexes */
	pthread_mutex_unlock(&mutex);
2007-09-13 23:51:47 +00:00
ad
b0efccf4cd Make the new mutexes faster:
- Eliminate mutexattr_private and just set a bit in ptm_owner if the mutex
  is recursive. This forces the slow path to be taken for recursive mutexes.
  Overload an unused field in pthread_mutex_t to record whether or not it's
  an errorcheck mutex.
- Streamline pthread_mutex_lock / pthread_mutex_unlock a bit more. As a
  side effect, this makes it possible to have assembly stubs for them.
2007-09-11 18:11:29 +00:00
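
The owner-bit trick reads roughly like this; the bit value and helper names are illustrative, and libpthread's actual ptm_owner encoding may differ:

	#include <stdint.h>

	#define MUTEX_RECURSIVE_BIT	((uintptr_t)1)

	static void *
	tag_recursive(void *owner)
	{
		return (void *)((uintptr_t)owner | MUTEX_RECURSIVE_BIT);
	}

	static int
	is_recursive(const void *ptm_owner)
	{
		/* Any tag bit makes the fast path bail out into the
		 * slow path, which handles recursion and error checks. */
		return ((uintptr_t)ptm_owner & MUTEX_RECURSIVE_BIT) != 0;
	}
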
ad
4042b7d22a Put new threads on the tail of pthread__allqueue, for the debugger. 2007-09-11 18:08:10 +00:00
ad
7db40473e2 Fix a dodgy bit of assembly. 2007-09-11 16:07:15 +00:00
ad
ee3459ca7b Fix broken pthread_mutex_trylock(). 2007-09-11 11:30:15 +00:00
ad
baebee83d5 Fix inverted test after merge of nick-csl-alignment. 2007-09-11 10:27:44 +00:00