/*
* Copyright 2010, Ingo Weinhold, ingo_weinhold@gmx.de.
* Distributed under the terms of the MIT License.
*/
#ifndef _SYSTEM_USER_MUTEX_DEFS_H
#define _SYSTEM_USER_MUTEX_DEFS_H
/* user_mutex: Refactor locking and unblocking mechanism.
 *
 * Suppose the following scenario:
 *
 * 1. Thread A holds a mutex.
 * 2. Thread B goes to acquire the mutex, and winds up waiting in the kernel.
 * 3. Thread A unlocks: it first unsets the LOCKED flag. As WAITING is set,
 *    it calls the kernel; but instead of processing this immediately, the
 *    thread is suspended for any reason (locks, reschedule, etc.).
 * 4. Thread B hits a timeout, or a signal. It then unblocks in the kernel,
 *    which causes the WAITING flag to be unset.
 * 5. Thread C goes to acquire the lock. It sets the LOCKED flag. It sees
 *    the WAITING flag is not set, so it returns at once, having
 *    successfully acquired the lock.
 * 6. Thread A, suspended back in step 3, resumes.
 *
 * Now we encounter the problem. Under the previous code, the following
 * would occur (made concrete in the sketch after this comment):
 *
 * 7. Thread A sees that no threads are waiting. It thus unsets the LOCKED
 *    flag and returns from the kernel. Now we have a mutex theoretically
 *    held by thread C, but which (illegally) has no LOCKED flag set!
 * 8. Some other thread tries to acquire the lock, and succeeds, for LOCKED
 *    is not set. We now have one lock owned by two separate threads.
 *    That's very bad!
 */
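/* To make steps 7 and 8 concrete, here is a minimal, hypothetical sketch of
 * the old kernel-side unblock path. have_waiting_threads() and
 * wake_one_waiter() are invented stand-ins for the kernel's real wait-queue
 * bookkeeping; atomic_and() is the one from SupportDefs.h. */
static bool have_waiting_threads(int32* mutex);	/* hypothetical */
static void wake_one_waiter(int32* mutex);		/* hypothetical */

static void
old_kernel_unblock_sketch(int32* mutex)
{
	if (have_waiting_threads(mutex)) {
		/* Ownership passes directly to the woken thread; LOCKED stays set. */
		wake_one_waiter(mutex);
	} else {
		/* No waiters: unset LOCKED (step 7). If another thread acquired the
		 * mutex in the meantime (steps 4-6), its LOCKED flag is wrongly
		 * erased here, and step 8 follows. */
		atomic_and(mutex, ~(int32)B_USER_MUTEX_LOCKED);
	}
}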
/* The solution, in this commit, is to (1) switch from using atomic_or() to
 * lock mutexes to using atomic_test_and_set(), and (2) mandate that
 * _kern_mutex_unblock() must be invoked with the mutex already unlocked.
 * A sketch of the resulting protocol follows this comment.
 */
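/* A minimal sketch of the refactored userland protocol, assuming the
 * private syscalls _kern_mutex_lock(mutex, name, flags, timeout) and
 * _kern_mutex_unblock(mutex, flags) and the atomic_*() functions from
 * SupportDefs.h; timeouts, error handling, and the DISABLED flag are
 * omitted for brevity. */
static void
mutex_lock_sketch(int32* mutex)
{
	/* (1) A true test-and-set: only the 0 -> LOCKED transition grants
	 * ownership; merely OR-ing LOCKED in no longer does. */
	if (atomic_test_and_set(mutex, B_USER_MUTEX_LOCKED, 0) == 0)
		return;

	/* Contended: the kernel sets WAITING, blocks us, and hands the mutex
	 * over (LOCKED already set) when we are unblocked. */
	_kern_mutex_lock(mutex, "mutex sketch", 0, B_INFINITE_TIMEOUT);
}

static void
mutex_unlock_sketch(int32* mutex)
{
	/* (2) Unlock first: LOCKED must already be unset by the time the kernel
	 * is invoked. */
	int32 oldValue = atomic_and(mutex, ~(int32)B_USER_MUTEX_LOCKED);
	if ((oldValue & B_USER_MUTEX_WAITING) != 0)
		_kern_mutex_unblock(mutex, 0);
}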
/* Trying to solve the problem with (2) but without (1) produces other
 * complications and would overall be more complex. For instance, all
 * existing userland code expected that it would set LOCKED, but then check
 * LOCKED|WAITING. If _kern_mutex_unlock does not unset LOCKED, then
 * whichever thread sets LOCKED when it was previously unset is now the
 * mutex's undisputed owner, and if it fails to notice this, it would
 * deadlock.
 */
/* That could have been solved with extra checks at all lock points, but
 * then locks would not be acquired "fairly": it would be possible for any
 * thread to race with an unlocking thread and acquire the lock before the
 * kernel had a chance to wake anyone up. Given how fast atomics are, and
 * how slow invoking the kernel is by comparison, that would probably make
 * our mutexes extremely "unfair." This would not violate the POSIX
 * specification, but it does seem like a dangerous choice to make in
 * implementing these APIs.
 */
/* Linux's "futex" API, to which our API bears some similarities, requires
 * at least one atomic test-and-set for an uncontended acquisition, and
 * several more atomic operations for even the simplest case of contended
 * acquisition. If that works for them, it should work for us, too.
 */
/* Fixes #18436.
 *
 * Change-Id: Ib8c28acf04ce03234fe738e41aa0969ca1917540
 * Reviewed-on: https://review.haiku-os.org/c/haiku/+/6537
 * Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
 * Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
 * Reviewed-by: waddlesplash <waddlesplash@gmail.com>
 */
// user mutex specific flags passed to _kern_mutex_unblock()
#define B_USER_MUTEX_UNBLOCK_ALL	0x80000000
	// All threads currently waiting on the mutex will be unblocked. The mutex
	// state will be locked.
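// A hedged usage sketch of the flag above: waking every waiter at once,
// e.g. from a hypothetical teardown path. The sMutexValue word is invented
// for illustration. Per the mandate above, the word is unlocked before the
// kernel is invoked.
static int32 sMutexValue;

static void
mutex_wake_all_sketch(void)
{
	int32 oldValue = atomic_and(&sMutexValue, ~(int32)B_USER_MUTEX_LOCKED);
	if ((oldValue & B_USER_MUTEX_WAITING) != 0) {
		// All waiting threads are woken; the kernel leaves the mutex in the
		// locked state, as documented above.
		_kern_mutex_unblock(&sMutexValue, B_USER_MUTEX_UNBLOCK_ALL);
	}
}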
// mutex value flags
#define B_USER_MUTEX_LOCKED 0x01
#define B_USER_MUTEX_WAITING 0x02
#define B_USER_MUTEX_DISABLED 0x04
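// For quick reference, the states the scenario above steps through:
//   0                    free
//   LOCKED               held, no threads blocked in the kernel
//   LOCKED | WAITING     held, with threads blocked in the kernel
// (B_USER_MUTEX_DISABLED is orthogonal to the sketches above and is not
// covered by them.)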
#endif /* _SYSTEM_USER_MUTEX_DEFS_H */