recursive locks (Duh).
Disable cancellation around the cond_wait() call, since that's also a
cancellation point. Arguably, that would be better handled with
pthread_cleanup_*(), but stubbing those for libc is difficult, and the
current non-exception-based implementation of cleanup handlers is
probably no faster than disabling and reenabling cancellation.
Finally, it only happens in the slow path where the thread is going to
sleep anyway...
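For illustration, the disable/re-enable pattern around a condition wait looks
roughly like this (a minimal userland sketch of the idea, not the libpthread
source; the mutex, condvar and predicate are made up for the example):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int ready;			/* made-up predicate */

void
wait_without_cancellation(void)
{
	int ostate;

	pthread_mutex_lock(&m);
	/* Slow path: the thread is about to sleep anyway. */
	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &ostate);
	while (!ready)
		pthread_cond_wait(&cv, &m);	/* no longer acts as a cancellation point */
	pthread_setcancelstate(ostate, NULL);
	pthread_mutex_unlock(&m);
}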
quite a bit of time, make telnetd ignore it completely now. This results
in the :if=: entry in the default gettytab entry being honored instead of
being ignored. The -h option to telnetd will continue to suppress the
inclusion of :if=:
When disabling cancellation, clear the pt_cancel flag if it was set
and note the cancellation request with PT_FLAG_CS_PENDING. This avoids
a problem where a cancellation request entered but not acted upon before
pthread_setcancelstate(PTHREAD_CANCEL_DISABLE) is called would still be
acted upon before cancellation was re-enabled.
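Schematically, the disable path now behaves like this (a toy sketch, not the
libpthread source; pt_cancel and PT_FLAG_CS_PENDING are the names from above,
while the surrounding structure and the PT_FLAG_CS_DISABLED flag are only
illustrative):

/* Toy model of the idea only. */
#define PT_FLAG_CS_DISABLED	0x01	/* cancellation currently disabled */
#define PT_FLAG_CS_PENDING	0x02	/* request noted while disabled */

struct toy_thread {
	int	pt_flags;
	int	pt_cancel;	/* a cancellation has been delivered */
};

static void
toy_disable_cancellation(struct toy_thread *self)
{
	self->pt_flags |= PT_FLAG_CS_DISABLED;
	/*
	 * A request that arrived but was not yet acted upon is turned
	 * back into a pending one, so it only fires after re-enabling.
	 */
	if (self->pt_cancel) {
		self->pt_cancel = 0;
		self->pt_flags |= PT_FLAG_CS_PENDING;
	}
}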
The new code maintains two variables 'current_spl_level' and
'hardware_spl_level'. The variable hardware_spl_level reflects the actual
priority level from the hardware's point of view, and is always kept
synchronized with the hardware.
splraise() just increases current_spl_level. splx() sets
current_spl_level. If (and only if) hardware_spl_level and
current_spl_level are not the same, splx() synchronizes the interrupt
mask register and hardware_spl_level to current_spl_level.
In most cases, splraise() raises only current_spl_level and splx()
restores only current_spl_level.
When an interrupt occurs, hardware_spl_level and the interrupt mask
register are synchronized to current_spl_level.
In this implementation, while a higher-priority interrupt handler is
running, lower-priority interrupts never cause intr_dispatch() to run.
This avoids some race conditions.
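In outline, the scheme looks like this (a sketch only; set_hw_mask() is a
hypothetical stand-in for programming the interrupt mask register):

static volatile int current_spl_level;	/* level the kernel asked for */
static volatile int hardware_spl_level;	/* level the hardware is actually at */

static void set_hw_mask(int level);	/* hypothetical: write the mask register */

int
splraise(int level)
{
	int old = current_spl_level;

	if (level > current_spl_level)
		current_spl_level = level;	/* no hardware access here */
	return old;
}

void
splx(int level)
{
	current_spl_level = level;
	/* Touch the hardware only if it is actually out of date. */
	if (hardware_spl_level != current_spl_level) {
		set_hw_mask(current_spl_level);
		hardware_spl_level = current_spl_level;
	}
}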
cooperating with the callout code in working around the race
condition caused by the TCP code's use of the callout facility.
Instead of unconditionally releasing memory in tcp_close() and
SYN_CACHE_PUT(), check whether any of the related callout handlers
are about to be invoked (but have not yet done callout_ack()), and
if so, just mark the associated data structure (tcpcb or syn cache
entry) as "dead", and test for this (and release storage) in the
callout handler functions.
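Schematically the teardown side now does something like this (a sketch with
invented names, not the committed diff; "dead" stands in for the mark on the
tcpcb or syn cache entry):

#include <stdlib.h>

struct conn {
	int	dead;		/* teardown raced with a pending handler */
	/* ... protocol state ... */
};

/* Assumed to behave like callout_invoking(): true once an invocation of
 * the handler has started but callout_ack() has not yet been called. */
extern int conn_timer_invoking(struct conn *);

void
conn_close(struct conn *cp)
{
	if (conn_timer_invoking(cp)) {
		cp->dead = 1;	/* let the handler release the storage */
		return;
	}
	free(cp);		/* no handler in flight: release as before */
}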
to let users of the callout facility cooperate in working around the
race caused by the callout code lowering the interrupt priority level
when invoking callout handlers, which allows other code to run before
the callout handler gets to its spl*() call.
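The handler-side half of the cooperation, continuing the sketch above with the
same struct conn, looks roughly like this (invented names again; splsoftnet()
and splx() are the usual protection level for the TCP code):

extern int splsoftnet(void);
extern void splx(int);
extern void conn_timer_ack(struct conn *);	/* analogous to callout_ack() */

void
conn_timer_handler(void *arg)
{
	struct conn *cp = arg;
	int s;

	/*
	 * The callout code lowered the priority level before calling us,
	 * so conn_close() may already have run and marked cp dead.
	 */
	s = splsoftnet();
	conn_timer_ack(cp);
	if (cp->dead) {
		free(cp);	/* deferred release from conn_close() */
		splx(s);
		return;
	}
	/* ... normal timer processing ... */
	splx(s);
}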
This is to enable the workaround for the TCP code found in PR#20390 to be
applied.
This should be backed out once a more comprehensive fix can be put in
place.
a non-zero exit value to indicate a missing file or non-symlink),
instead of test -h $l && ltarg=`ls -ld $l | awk '{print $NF}'`
since the former is quicker and more concise.
and target (and rely upon a non-zero exit value to indicate a missing file),
instead of unconditionally installing the link.
SYMLINKS: use stat -qf '%Y' $l to read a symlink's target (and rely upon
a non-zero exit value to indicate a missing file or non-symlink),
instead of test -h $l && ls -ld $l | awk '{print $NF}', since
the former is quicker and more concise.
This resolves PR toolchain/16885 from David Laight.
revision 1.5
date: 2003/07/18 07:00:47; author: wlemb; state: Exp; lines: +38 -21
Don't ignore grotty's command line options if \X'tty: sgr ...' is
used to change the drawing scheme.
* src/devices/grotty/tty.cpp (bold_flag_option,
underline_flag_option, italic_flag_option, reverse_flag_option,
bold_underline_mode_option): New global variables.
(update_options): New function.
(tty_printer::special): Call update_options.
(main): Don't set xxx_flag but xxx_flag_option, then call
update_options.