the option when a pipeline is created that controls the way the exit
status of the pipeline is calculated. Previously it was the state of
the option when the exit status of the pipeline was collected.
This makes no difference at all for foreground pipelines (there is
no way to change the option between starting and completing the
pipeline) but it does for asynchronous (background) pipelines.
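For illustration, a sketch (assuming the option in question is the pipefail-style
option described further below; the point is the timing, not the name):
	set -o pipefail
	false | true &		# the option is set when the pipeline is created
	set +o pipefail
	wait $!
	echo $?			# 1 - computed per the option as it was at creation,
				# not as it is when the status is collected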
This was always the right way to implement it - it was originally
done the other way as I could not find any other shell that implemented it
this way - they all seemed to do it our previous way, and I could
not see a good reason to be the sole different shell.
However, now I know that ksh93 works as we will now work, and I
am told that if the option is added to the FreeBSD shell (apparently
the code exists, uncommitted) it will be the same.
Save more characters of command in non-interactive jobs, in case of
core dumps and similar (16 effective chars was a few too few).
Arrange for number to increase if command buffer size increases.
Add a paragraph (briefer than previously posted to mailing lists)
to explain that there is no guarantee that the results of a command
substitution will be available before all commands started by the
cmdsub have completed.
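For example (a sketch of the kind of script the new paragraph is warning about):
	v=$(echo result; sleep 10 &)
	echo "$v"	# "result" may not appear until the sleep finishes, as the
			# background job can hold the cmdsub output open
Redirecting the background command's output (eg: sleep 10 >/dev/null &) is the
usual way to avoid the wait.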
Include the original proposed text (much longer) as *roff comments, so
it will at least be available to those who browse the man page sources.
While here, clean up the existing text about command substitutions to
make it a little more accurate (and to advise against using the `` form).
Deal with the new shell internal exit reason EXEXIT in the case of
a shell which has vfork()'d. It takes a peculiar set of circumstances
to get into a situation where this is ever relevant, but it can be
done. See the PR for details.
we had incorrect usage of setstackmark()/popstackmark()
There was an ancient idiom (imported from CSRG in 1993) where code
can do:
	setstackmark(&smark);
	loop until whatever condition {
		/* do lots of code */
		popstackmark(&smark);
	}
	popstackmark(&smark);
The 1st (inner) popstackmark() resets the stack, conserving memory.
The 2nd one is needed just in case the "whatever condition" was never
true, and the first one was never executed.
This is (was) safe as all popstackmark() did was reset the stack.
That could be done over and over again with no harm.
That is, until 2000 when a fix from FreeBSD for another problem was
imported. That connected all the stack marks as a list (so they can be
located). That caused the problem, as the idiom was not changed: now
there is this list of marks, and popstackmark() was removing an entry.
It rarely (never?) caused any problems as the idiom was rarely used
(the shell used to do loops like above, mostly, without the inner
popstackmark()). Further, the stack mark list is only ever used when
a memory block is realloc'd.
That is, until last weekend - with the recent set of changes.
Part of that copied code from FreeBSD introduced the idiom above
into more functions - functions used much more, and with a greater
possibility of stack marks being set on blocks that are realloc'd
and so cause the problem. In the FreeBSD code, they changed the idiom,
and always do a setstackmark() immediately after the inner popstackmark().
But not for reasons related to a list of stack marks, as in the
intervening period, FreeBSD deleted that, but for another reason.
We do not have their issue, and I did not believe that their
updated idiom was needed (I did some analysis of exactly this issue -
just missed the important part!), and just continued using the old one.
Hence Patrick's core dump....
The solution used here is to split popstackmark() into 2 halves,
popstackmark() continues to do what it has (recently) done,
but is now implemented as a call of (a new func) rststackmark()
which does all the original work of popstackmark - but not removing
the entry from the stack mark list (which remains in popstackmark()).
Then in the idiom above, the inner popstackmark() turns into a call of
rststackmark() so the stack is reset, but the stack mark list is
unchanged. Tail recursion elimination makes this essentially free.
Import a whole set of tree evaluation enhancements from FreeBSD.
With these, before forking, the shell predicts (often) when all it will
have to do after forking (in the parent) is wait for the child and then
exit with the status from the child, and in such a case simply does not
fork, but rather allows the child to take over the parent's role.
This turns out to handle the particular test case from PR bin/48875 in
such a way that it works as hoped, rather than as it did (the delay there
was caused by an extra copy of the shell hanging around waiting for the
background child to complete ... and keeping the command substitution
stdout open, so the "real" parent had to wait in case more output appeared).
As part of doing this, redirection processing for compound commands gets
moved out of evalsubshell() and into a new evalredir(), which allows us
to properly handle errors occurring while performing those redirects,
and not mishandle (as in simply forget) fd's which had been moved out
of the way temporarily.
evaltree() has its degree of recursion reduced by making it loop to
handle the subsequent operation: that is instead of (for any binop
like ';' '&&' (etc)) where it used to
	evaltree(node->left);
	evaltree(node->right);
	return;
it now does (kind of)
	next = node;
	while ((node = next) != NULL) {
		next = NULL;
		if (node is a binary op) {
			evaltree(node->left);
			if appropriate	/* if && test for success, etc */
				next = node->right;
			continue;
		}
		/* similar for loops, etc */
	}
which can be a good saving, as while the left side (now) tends to be
(usually) a simple (or simpleish) command, the right side can be many
commands (in a command sequence like a; b; c; d; ... the node at the
top of the tree will now have "a" as its left node, and the tree for
b; c; d; ... as its right node - until now everything was evaluated
recursively so it made no difference, and the tree was constructed
the other way).
if/while/... statements are done similarly, recurse to evaluate the
condition, then if the (or one of the) body parts is to be evaluated,
set next to that, and loop (previously it recursed).
There is more to do in this area (particularly in the way that case
statements are processed - we can avoid recursion there as well) but
that can wait for another day.
While doing all of this we keep much better track of when the shell is
just going to exit once the current tree is evaluated (with a new
predicate at_eof() to tell us that we have, for sure, reached the end
of the input stream, that is, this shell will, for certain, not be reading
more command input) and use that info to avoid unneeded forks. For that
we also need another new predicate (have_traps()) to determine if there
are any caught traps which might occur - if there are, we need to remain
to (potentially) handle them, so these optimisations will not occur (to
make the issue in PR 48875 appear again, run the same code, but with a
trap set to execute some code when a signal (or EXIT) occurs - note that
the trap must be set in the appropriate level of sub-shell to have this
effect, any caught traps are cleared in a subshell whenever one is created).
There is still work to be done to handle traps properly, whatever
weirdness they do (some of which is related to some of this.)
These changes do not need man page updates, but 48875 does - an update
to sh.1 will be forthcoming once it is decided what it should say...
Once again, all the heavy lifting for this set of changes comes directly
(with thanks) from the FreeBSD shell.
XXX pullup-8 (but not very soon)
Revert the changes that were made 19 May 2016 (principally eval.c 1.125)
and the bug fixes in subsequent days (eval.c 1.126 and 1.127) and also
update some newer code that was added more recently which acted in
accordance with those changes (make that code be as it would have been
if the changes now being reverted had never been made).
While the changes made did solve the problem, in a sense, they were
never correct (see the PR for some discussion) and it had always been
intended that they be reverted. However, in practical sh code, no
issues were reported - until just recently - so nothing was done,
until now...
After this commit, the validate_fn_redirects test case of the sh ATF
test t_redir will fail. In particular, the subtest of that test
case which is described in the source (of the test) as:
This one is the real test for PR bin/48875
will fail.
Alternative changes, not to "fix" the problem in the PR, but to
often avoid it will be coming very soon - after which that ATF
test will succeed again.
XXX pullup-8
previous rev) the two values (node name, and node number) were
arbitrarily printed in different formats and orders (depending
upon my mood at the time I guess...) The new macros will standardise
that usage (in the debug output) once some use of them actually begins.
When the macros were added, I arbitrarily copied the format of one
use I was looking at at that instant (the one which inspired the change),
but after gazing at DEBUG mode output over the intervening time, I
have concluded that I did not pick the easiest to read/follow format.
So, even before they are used, change the style... Also, conform
to standard PRIxxxx macro style by omitting the leading '%'.
NFC (since they aren't used at all, anywhere, yet, not even the
possibility of anything changing!)
This generates nodenames.h which is a file that used to begin
#ifdef DEBUG
(line 1) and end with
#endif
(last line) with no intervening (matching) #else ... ie: for DEBUG use only.
That led to situations where non-debug code which would like to make use
of the info provided (when DEBUG was enabled) needed to add #ifdef DEBUG
at the point of use.
Avoid that by providing new macros that are always defined (DEBUG or not,
so now we have a #else) which allow code to be written to make use of
the extra DEBUG info, if it is available, or not, if not.
While here, add double-include protection on the generated .h file
(just being cautious - nothing is ever going to cause it to get
included anywhere twice - or it shouldn't) and add the traditional
comments on the #else and #endif stuff (which is also really useless
as no-one is really expected to ever read the generated file). Never mind.
Nothing yet (elsewhere in the sh source) uses the new macros, so there's
even less chance of this changing anything than there would otherwise be.
Fix "command not found" handling so that the error message
goes to stderr (after any redirections are applied).
More importantly, in
foo > /tmp/junk
/tmp/junk should be created, before any attempt is made
to execute (the assumed non-existing) "foo".
All this was always true for any (not found) command
containing a / in its name:
foo/bar >/tmp/junk 2>>/tmp/errs
would have created /tmp/junk, then complained (in /tmp/errs)
about foo/bar not being found. Now that happens for ordinary
commands as well.
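For example (the exact wording of the message may vary):
	foo > /tmp/junk 2> /tmp/errs
	# /tmp/junk is now created (empty), and the "not found"
	# complaint about foo lands in /tmp/errs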
The fix (which I found when I saw differences between our
code and FreeBSD's, where, for the benefit of PR 42184,
this has been fixed, sometime in the past 9 years) is
frighteningly simple. Simply do not short circuit execution
(or print any error) when the initial lookup fails to
find the command - it will fail anyway when we actually
try running it. The cost is a (seemingly unnecessary,
except that it really is) fork in this case.
This is what I had been planning, but I expected it would
be much more difficult than it turned out....
XXX pullup-8
1. Make command -pv (and -pV) work, which is not as easy as the PR
suggests it might be (the "check and cause error" was there because
it did not work, not in order to prevent it from working).
2. Stop -v and -V being both used (that makes no sense).
3. Stop the "type" builtin inheriting the args (-pvV) that "command" has
(which it did, as when -v -or -V is used with command, it and type are
implemented using the same code).
4. make "command -v word" DTRT for sh keywords (was treating them as an error).
5. Require at least one arg for "command -[vV]" or "type" else usage & error.
Strictly this should also apply to "command" and "command -p" (no -v)
but that's handled elsewhere, so perhaps some other time. Perhaps
"command -v" (and -V) should be limited to 1 command name (where "type"
can have many) as in the POSIX definitions, but I don't think that matters.
6. With "command -V alias", (or "type alias" which is the same thing),
(but not "command -v alias") alter the output format, so we get
ll is an alias for: ls -al
instead of the old
ll is an alias for
ls -al
(and note there was a space, for some reason, after "for")
That is, unless the alias value contains any \n characters, in which
case (something approximating) the old multi-line format is retained.
Also note: that if code wants to parse/use the value of an alias, it
should be using the output of "alias name", not command or type.
Note that none of the above affects "command [-p] cmd" (no -v or -V options)
only "command -[vV]" and "type".
Note also that the changes to eval.[ch] are merely to make syspath()
visible in exec.c rather than static in eval.c
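To illustrate items 1, 4 and 6 above (the path shown is illustrative):
	$ alias ll='ls -al'
	$ command -V ll
	ll is an alias for: ls -al
	$ command -v while
	while
	$ command -pv ls
	/bin/ls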
Attempt to correctly deal with \ (both when it is a literal,
in appropriate cases, and when it appears as CTLESC when it was
detected as a quoting character during parsing).
In a pattern, in sh, no quoted character can ever be anything other
than a literal character. This is quite different than regular
expressions, and even different than other uses of glob matching,
where shell quoting is not an issue.
In something like
ls ?\*.c
the ? is a meta-character, the * is a literal (it was quoted). This
is nothing new, sh has handled that properly for ever.
But the same happens with
VAR='?\*.c'
and
ls $VAR
which has not always been handled correctly. Of course, in
ls "$VAR"
nothing in VAR is a meta-character (the entire expansion is quoted)
so even the '\' must match literally (or more accurately, no matching
happens - VAR simply contains an "unusual" filename). But if it had
been
ls *"$VAR"
then we would be looking for filenames that end with the literal 5
characters that make up $VAR.
The same kinds of things are required of matching patterns in case
statements, and sub-strings with the % and # operators in variable
expansions.
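For example, in a case statement (a sketch of the intended behaviour):
	pat='?\*.c'
	case 'a*.c' in $pat) echo match;; esac	# match: ? is magic, \* is a literal *
	case 'ab.c' in $pat) echo match;; esac	# no match: 'b' is not a literal *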
While here, the final remnant of the ancient !! pattern matching
hack has been removed (the code that actually implemented it was
long gone, but one small piece remained, not doing any real harm,
but potentially wasting time - if someone gave a pattern which would
once have invoked that hack.)
This is more or less the same patch as provided in the PR
(just 11 years later, so changed a bit) by woods@...
Since there is no known way to actually cause the reported crash,
we may never know if this change actually fixes anything. But
even if it doesn't it certainly cannot hurt.
There is a potential race which could possibly explain the issue
(see commentary in the PR) which is not easy to avoid - if that is
the actual cause, this should provide a defence, if not really a fix.
possibilities that we do not currently handle all that well.
This mostly means (for now) making sure that quoted pattern
magic characters (as well as quoted sh syntax magic chars)
are properly marked, so they remain known as being quoted,
and do not turn into pattern magic. Also, make sure that an
unquoted \ in a pattern always quotes whatever comes next
(which, unlike in regular expressions, includes inside []
matches).
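For example (a sketch; the \] is a literal ] inside the bracket expression):
	case ']' in [a\]b]) echo matched;; esac	# the class is {a, ], b}, so ] matches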
Mostly use number() (no longer implemented using atoi()) when an
unsigned integer is required, but use strtoXXX() when a conversion
is wanted, without the possibility of error (like setting OPTIND
and RANDOM). Always init OPTIND to 1 when sh starts (overriding
anything in environ.)
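For example (illustrative):
	$ env OPTIND=7 sh -c 'echo $OPTIND'
	1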
that is longer than we can handle the same way we treat an unknown
class name (as a valid char class which contains nothing, so never
matches). Previously a "too long" class name invalidated the
class, so [:very-long-name:] would match any of '[' ':' 'v' ...
(note: "very-long-name" is not long enough to trigger this, but you
get the idea!)
However, the name itself has a restricted syntax ([[:***:]] is not a
character class, it is a match for one of a '[' ':' or '*', followed by
a ']') which we did not implement - check the syntax of the name before
treating it as a character class (but we do add '_' to alphanumerics
as legal class name characters).
expansion, case patterns, etc) do not force '[' to be a member of
every class.
Before this fix, try:
case [ in [[:alpha:]]) echo Huh\?;; esac
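After the fix, for further illustration of the rules above (a sketch):
	case ':]' in [[:***:]]) echo "not a class: one of [ : * then ]";; esac
	case x in [[:no_such_class:]]) ;; *) echo "unknown class matches nothing";; esac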
XXX pullup-8 (Perhaps -7 as well, though that shell version has
more pressing bugs than this one.) This bug is not in -6 as
that has no charclass support.
itself, or some other function which is still active.
This was a long known bug (fixed ages ago in the FreeBSD sh) which
hadn't been fixed as in practice, the situation that causes the
problem simply doesn't arise .. ASAN found it in the sh dotcmd
tests which do have this odd "feature" in the way they are written
(but where it never caused a problem, as the tests are so simple
that no mem is ever allocated between when the old version of the
function was deleted, and when it finished executing, so its code
all remained intact, despite having been freed.)
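The situation in question looks like this (a sketch):
	f() {
		unset -f f	# delete (free) the function while it is executing
		echo "still running the old body"
	}
	f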
The fix is taken from the FreeBSD sh.
XXX -- pullup-8 (after a while to ensure no other problems arise).
The current implementation of operations - + * / % could cause Undefined
Behavior and in narrow cases (INT64_MIN / -1 and INT64_MIN % -1) SIGFPE
and a crash, dumping core.
Detected with MKSANITIZER enabled for the Undefined Behavior variation:
# eval expr '4611686018427387904 + 4611686018427387904'
/public/src.git/bin/expr/expr.y:315:12: runtime error: signed integer overflow: 4611686018427387904 + 4611686018427387904 cannot be represented in type 'long'
All bin/t_expr ATF tests pass now in a sanitized userland.
Sponsored by <The NetBSD Foundation>
UBSan can detect that, when switching a login to root, there is an unportable
left shift operation:
$ su -
Password:
/public/src.git/bin/ksh/eval.c:598:13: runtime error: left shift of 1073741824 by 1 places cannot be represented in type 'int'
#
Sponsored by <The NetBSD Foundation>
ksh also does some strange things with it, like put it in argument lists.
No functional change intended.
PR bin/53237 ksh: remove register keyword by Nia Alarie
leading whitespace in the value of var (because strtoimax() does)
but did not allow trailing whitespace. The effect is that some
cases where $(( ${var:-0} )) would work do not work without the $
expansion.
Fix that - allow trailing whitespace. However, continue to insist
upon at least one digit (a non-null var that contains nothing but
whitespace is still an error).
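For example (with this fix):
	var='  3  '
	echo $(( var + 1 ))		# 4 - the trailing spaces are now accepted
	echo $(( ${var:-0} + 1 ))	# 4 - always worked, via the $ expansion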
Note: posix is not helpful here, it simply requires that the variable
contain "a value that forms a valid integer constant" (with an optional
+ or - sign).
Don't synerr on
${var-anything
more}
The newline in the middle of the var expansion is permitted.
Bug reported by Martijn Dekker from his modernish tests.
XXX pullup-8
and every option to ulimit built-in. The show-or-set text is already
supplied *both* before and after the list. Pedantically repeating it
for each option just adds a lot of visual clutter that gets in the way
of actually using this fragment of the manual page as a quick
reference.
parameters. Use more .Ic and .Ar when defining syntax.
The manual is still rather inconsistent e.g. when referring to
parameters where it randomly uses both $0 and 0 or $@ and @ - but I'm
not shaving that yak at least for now.
- HAVE_GCC=5 is now the default (vs. HAVE_GCC=53 we've been using for
GCC 5.4 and GCC 5.5.)
- remove some more GCC 4.8 code. we don't support GCC 4 here.
- adjust set lists to gcc=5 from gcc=53.
add some basic HAVE_GCC=6 handling (totally unused so far.)
This removes a clash with well-known libc function tsearch(3) from POSIX.
This allows ksh to be built with MSan.
The new name might not be perfect, but long term ksh should be switched to
the libc version.
Sponsored by <The NetBSD Foundation>
This removes a clash with well-known libc function tdelete(3) from POSIX.
This allows ksh to be built with MSan.
The new name might not be perfect, but long term ksh should be switched to
the libc version.
Sponsored by <The NetBSD Foundation>
LINENO ... this is what resulted (with thanks for the grammar lessons,
and sundry references provided!)
No date (Dd) bump - there is no change of substance here, just (hopefully)
a clearer way of saying the same thing.
rawcpu, of type int, is declared inside main() and can be passed
uninitialized to setpinfo().
The setpinfo() function contains a check on the value of rawcpu:
	if (!rawcpu)
		pi[i].pcpu /= 1.0 - exp(ki[i].p_swtime * log_ccpu);
rawcpu is set to 1 with the command line argument "-C".
-C Change the way the CPU percentage is calculated by using a
"raw" CPU calculation that ignores "resident" time (this
normally has no effect).
Bug reproducible with an invocation: "ps u". It hides with "ps uC".
Initialize rawcpu by default to 0, before the getopt(3) machinery.
Detected with MSan running on NetBSD/amd64.
Sponsored by <The NetBSD Foundation>
of uninit'd field also fix a couple more (still harmless) related
technical C usage bugs.
Explaining why these issues were harmless would take too long to include here.
For example, given $(( 08 + 1 )) (or similar) instead of reporting
"expecting end of expression" - the generic error for parse failed,
which happened as this was parsed as $(( 0 8 + 1 )) because the 8
could not be a part of an octal constant, and that expr makes no sense -
instead say "unexpected '8' (out of range) in numeric constant: 08"
which makes the cause of the error more obvious.
NFC for valid expressions, just the error message (and the way the
error is detected).
some other install media, mini-roots, etc.) It is unlikely that
such a shell will be used for much script debugging (and the old -x
still exists of course) and it adds a little bloat, so, zap...
The ancient unused (unrelated) xioctl() function is gone as well
(from all shells).
output to the stderr which existed when the -X option was (last) enabled.
It also enables tracing by enabling -x (and when reset, +X, also resets
the 'x' flag (+x)). Note that it is still -x/+x which actually
enables/disables the trace output. Hence "apparent variant" - what -X
actually does (aside from setting -x) is just to lock the trace output,
rather than having it follow wherever stderr is later redirected.
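A sketch of the effect:
	set -X
	exec 2>/tmp/late-errs
	echo hello	# the "+ echo hello" trace still goes to the stderr
			# that existed when -X was enabled, not /tmp/late-errs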
to I32 P64 systems - keep nextc first, as that's used in macros,
and nleft next, as that's used (and both are updated) in the same macro,
which is used frequently, this increases the chance they're in the
same cache line (unchanged from before). Beyond that it matters less,
so just shuffle a bit to avoid internal padding when pointers are 64 bits.
Note that there are just 3 of these structs (currently), even if there was
to be a memory saving (there probably won't be, trailing padding will eat it)
it would be of the order of 12 or 24 bytes total, so all this really
just panders to my sense of rightness....
Note to anyone who might be tempted, please don't update the struct
initializers to use newer C forms - eventually sh is planned to become
a host tool, and a separable package, so it wants to remain able to be
compiled using older (though at least ansi) compilers that implement only
older C variants.
output includes a single quote (') then see if using double-quotes
to quote it is reasonable (if no chars that are magic in " also appear).
If so, and if the string is not entirely the ' character, then
use " quoting. This avoids some ugly looking results (occasionally).
Also, fix a bug introduced about 20 months ago where null strings
in xtrace output are dropped, instead of made explicit ('').
To observe this, before you get the fix: set -x; echo '' (or similar.)
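With both changes, something like (output is a sketch):
	$ set -x
	$ echo "isn't" ''
	+ echo "isn't" ''
	isn't
shows the " quoting of a string containing ' as well as the explicit '' for
the null string.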
Move a comment from the wrong place to the right place.
the same order that option flags with a similar property are sorted.
This corresponds with the change made to the sort order of the short
names made in the previous update (1.4).
Right now, this change makes no difference at all, as there are no
long option names that differ only in char case (yet.)
Correct a (relatively harmless) use after free in prompt expansion
processing [detected by asan.]
Relatively harmless because, while incorrect, the way the data is (was)
used more or less guaranteed that the buffer contents would be
unaltered until well after they are (were) no longer wanted (this
is the expanded prompt string; it is just output (or copied into
libedit internal storage) and forgotten.)
This should make no visible difference to anyone (not using asan or
similar.)
XXX pullup -8
-p var: set var to identifier, from arg list, or PID if no job args)
of the job for which status is returned (becomes $? after wait.)
Note: var is unset if the status returned from wait came from wait
itself rather than from some job exiting (so it is now possible to
tell whether 127 means "no such job" or "job did exit(127)", and
whether $? > 128 means "wait was interrupted" or "job was killed
by a signal or did exit(>128)". ($? is too limited to allow
indicating whether the job died with a signal, or exited with a
status such that it looks like it did...)
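A sketch of the usage:
	sleep 2 &
	wait -p jid "$!"
	echo "status=$? job=$jid"	# jid is the identifier as given ($! here)
	wait -p jid			# nothing left to wait for: the status comes
	echo "${jid-unset}"		# from wait itself, so jid is now unset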
via <termios.h> (and document them.) Bump libc minor number for them.
Arrange for "struct winsize" to become visible in <termios.h>
Fix stty(1) so that "cols" is reported as the arg to set number of columns,
and "columns" is the alias, rather than the other way around, as "cols" is
what has been added to POSIX.
This is to conform with updates to be included in 1003.1 issue 8
(whenever that gets published) currently available at:
http://austingroupbugs.net/view.php?id=1053 (see note 3863)
http://austingroupbugs.net/view.php?id=1151 (see note 3856)
process group (-g), the process leader pid (-p) ($! if the job was &'d)
and the job identifier (-j) (the %n that refers to the job) in addition to
(default) the list of all pids in the job (which it has always done).
No change to the (single) "job" arg, which is a specifier of the job:
the process leader pid, or one of the % forms, and defaults to %% (aka %+).
(This is all now documented in sh(1))
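For example (assuming the builtin being extended here is jobid; pids are
illustrative):
	$ sleep 60 &
	$ jobid -j %%
	%1
	$ jobid -p %1		# the process leader pid (same as $!)
	1234
	$ jobid -g %1		# the process group
	1234
	$ jobid %1		# the default: every pid in the job
	1234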
Also document the jobs command properly (no change to the command, just
document what it actually is.)
And while here, a whole new section in sh(1) "Job Control". It probably
needs better wording, but this is (perhaps) better than the nothing that
was there before.
Don't delete jobs from the jobs table merely because they finished,
if they are not the job we are waiting upon. (bin/52640 part 1)
In a sub-shell environment, don't allow wait to find jobs from the
parent shell that had already exited (before the sub-shell was
created) and return status for them as if they are our children.
(bin/52640 part 2)
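A sketch of the part 2 case:
	sleep 1 &
	wait "$!"; echo $?	# the job's real exit status
	( wait "$!" ); echo $?	# the (long finished) job was never a child of the
				# sub-shell, so its old status is no longer reported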
Don't have the "jobs" command also be an implicit "wait" command
in non-interactive shells. (bin/52641)
Use WCONTINUED (when it exists) so we can report on stopped jobs that
"mysteriously" move back to running state without the user issuing
a "bg" command (eg: kill -CONT <pid>) Previously they would keep
being reported as stopped until they exited.
When a job is detected as having changed status just as we're
issuing a "jobs" command (i.e.: the change occurred between the last
prompt and the jobs command being entered) don't report it twice,
once from the status change, and then again in the jobs command
output. Once is enough (keep the jobs output, suppress the other).
Apply some sanity to the way jobs_invalid is processed - ignore it
in getjob() instead of just ignoring it most of the time there, and
instead always check it before calling getjob() in situations where
we can handle only children of the current shell. This allows the
(totally broken) save/clear/restore of jobs_invalid in jobscmd() to
be done away with (previously an error while in the clear state would
have left jobs_invalid incorrectly cleared - shouldn't have mattered
since jobs_invalid => subshell => error causes exit, but better to be safe).
Add/improve the DEBUG mode tracing.
XXX pullup -8
code duplication, and reducing the size of /bin/sh by a trivial amount.
NFCI.
This is being done now as there are two other changes forthcoming, both
of which benefit - one would result in even more code duplication without
this, the other might need to alter how this is done, and doing it after this
means there's just one place to change (if required).
actually work (but just happen to, today, and in some cases, even
that trusts to some luck.)
It has been recently pointed out to me that the man page (ie: this
file) doesn't give any real guidance to what is really acceptable,
and what is not.
The CAVEATS section does note that the grammar is ambiguous, but then
just says that test(1) implements what POSIX requires, and refers
readers to the relevant section of the POSIX standard for more details.
That is probably asking too much of the average reader...
So, add some extra information in the CAVEATS with what is defined to work,
and what should be avoided. Not all of the POSIX rules are here, but this
might hopefully help script authors avoid some of the pitfalls.
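The flavour of that advice, as a sketch:
	test "$a"		# fine: one argument, tests for non-empty
	test "$a" = "$b"	# fine: three arguments, unambiguous
	test "$a" -a "$b"	# avoid: use  test "$a" && test "$b"  instead
	test \( "$a" \)		# avoid: parentheses are unneeded and unreliable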
1. A serious bug introduced 3 1/2 months ago (approx) (rev 1.116) which
broke all but the simple cases of ~ expansions is fixed (amazingly,
given the magnitude of this problem, no-one noticed!)
2. An ancient bug (probably from when ~ expansion was first added in 1994, and
certainly is in NetBSD-6 vintage shells) where ${UnSeT:-~} (and similar)
does not expand the ~ is fixed (note that ${UnSeT:-~/} does expand;
this should give a clue to the cause of the problem).
3. A fix/change to make the effects of ~ expansions on ${UnSeT:=whatever}
identical to those in UnSeT=whatever. In particular, with HOME=/foo
${UnSeT:=~:~} now assigns, and expands to, /foo:/foo rather than ~:~
just as VAR=~:~ assigns /foo:/foo to VAR. Note this is even after the
previous fix (ie: appending a '/' would not change the results here.)
It is hard to call this one a bug fix for certain (though I believe it is)
as many other shells also produce different results for the ${V:=...}
expansions than they do for V=... (though not all the same as we did).
POSIX is not clear about this, expanding ~ after : in VAR=whatever
assignments is clear, whether ${U:=whatever} assignments should be
treated the same way is not stated, one way or the other.
4. Change to make ':' terminate the user name in a ~ expansion in all cases,
not only in assignments. This makes sense, as ':' is one character that
cannot occur in user names, no matter how otherwise weird they become.
bash (incl in posix mode) ksh93 and bosh all act this way, whereas most
other shells (and POSIX) do not. Because this is clearly an extension
to POSIX, do this one only when not in posix mode (not set -o posix).
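For items 2 and 3 above, a short illustration (with HOME=/foo):
	unset U V
	echo ${U:-~}			# now: /foo      (previously: ~)
	echo ${V:=~:~}; echo "$V"	# now: /foo:/foo (twice), just as V=~:~
					# would assign /foo:/foo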
causes a core dump in some exotic circumstances (when restoring local
variables when a function returns). ("build.sh makewrapper" exposed it.)
This was introduced in 1.63 - not as part of the substance of that
change (addition) but as an unrelated "must be the right thing to do"
cleanup, which wasn't...
This is a legacy interface from 4.4BSD, and it was
introduced to overcome shortcomings of ptrace(2) at that time, which are
no longer relevant (performance). Today /proc/#/ctl offers a narrow
subset of ptrace(2) commands and is not adequate for modern
applications beyond simplistic tracing scenarios.
This removal will simplify kernel internals. Users will still be able to
use all the other /proc files.
This change won't affect other procfs files or the Linux compat
features within mount_procfs(8). /proc/#/ctl isn't available on Linux.
Remove:
- /proc/#/ctl from mount_procfs(8)
- P_FSTRACE note from the documentation of ps(1)
- /proc/#/ctl and filesystem tracing documentation from mount_procfs(8)
- KAUTH_REQ_PROCESS_PROCFS_CTL documentation from kauth(9)
- source code file miscfs/procfs/procfs_ctl.c
- PFSctl and procfs_doctl() from sys/miscfs/procfs/procfs.h
- KAUTH_REQ_PROCESS_PROCFS_CTL from sys/sys/kauth.h
- PSL_FSTRACE (0x00010000) from sys/sys/proc.h
- P_FSTRACE (0x00010000) from sys/sys/sysctl.h
Reduce code complexity after removal of this functionality.
Update TODO.ptrace accordingly: remove two entries about /proc tracing.
Do not keep legacy notes as comments in the headers about removed
PSL_FSTRACE / P_FSTRACE, as this interface had a very small number of users
(close or equal to zero).
Proposed on tech-kern@.
All filesystem tracing utility users are encouraged to switch to ptrace(2).
Sponsored by <The NetBSD Foundation>
Implementation largely obtained from FreeBSD, with adaptations to meet the
needs and style of this sh, some updates to agree with the current POSIX spec,
and a few other minor changes.
The POSIX spec for this ( http://austingroupbugs.net/view.php?id=249 )
[see note 2809 for the current proposed text] is yet to be approved,
so might change. It currently leaves several aspects as unspecified,
this implementation handles those as:
Where more than 2 hex digits follow \x this implementation processes the
first two as hex, the following characters are processed as if the \x
sequence was not present. The value obtained from a \nnn octal sequence
is truncated to the low 8 bits (if a bigger value is written, eg: \456.)
Invalid escape sequences are errors. Invalid \u (or \U) code points are
errors if known to be invalid, otherwise can generate a '?' character.
Where any escape sequence generates nul ('\0') that char, and the rest of
the $'...' string is discarded, but anything remaining in the word is
processed, ie: aaa$'bbb\0ccc'ddd produces the same as aaa'bbb'ddd.
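A sketch of the effect of those choices:
	printf '%s\n' $'col1\tcol2'	# a literal tab between col1 and col2
	printf '%s\n' $'\x41BC'		# ABC - \x41 is 'A', then literal BC
	printf '%s\n' $'\101'		# A   - octal escape
	printf '%s\n' aaa$'bbb\0ccc'ddd	# aaabbbddd - the \0 ends the $'...' part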
Differences from FreeBSD:
FreeBSD allows only exactly 4 or 8 hex digits for \u and \U (as does C,
but the current sh proposal differs.) FreeBSD also continues consuming
as many hex digits as exist after \x (permitted by the spec, but insane),
and rejects \u0000 as invalid. Some of this is possibly because
their implementation is based upon an earlier proposal, perhaps note 590 -
though that has been updated several times.
Differences from the current POSIX proposal:
We currently always generate UTF-8 for the \u & \U escapes. We should
generate the equivalent character from the current locale's character set
(and UTF8 only if that is what the current locale uses.)
If anyone would like to correct that, go ahead.
We (and FreeBSD) generate (X & 0x1F) for \cX escapes where we should generate
the appropriate control character (SOH for \cA for example) with whatever
value that has in the current character set. Apart from EBCDIC, which
we do not support, I've never seen a case where they differ, so ...
Avoid mangling history when editing is enabled, and the prompt contains a \n
Also, allow empty input lines into history when they are being appended to
a previous (partial) command (but not when they would just make an empty entry).
For all the gory details, see the PR.
Note nothing here actually makes prompts containing \n work correctly
when editing is enabled, that's a libedit issue, which will be addressed
some other time.
Don't ignore unexpected reserved words after ';'
Don't allow any random token type as a case stmt pattern, only a word.
Those are ancient ash bugs and do not affect correct scripts.
Don't ignore redirects in a case stmt list where the list is nothing but
redirects (if the pattern matches, the redirects should be performed).
That was introduced when a redirect only case stmt list was allowed
(older shells had generated a syntax error.)
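For example (with this fix):
	case x in
	x) > /tmp/created ;;
	esac
	# /tmp/created now exists; previously the redirect was silently ignored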
Random cleanups/refactoring taken from or inspired by the FreeBSD sh
parser ... use makename() consistently to create a NARG node - we
were using it in a couple of places but most NARG node creation was open
coded. Introduce consumetoken() (from FreeBSD) to handle the fairly
common case where exactly one token type must come next, and we need to
check that, and skip past it when found (or error) and linebreak() (new)
to handle places where optional \n's are permitted.
Both previously open coded.
Simplify list() by removing its second arg, which was only ever used when
handling the end of a `` (old style command substitution). Simply move
the code from inside list() to just after its call in the `` case (from
FreeBSD.)
(operators all come first, then TWORD, then keywords), and switch
from using TIF to define KWDOFFSET to using TWORD (the barrier,
rather than the token that happens to be first after it.)
to cause (when set, which it is not by default) the exit status of a
pipe to be 0 iff all commands in the pipe exited with status 0, and
otherwise, the status of the rightmost command to exit with a non-0
status.
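A sketch of the semantics (the option name shown is assumed):
	set -o pipefail
	(exit 2) | (exit 3) | true
	echo $?		# 3 - the rightmost command to exit with a non-0 status
	true | true
	echo $?		# 0 - every part of the pipe succeeded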
In the doc, while describing this, also reword some of the text about
commands in general, how they are structured, and when they are executed.
by rudolf at eq.cz on tech-userlevel (July 15, 2017.)
Also correct a typo, de-correct some entirely proper English so
the doc remains written in American instead. And note that
interactive mode is set when stdin & stderr are terminals, not
stdin and stdout.
Absent other information, the shell should be interactive if reading
from stdin, and stdin and stderr are ttys, not stdin and stdout.
So sayeth the great lord posix.
Silence nuisance testing environments - avoid << of a negative number
(a signed char -- in a hash function, the result is irrelevant, as long
as it is repeatable).
Also, cause exec failures to always cause the shell to exit with
status 126 or 127, whatever the cause. 127 is intended for lookup
failures (and is used that way), 126 is used for anything else that
goes wrong (as in several other shells.) We no longer use 2 (more easily
confused with an exit status of the command exec'd) for shell exec failures.
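For example (statuses per the description above, error messages omitted):
	sh -c /etc/passwd;         echo $?	# 126 - found, but cannot be executed
	sh -c no_such_command_xyz; echo $?	# 127 - lookup failed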
the new format. Also #if 0 a function definition that is used nowhere.
While here, change the function of pushfile() slightly - it now sets
the buf pointer in the top (new) input descriptor to NULL, instead of
simply leaving it - code that needs a buffer always (before and after)
must malloc() one and assign it after the call. But code which does not
(which will be reading from a string or similar) now does not have to
explicitly set it to NULL (cleaner interface.) NFC intended (or observed.)
configure script, ie: $(( which is intended to be a sub-shell in a
command substitution, but is taken as an arith subst instead; it needs to be
written $( ( to do as intended. Instead of just blindly carrying on to
find the missing )) somewhere, anywhere, give up as soon as we have seen
an unbalanced ')' that isn't immediately followed by another ')' which
in a valid arith subst it always would be.
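For example (a sketch):
	echo $( (exit 3); echo $? )	# a sub-shell in a cmdsub - note the "$( ("
	echo $((3 + 4))			# an arith subst: 7
	echo $((echo hi))		# taken as an arith subst; now rejected
					# promptly at the lone ')' rather than
					# scanning on in search of '))'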
While here, there has been a comment in the code for quite a while noting a
difference in the standard between the text descr & grammar when it comes to
the syntax of case statements. Add more comments to explain why parsing it
as we do is in fact definitely the correct way (ie: the grammar wins arguments
like this...).
in prompts when expanded at prompt time, but all available for general use.
Many of the new ones are not available in SMALL shells (they work as normal
if assigned, but the shell does not set or use them - and there is no magic
in a SMALL shell (usually for install media.))
This fallback code wouldn't work anyway.
times(3) is an obsolete interface, superseded by getrusage(2) and gettimeofday(2).
In future this will be switched to the more modern interfaces.
No functional change intended.
parsing the way getopt(3) would, if only it could handle the (required)
-signumber and -signame options. This adds two "features" to kill,
-ssigname and -lstatus now work (ie: one word with all of the '-', the
option letter, and its value) and "--" also now works (kill -- -pid1 pid2
will not attempt to send the pid1 signal to pid2, but rather SIGTERM
to the pid1 process group and pid2). It is still the case that (apart
from --) at most 1 option is permitted (-l, -s, -signame, or -signumber.)
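For example (pids illustrative):
	kill -s TERM 123	# as before
	kill -sTERM 123		# now also accepted as a single word
	kill -l 19		# name of signal 19, as before
	kill -l19		# now also accepted
	kill -- -123 456	# SIGTERM to process group 123 and to pid 456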
Note that we now have an ambiguity, -sname might mean "-s name" or
send the signal "sname" - if one of those turns out to be valid, that
will be accepted, otherwise the error message will indicate that "sname"
is not a valid signal name, not that "name" is not. Keeping the "-s"
and signal name as separate words avoids this issue.
Also caution: should someone be weird enough to define a new signal
name (as in the part after SIG) which is almost the same name as an
existing name that starts with 'S' by adding an extra 'S' prepended
(eg: adding a SIGSSYS) then the ambiguity problem becomes much worse.
In that case "kill -ssys" will be resolved in favour of the "-s"
flag being used (the more modern syntax) and would send a SIGSYS, rather
than a SIGSSYS. So don't do that.
While here, switch to using signalname(3) (bye bye NSIG, et. al.), add
some constipation, and show a little pride in formatting the signal names
for "kill -l" (and in the usage when appropriate -- same routine.) Respect
COLUMNS (POSIX XBD 8.3) as primary specification of the width (terminal width,
not number of columns to print) for kill -l, a very small value for COLUMNS
will cause kill -l output to list signals one per line, a very large
value will cause them all to be listed on one line (eg: "COLUMNS=1 kill -l").
TODO: the signal printing for "trap -l" and that for "kill -l"
should be switched to use a common routine (for the sh builtin versions.)
All changes of relevance here are to bin/kill - the (minor) changes to bin/sh
are only to properly expose the builtin version of getenv(3) so the builtin
version of kill can use it (ie: make its prototype available.)
caused by incorrect macro usage (ie: using the wrong one) which has
been in the sources since version 1.1 (ie: forever).
Like the previous (STACKSTRNUL) bug, the probability of this one
actually occurring has been infinitesimal but the LINENO code increases
that to infinitesimal and a smidgen... (or a few, depending upon usage).
Still, apparently that was enough, Kamil Rytarowski discovered that the
zsh configure script (damn competition!) managed to trigger this problem.