cope with the changes made in the previous revision, in an
attempt to avoid bit rot.
Untested (uncompiled) - though it should work.
NFC: this change doesn't get compiled, let alone used.
(a normal builtin, though those are not generally an issue for
this problem, or a special builtin that has been prefixed by "command")
make sure that we discard any pending input that might have been
queued up, but not yet processed.
We had the mechanism to fix this from when expansion of PS1 etc
was added (which has a similar problem to deal with) - all taken
from FreeBSD - but did not bother to use it here until now...
This fixes an error detected by newly added ATF tests of the eval
builtin, where
eval 'syntax error
another command'
would go ahead and evaluate "another command" which should not
happen (note: only when there was a \n between the two).
Standard C says that free should be a no-op for a NULL pointer, so
we don't need an extra function to do this.
While here, add an XXX about a wrong-sounding comment.
kill and sh were merged so that the shell (for trap -l) and
kill (for kill -l) can use the same routine, and site that function
in the shell, rather than in kill (use the code that is in kill as
the basis for that routine). This allows access to sh internals,
and in particular to the posix option, so the builtin kill can
operate in posix mode where the standard requires just a single
character (space or newline) between successive signal names (and
we prefer nicely aligned columns instead).
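For illustration (output layouts not reproduced here):
	kill -l				# default builtin: names in nicely aligned columns
	sh -o posix -c 'kill -l'	# posix mode: a single space or newline between names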
In a SMALL shell, use the ancient sh printsignals routine instead,
it is smaller (and very much dumber).
/bin/kill still uses the routine that is in its source, and is
not posix compliant. A task for some other day...
The SYNOPSIS for "readonly -q" cannot have the -q be
optional ... Also harmonise the output appearance with
that of the export command.
wiz: have at it...
readonly -q VAR...
readonly -p VAR...
export -q [-x] VAR...
export -p [-x] VAR...
all available only in !SMALL shells - and while here, limit
"export -x" to full sized shells as well.
Also, do a better job of arg checking and validating of the
export and readonly commands (which is really just one built-in)
and of issuing error messages when something bogus is detected.
Since these commands are special builtin commands, any error
causes shell exit (for non-interactive shells).
var=foo; readonly var=new
now fails.
If var was already set, an attempt to make it readonly, and assign it
a new value at the same time, failed - the readonly flag was set too soon.
Pointed out by Martijn Dekker (thanks).
Also, while here, add a couple of comments.
Implement parameter and arithmetic expansion of $ENV
before using it as the name of a file from which to
read startup commands for the shell. This continues
to happen for all interactive shells, and non-interactive
shells for which the posix option is not set (-o posix).
On any actual error, or if an attempt is made to use
command substitution, then the value of ENV is used
unchanged as the file name.
The expansion complies with POSIX XCU 2.5.3, though that
only requires parameter expansion - arithmetic expansion
is an extension (but for us, it is much easier to do, than
not to do, and it allows some weird stuff, if you're so
inclined....) Note that there is no ~ expansion (use $HOME).
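For example, a ~/.profile can now contain (the single quotes defer the
expansion until each shell starts up):
	ENV='${HOME}/.shrc'
	export ENV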
update to mkinit.sh (to 1.10).
Or more correctly, revert & fix - turns out that there was an off by one
(failure to adjust for other changes -- in a value printed by debug mode
trace output).
NFC.
uses printf for simplicity).
This script runs using the build host's shell, and echo, and so must
deal with all of the absurdity that different versions of echo dump
upon us.
This is the underlying cause of the linux build failure that gson@ reported.
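For reference, the usual portable pattern (with $msg standing for arbitrary data):
	printf '%s\n' "$msg"	# behaves the same everywhere
	echo "$msg"		# some echos treat a leading '-' as an option, or interpret \ sequences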
some versions of linux) - DEBUG mode change: Delete a (relatively new)
trace point (temporarily anyway) which mkinit (a script run using the
host's /bin/sh) apparently cannot handle correctly on (some release of)
linux (it is fine with the NetBSD shell).
I don't know which linux version has a shell with this problem
(or whether it is a mkinit issue that only works by fluke on NetBSD)
Problem reported by gson@
were added to sh ... in other shells, setting such a variable
(for most of them) causes it to lose its special properties,
and act the same as any other variable. I had assumed that
was just implementor laziness... I was wrong.
From now on the NetBSD shell will act like the others, and if vars
like HOSTNAME (and SECONDS, etc) are used as variables in a script
or whatever, they will act just like normal variables (and unless
this happens when they have been made local, or as a variable-assignment
as a prefix to a command, the special properties they would have had
otherwise are lost for the remainder of the life of the (sub-)shell
in which the variables were set).
Importing a value from the environment counts as setting the
value for this purpose (so if HOSTNAME is set in the environment,
the value there will be the value $HOSTNAME expands to).
The two exceptions to this are LINENO and RANDOM. RANDOM
needs to be able to be set to (re-)set its seed. LINENO needs to
be able to be set (at least in the "local" command) to achieve
the desired functionality. It is unlikely that any (sane) script
is going to want to use those two as normal vars however.
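For example, after this change:
	echo $HOSTNAME		# the magic value (the system's host name)
	HOSTNAME=example
	echo $HOSTNAME		# now just "example"; the special behaviour is gone in this shell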
While here, fix a minor bug in popping local vars (fn return) that need
to notify the shell of changes in value (like PATH).
Change sh(1) to reflect this alteration. Also add doc of the
(forgotten) magic var EUSER (which has been there since the others
were added), and add a few more vars (which are documented
in other places in sh(1) - like ENV) into the defined or used
variable list (as well as wherever else they appear).
XXX pullup -8
as traps that issue break/continue/return to cause the loop/function
executing when the trap occurred to break/continue/return, and
generating the correct exit code from the shell including when a
signal is caught, but the trap handler for it exits.
All that from FreeBSD.
Also make
T=$(trap)
work as it is supposed to (also trap -p).
For now this is handled by the same technique as $(jobs) - rather
than clearing the traps in subshells, just mark them invalid, and
then whenever they're invalid, clear them before executing anything
other than the special blessed "trap" command. Eventually we will
handle these using non-subshell command substitution instead (not
creating a subshell environ when the commands in a command-sub alter
nothing in the environment).
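The POSIX save/restore idiom this is intended to support:
	T=$(trap)	# save the current trap settings
	trap '' INT	# change them temporarily
	# ... critical section ...
	trap - INT	# put INT back to the default, then
	eval "$T"	# reinstall whatever was saved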
intended (and as documented) rather than how it has been behaving
(which was not very rational.) Since it is unlikely that anyone
is using this, the change should be mostly invisible.
While here, a couple of other minor cleanups:
. One call of geteuid() is enough in choose_ps1()
. Fix a typo in a comment
. Improve appearance (whitespace changes) in find_var()
to fix the (unusual) idiom "${1+$@}" (the quotes are part of it).
This seems to have broken about 5 or 6 years ago (somewhere
between -6 and -7), I believe.
Note this is not the same as "$@" and also not the same as ${1+"$@"}
(much more common idioms) which both worked.
Also attempt to deal with "" more correctly, especially when it
appears adjacent to "$@" (or one of the similar constructs.)
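A sketch of the fixed idiom, shown only for the with-arguments case (f is an
illustrative function; the historical point of ${1+...} was to make the
no-argument case vanish cleanly):
	f() { printf '<%s>\n' "${1+$@}"; }
	f 'a b' c	# with the fix: prints <a b> then <c>, fields preserved as with "$@"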
This stuff is still all as ugly and hackish (and fragile) as is
possible to imagine, but in an effort to allow some of the weirdness
to eventually go away, the parser output has been made more
regular and all quoted (parts of) words always now start with
CTLQUOTEMARK and end with CTLQUOTEEND regardless of where the
quotes appear.
This allows us to tell the difference between """$@" and "$@"
which was impossible before - yet they are required to generate
different output when there are no args (when "$@" simply vanishes).
Needless to say that change had ramifications all over the place.
To simplify any similar change in the future, there are some new
macros that can generally be used to detect the "noise" data when
processing words, rather than open coding that every time (which
meant that there would *always* be one which missed getting
updated...)
Several other bugs (of my making, and older ones) are also fixed.
The aim is that (aside from anything that is detecting the cases
that were broken before - which were all unlikely uses of sh
syntax) these changes should have no external visible impact.
Sure...
to have them, they should work as documented, not cause core
dumps, reference after free, incorrect replacements, failing
to implement alias after alias, ...
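For instance (say and who are illustrative alias names), POSIX requires that
a trailing blank in an alias value causes the next word to be checked for
alias expansion as well ("alias after alias"):
	alias say='echo '	# note the trailing space
	alias who='world'
	say who			# should print "world", with both aliases expanded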
The big comment that ended:
This is a good idea ------- ***NOT***
and the hack it was describing are gone.
Note that most of this was from original CVS version 1.1
code (ie: came from the original import, even before 4.4-Lite
was merged. That is, May 1994. And no-one in 24.5 years
noticed (or at least complained about) all the bugs (or at
least, most of them)).
With these changes, aliases ought to work (if you can call it
that) as they are expected to by POSIX. Now if only we could
get POSIX to delete them (or make them optional)...
Changes partly inspired by similar changes made by FreeBSD,
(as was the previous change to alias.c, forgot ack in commit
log for that one, apologies) but done a little differently,
and perhaps with a slightly better outcome.
to the main handler, rather than wherever the parent shell would go.
nb: not needed for vfork(), after vfork() we never go that path - which
is good or we'd be corrupting the parent's handler.
This allows the child to always exit (when it should) rather than being
caught up doing something else (and while it would eventually exit, the
status would be incorrect in some cases).
One test is:
sh -c 'trap "(! :) && echo BUG || echo nobug" EXIT'
from Martijn Dekker
Fix from FreeBSD (missed earlier).
XXX - 2b part of the 48875 pullup to -8
since this code was first imported (May 1994) (ie: before 4.4-Lite)
There is (much) more coming soon (the big ugly comment is going away).
This one has been separated out, as it can easily cause sh
core dumps, so needs:
XXX pullup-8
(the other changes to aliases probably will not get that.)
what it actually does (makearg would have been an alternative).
While here, notice the one remaining place where it should have been
used, but was left open coded, and consume that one.
NFCI.
for the purposes of any redirects it might have -- ie: as posix
requires, make the redirects appear to have been executed in a subshell
environment, so if one fails, aside from a diagnostic msg, all the
running script sees is a command that failed ($? != 0), rather
than having the shell exit which used to happen (the empty command was
being treated as a special builtin).
Continue to treat the empty command as special for the purposes of
var assigns it might contain (those are not executed in a sub-shell
and persist) - an error there (eg: assigning to a readonly var) will
continue to cause the shell (non-interactive shell) to exit.
This makes the NetBSD shell behave like all other (reasonably modern)
shells - fix method (not the implementation, details differ) taken from
FreeBSD who fixed this back in early 2010. Problem pointed out
in (non-list) mail by Martijn Dekker.
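A sketch of the resulting behaviour in a non-interactive shell (the path is
illustrative only):
	> /no/such/dir/out	# failed redirect: diagnostic on stderr, $? != 0, execution continues
	echo "still here ($?)"
	readonly R=1
	R=2			# assignment error on an empty command is still fatal here
	echo "not reached"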
first implemented in response to PR bin/4966 (PR Feb 1998, fix Feb 1999).
The file named should not be truncated.
No other shell truncates the file (<> was added to FreeBSD sh in Oct 2000,
and did not include O_TRUNC) and POSIX certainly does not suggest that
should happen (just that the file is to be created if it does not exist.)
Bug pointed out in off-list e-mail by Martijn Dekker
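A simple check of the fixed behaviour (f is just a scratch file):
	printf 'data\n' > f
	: <> f			# open f read-write on stdin; must not truncate it
	cat f			# still prints "data"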
"command" should not be executed. (The issue affects multi-line
eval strings only - ie: commands after the next \n are not skipped).
Bug noted by Martijn Dekker in off-list e-mail.
Fix from FreeBSD:
src/bin/sh/eval.c: Revision 272983 Sun Oct 12 13:12:06 2014 UTC by jilles
to hide meta-characters in the result when the expansion was
in (double) quotes, and so should not be further processed.
Most of this has been OK for a long while, but \ needs hiding
as well, which complicates things, as \ cannot simply be hidden
in the syntax tables as one of the group of random special characters.
This was fixed earlier for simple variable expansions, but
every variety has its own code path ($var uses different code
than $n which is different than $(...), which is different
again from ~ expansions, and also from what $'...' produces).
This could be fixed by moving them all to a common code path,
but that's harder than it seems. The form in which the data
is made available differs, so one common routine would need
a whole bunch of different "get the next char or indicate end"
methods - probably via passing in an accessor function.
That's all a lot of churn, and would probably slow the shell.
Instead, just make macros for doing the standard tests, and
use those instead of open coding (differently) each time.
This way some of the code paths don't end up forgetting to
handle '\' (which is different than all the others).
This removes one optimisation ... when no escaping is needed
(like just $var (unquoted) where magic chars (think '*') in
the value are intended to remain magic), the code avoided doing
two tests for each char ("do we need escapes" and "is this char
one that needs escaping") by choosing two different syntax
tables (choice made outside the loop) - one of which never
returns the magic "needs escaping" result, and the other does
when appropriate, and then just avoiding the "do we need escapes"
test for each character processed. Then when '\' was fixed,
there needed to be another test for it, as it cannot (for other
reasons) be the same as all the others for which "this char
needs escaping" is true. So that added a 2nd test for each char...
Not all the code paths were updated. Hence the bugs...
nb: this is all rarely seen in the wild, so it is no big
surprise that no-one ever noticed.
Now the "use two different syntax tables" is gone (the two
returned the same for '\' which is why '\' needed special
processing) - and in order to avoid two tests for each
char (plus the \ test) we duplicate the loops, one of which
tests each char to see if it needs an escape, the 2nd just
copies them. This should be faster in the "no escapes"
code path (though that is not the point) and perhaps also
in the "escapes needed" path (no indirect reference to
the syntax table - though that would probably be in a
register) but makes the code slightly bigger. For /bin/sh
the text segment (on amd64) has grown by 48 bytes. But
it still uses the same number of 512 byte pages (and hence
also any bigger page size). The resulting file size
(/bin/sh) is identical before and after. So is /rescue/sh
(or /rescue/anything-else).
prompts to exit when they're done, rather than forcing them to
turn into interactive shells and start reading input ...
Completes a part of the previous changes (just 10+ weeks late...)
Should fix the prompt expansion issue reported by Caóc on
current-users.
and one in kill.c (which is included from bin/kill), and use just
the one in kill.c (which is amended slightly so it can work
the way that trap.c needs it to work). This one is chosen as
it was a much nicer implementation, and because while kill is
always built into the shell, kill also exists without the shell.
Leave the old implementation #if 0'd in trap.c (but updated to
match the calling convention of the one in kill.c) - for now.
Delete references of sys_signame[] from sh/trap.c and along with
that several uses of NSIG (unfortunately, there are still more)
and replace them with the newer libc functional interfaces.
definitions (just to avoid any temptation to ever use them again).
Update a comment which would make no sense without following the
preceding comment which is being deleted with the macros it describes.
While here, remove another comment that referred to events that have
long since passed as if they were still to come. Also a grammatical comment
correction - paragraphs start with capital letters...
NFC (even with DEBUG defined).
the ancient DEBUG TRACE() method, to the newer CTRACE() et al.)
that turns out never really needed committing - the mechanism, and
the code that obsoleted it, were committed together (May 2017).
[It was useful to me while getting to that state...]
NFC. Not even with DEBUG shells.
and use whatever works for the sh running this script. Previously
we were using the (broken, and incorrect) method that worked in
old broken NetBSD sh's (and some others) and not the method that
works with the current (fixed) /bin/sh and other correct shells
(like bash). (For an exotic reason, in the particular use case,
both methods work with ksh93, but it is also generally correct).
This hasn't really mattered, as the difference is only significant
(only causes actual issues - the build fails) when compiling with DEBUG
enabled, which is something that most sane humans would never do, if they
want to retain that sanity.
The problem was detected by Patrick Welche when looking for an
unrelated problem, which was once considered to be a possible sh
problem, but turned out to be something entirely different.
XXX pullup -8
(which has redirects, and so is included in -x output) use the -x/+x
setting that existed when the compound started, so if the state of
xtrace changes during the command we don't end up with just half of
the -x output (either the intro, or the conclusion, depending on
which way the change happened). [this also happens to avoid a core
dump in the previous code, but that could have been done other ways,
this way actually simplifies things (less code)]
Make test(1) always use the POSIX "number of args" evaluation rules
when they apply.
Only fall back to the old expression evaluation when there are more
than 4 args, or when the args given cannot work as a test expression
using the POSIX rules. That is when the result is unspecified.
Also fix old bug where a string of whitespace is considered to be a
valid number (at least one digit is needed amongst it somewhere...)
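Some consequences of the argument-count rules (hedged examples):
	test -t; echo $?	# one argument: 0, "-t" is just a non-empty string
	test ""; echo $?	# one argument: 1, the string is empty
	test -f = -f; echo $?	# three arguments, binary "=" in the middle: string comparison, 0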
XXX pullup -8
the option when a pipeline is created that controls the way the exit
status of the pipeline is calculated. Previously it was the state of
the option when the exit status of the pipeline was collected.
This makes no difference at all for foreground pipelines (there is
no way to change the option between starting and completing the
pipeline) but it does for asynchronous (background) pipelines.
This was always the right way to implement it - it was originally
done the other way as I could not find any other shell that
implemented it this way - they all seemed to do it our previous way, and I could
not see a good reason to be the sole different shell.
However, now I know that ksh93 works as we will now work, and I
am told that if the option is added to the FreeBSD shell (apparently
the code exists, uncommitted) it will be the same.
Save more characters of command in non-interactive jobs, in case of
core dumps and similar (16 effective chars was a few too little).
Arrange for number to increase if command buffer size increases.
Add a paragraph (briefer than previously posted to mailing lists)
to explain that there is no guarantee that the results of a command
substitution will be available before all commands started by the
cmdsub have completed.
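The situation in question, roughly:
	v=$( (sleep 5; echo late) & echo now )
	# The assignment may not complete until the backgrounded command exits (or
	# closes its stdout), since the substitution keeps reading until end of file.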
Include the original proposed text (much longer) as *roff comments, so
it will at least be available to those who browse the man page sources.
While here, clean up the existing text about command substitutions to
make it a little more accurate (and to advise against using the `` form).
Deal with the new shell internal exit reason EXEXIT in the case of
a shell which has vfork()'d. It takes a peculiar set of circumstances
to get into a situation where this is ever relevant, but it can be
done. See the PR for details.
we had incorrect usage of setstackmark()/popstackmark()
There was an ancient idiom (imported from CSRG in 1993) where code
can do:
	setstackmark(&smark);
	loop until whatever condition {
		/* do lots of code */
		popstackmark(&smark);
	}
	popstackmark(&smark);
The 1st (inner) popstackmark() resets the stack, conserving memory,
The 2nd one is needed just in case the "whatever condition" was never
true, and the first one was never executed.
This is (was) safe as all popstackmark() did was reset the stack.
That could be done over and over again with no harm.
That is, until 2000 when a fix from FreeBSD for another problem was
imported. That connected all the stack marks as a list (so they can be
located). That caused the problem, as the idiom was not changed, now
there is this list of marks, and popstackmark() was removing an entry.
It rarely (never?) caused any problems as the idiom was rarely used
(the shell used to do loops like above, mostly, without the inner
popstackmark()). Further, the stack mark list is only ever used when
a memory block is realloc'd.
That is, until last weekend - with the recent set of changes.
Part of that copied code from FreeBSD introduced the idiom above
into more functions - functions used much more, and with a greater
possibility of stack marks being set on blocks that are realloc'd
and so cause the problem. In the FreeBSD code, they changed the idiom,
and always do a setstackmark() immediately after the inner popstackmark().
But not for reasons related to a list of stack marks, as in the
intervening period, FreeBSD deleted that, but for another reason.
We do not have their issue, and I did not believe that their
updated idiom was needed (I did some analysis of exactly this issue -
just missed the important part!), and just continued using the old one.
Hence Patrick's core dump....
The solution used here is to split popstackmark() into 2 halves,
popstackmark() continues to do what it has (recently) done,
but is now implemented as a call of (a new func) rststackmark()
which does all the original work of popstackmark - but not removing
the entry from the stack mark list (which remains in popstackmark()).
Then in the idiom above, the inner popstackmark() turns into a call of
rststackmark() so the stack is reset, but the stack mark list is
unchanged. Tail recursion elimination makes this essentially free.
Import a whole set of tree evaluation enhancements from FreeBSD.
With these, before forking, the shell predicts (often) when all it will
have to do after forking (in the parent) is wait for the child and then
exit with the status from the child, and in such a case simply does not
fork, but rather allows the child to take over the parent's role.
This turns out to handle the particular test case from PR bin/48875 in
such a way that it works as hoped, rather than as it did (the delay there
was caused by an extra copy of the shell hanging around waiting for the
background child to complete ... and keeping the command substitution
stdout open, so the "real" parent had to wait in case more output appeared).
As part of doing this, redirection processing for compound commands gets
moved out of evalsubshell() and into a new evalredir(), which allows us
to properly handle errors occurring while performing those redirects,
and not mishandle (as in simply forget) fd's which had been moved out
of the way temporarily.
evaltree() has its degree of recursion reduced by making it loop to
handle the subsequent operation: that is instead of (for any binop
like ';' '&&' (etc)) where it used to
	evaltree(node->left);
	evaltree(node->right);
	return;
it now does (kind of)
	next = node;
	while ((node = next) != NULL) {
		next = NULL;
		if (node is a binary op) {
			evaltree(node->left);
			if appropriate /* if && test for success, etc */
				next = node->right;
			continue;
		}
		/* similar for loops, etc */
	}
which can be a good saving, as while the left side (now) tends to be
(usually) a simple (or simpleish) command, the right side can be many
commands (in a command sequence like a; b; c; d; ... the node at the
top of the tree will now have "a" as its left node, and the tree for
b; c; d; ... as its right node - until now everything was evaluated
recursively so it made no difference, and the tree was constructed
the other way).
if/while/... statements are done similarly, recurse to evaluate the
condition, then if the (or one of the) body parts is to be evaluated,
set next to that, and loop (previously it recursed).
There is more to do in this area (particularly in the way that case
statements are processed - we can avoid recursion there as well) but
that can wait for another day.
While doing all of this we keep much better track of when the shell is
just going to exit once the current tree is evaluated (with a new
predicate at_eof() to tell us that we have, for sure, reached the end
of the input stream, that is, this shell will, for certain, not be reading
more command input) and use that info to avoid unneeded forks. For that
we also need another new predicate (have_traps()) to determine if there
are any caught traps which might occur - if there are, we need to remain
to (potentially) handle them, so these optimisations will not occur (to
make the issue in PR 48875 appear again, run the same code, but with a
trap set to execute some code when a signal (or EXIT) occurs - note that
the trap must be set in the appropriate level of sub-shell to have this
effect, any caught traps are cleared in a subshell whenever one is created).
There is still work to be done to handle traps properly, whatever
weirdness they do (some of which is related to some of this.)
These changes do not need man page updates, but 48875 does - an update
to sh.1 will be forthcoming once it is decided what it should say...
Once again, all the heavy lifting for this set of changes comes directly
(with thanks) from the FreeBSD shell.
XXX pullup-8 (but not very soon)
Revert the changes that were made 19 May 2016 (principally eval.c 1.125)
and the bug fixes in subsequent days (eval.c 1.126 and 1.127) and also
update some newer code that was added more recently which acted in
accordance with those changes (make that code be as it would have been
if the changes now being reverted had never been made).
While the changes made did solve the problem, in a sense, they were
never correct (see the PR for some discussion) and it had always been
intended that they be reverted. However, in practical sh code, no
issues were reported - until just recently - so nothing was done,
until now...
After this commit, the validate_fn_redirects test case of the sh ATF
test t_redir will fail. In particular, the subtest of that test
case which is described in the source (of the test) as:
This one is the real test for PR bin/48875
will fail.
Alternative changes, not to "fix" the problem in the PR, but to
often avoid it will be coming very soon - after which that ATF
test will succeed again.
XXX pullup-8
previous rev) the two values (node name, and node number) were
arbitrarily printed in different formats and orders (depending
upon my mood at the time I guess...) The new macros will standardise
that usage (in the debug output) once some use of them actually begins.
When the macros were added, I arbitrarily copied the format of one
use I was looking at at that instant (the one which inspired the change),
but after gazing at DEBUG mode output over the intervening time, I
have concluded that I did not pick the easiest to read/follow format.
So, even before they are used, change the style... Also, conform
to standard PRIxxxx macro style by omitting the leading '%'.
NFC (since they aren't used at all, anywhere, yet, not even the
possibility of anything changing!)
This generates nodenames.h which is a file that used to begin
#ifdef DEBUG
(line 1) and end with
#endif
(last line) with no intervening (matching) #else ... ie: for DEBUG use only.
That led to situations where non-debug code which would like to make use
of the info provided (if DEBUG was enabled) needed to add #ifdef DEBUG
at the point of use.
Avoid that by providing new macros that are always defined (DEBUG or not,
so now we have a #else) which allow code to be written to make use of
the extra DEBUG info, if it is available, or not, if not.
While here, add double-include protection on the generated .h file
(just being cautious - nothing is ever going to cause it to get
included anywhere twice - or it shouldn't) and add the traditional
comments on the #else and #endif stuff (which is also really useless
as no-one is really expected to ever read the generated file). Never mind.
Nothing yet (elsewhere in the sh source) uses the new macros, so there's
even less chance of this changing anything than there would otherwise be.
Fix "command not found" handling so that the error message
goes to stderr (after any redirections are applied).
More importantly, in
foo > /tmp/junk
/tmp/junk should be created, before any attempt is made
to execute (the assumed non-existing) "foo".
All this was always true for any command (not found command)
containing a / in its name
foo/bar >/tmp/junk 2>>/tmp/errs
would have created /tmp/junk, then complained (in /tmp/errs)
about foo/bar not being found. Now that happens for ordinary
commands as well.
The fix (which I found when I saw differences between our
code and FreeBSD's, where, for the benefit of PR 42184,
this has been fixed, sometime in the past 9 years) is
frighteningly simple. Simply do not short circuit execution
(or print any error) when the initial lookup fails to
find the command - it will fail anyway when we actually
try running it. The cost is a (seemingly unnecessary,
except that it really is) fork in this case.
This is what I had been planning, but I expected it would
be much more difficult than it turned out....
XXX pullup-8
1. Make command -pv (and -pV) work (which is not as easy as the PR
suggests it might be (the "check and cause error" was there because
it did not work, not in order to prevent it from working)).
2. Stop -v and -V being both used (that makes no sense).
3. Stop the "type" builtin inheriting the args (-pvV) that "command" has
(which it did, as when -v -or -V is used with command, it and type are
implemented using the same code).
4. make "command -v word" DTRT for sh keywords (was treating them as an error).
5. Require at least one arg for "command -[vV]" or "type" else usage & error.
Strictly this should also apply to "command" and "command -p" (no -v)
but that's handled elsewhere, so perhaps some other time. Perhaps
"command -v" (and -V) should be limited to 1 command name (where "type"
can have many) as in the POSIX definitions, but I don't think that matters.
6. With "command -V alias", (or "type alias" which is the same thing),
(but not "command -v alias") alter the output format, so we get
ll is an alias for: ls -al
instead of the old
ll is an alias for
ls -al
(and note there was a space, for some reason, after "for")
That is, unless the alias value contains any \n characters, in which
case (something approximating) the old multi-line format is retained.
Also note: that if code wants to parse/use the value of an alias, it
should be using the output of "alias name", not command or type.
Note that none of the above affects "command [-p] cmd" (no -v or -V options)
only "command -[vV]" and "type".
Note also that the changes to eval.[ch] are merely to make syspath()
visible in exec.c rather than static in eval.c
Attempt to correctly deal with \ (both when it is a literal,
in appropriate cases, and when it appears as CTLESC when it was
detected as a quoting character during parsing).
In a pattern, in sh, no quoted character can ever be anything other
than a literal character. This is quite different than regular
expressions, and even different than other uses of glob matching,
where shell quoting is not an issue.
In something like
ls ?\*.c
the ? is a meta-character, the * is a literal (it was quoted). This
is nothing new, sh has handled that properly for ever.
But the same happens with
VAR='?\*.c'
and
ls $VAR
which has not always been handled correctly. Of course, in
ls "$VAR"
nothing in VAR is a meta-character (the entire expansion is quoted)
so even the '\' must match literally (or more accurately, no matching
happens - VAR simply contains an "unusual" filename). But if it had
been
ls *"$VAR"
then we would be looking for filenames that end with the literal 5
characters that make up $VAR.
The same kinds of things are required when matching patterns in case
statements, and sub-strings with the % and # operators in variable
expansions.
While here, the final remnant of the ancient !! pattern matching
hack has been removed (the code that actually implemented it was
long gone, but one small piece remained, not doing any real harm,
but potentially wasting time - if someone gave a pattern which would
once have invoked that hack.)
This is more or less the same patch as provided in the PR
(just 11 years later, so changed a bit) by woods@...
Since there is no known way to actually cause the reported crash,
we may never know if this change actually fixes anything. But
even if it doesn't it certainly cannot hurt.
There is a potential race which could possibly explain the issue
(see commentary in the PR) which is not easy to avoid - if that is
the actual cause, this should provide a defence, if not really a fix.
possibilities that we do not currently handle all that well.
This mostly means (for now) making sure that quoted pattern
magic characters (as well as quoted sh syntax magic chars)
are properly marked, so they remain known as being quoted,
and do not turn into pattern magic. Also, make sure that an
unquoted \ in a pattern always quotes whatever comes next
(which, unlike in regular expressions, includes inside []
matches).
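For example, the intent is that
	case ']' in
	[\]x])	echo matched ;;		# the \ quotes the ']' even inside the bracket expression
	esac
prints "matched".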
Mostly use number() (no longer implemented using atoi()) when an
unsigned integer is required, but use strtoXXX() when a conversion
is wanted, without the possibility of error (like setting OPTIND
and RANDOM). Always init OPTIND to 1 when sh starts (overriding
anything in environ.)
that is longer than we can handle the same way we treat an unknown
class name (as a valid char class which contains nothing, so never
matches). Previously a "too long" class name invalidated the
class, so [:very-long-name:] would match any of '[' ':' 'v' ...
(note: "very-long-name" is not long enough to trigger this, but you
get the idea!)
However, the name itself has a restricted syntax ([[:***:]] is not a
character class, it is a match for one of a '[' ':' or '*', followed by
a ']') which we did not implement - check the syntax of the name before
treating it as a character class (but we do add '_' to alphanumerics
as legal class name characters).
expansion, case patterns, etc) do not force '[' to be a member of
every class.
Before this fix, try:
case [ in [[:alpha:]]) echo Huh\?;; esac
XXX pullup-8 (Perhaps -7 as well, though that shell version has
much more relevant bugs than this one.) This bug is not in -6 as
that has no charclass support.
itself, or some other function which is still active.
This was a long known bug (fixed ages ago in the FreeBSD sh) which
hadn't been fixed as in practice, the situation that causes the
problem simply doesn't arise .. ASAN found it in the sh dotcmd
tests which do have this odd "feature" in the way they are written
(but where it never caused a problem, as the tests are so simple
that no mem is ever allocated between when the old version of the
function was deleted, and when it finished executing, so its code
all remained intact, despite having been freed.)
The fix is taken from the FreeBSD sh.
XXX -- pullup-8 (after a while to ensure no other problems arise).
The current implementation of operations - + * / % could cause Undefined
Behavior and in narrow cases (INT64_MIN / -1 and INT64_MIN % -1) SIGFPE
and crash dumping core.
Detected with MKSANITIZER enabled for the Undefined Behavior variation:
# eval expr '4611686018427387904 + 4611686018427387904'
/public/src.git/bin/expr/expr.y:315:12: runtime error: signed integer overflow: 4611686018427387904 + 4611686018427387904 cannot be represented in type 'long'
All bin/t_expr ATF tests pass now in a sanitized userland.
Sponsored by <The NetBSD Foundation>
UBSan can detect that, when switching a login to root, there is an unportable
left shift operation:
$ su -
Password:
/public/src.git/bin/ksh/eval.c:598:13: runtime error: left shift of 1073741824 by 1 places cannot be represented in type 'int'
#
Sponsored by <The NetBSD Foundation>
ksh also does some strange things with it, like put it in argument lists.
No functional change intended.
PR bin/53237 ksh: remove register keyword by Nia Alarie
leading whitespace in the value of var (because strtoimax() does)
but did not allow trailing whitespace. The effect is that some
cases which work when written as $(( ${var:-0} )) fail when written
as $(( var )), without the explicit $ expansion.
Fix that - allow trailing whitespace. However, continue to insist
upon at least one digit (a non-null var that contains nothing but
whitespace is still an error).
Note: posix is not helpful here, it simply requires that the variable
contain "a value that forms a valid integer constant" (with an optional
+ or - sign).
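For example:
	var=' 42 '
	echo $(( ${var:-0} + 1 ))	# always worked: the expanded text is re-tokenised
	echo $(( var + 1 ))		# previously failed on the trailing blank; now prints 43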
Don't synerr on
${var-anything
more}
The newline in the middle of the var expansion is permitted.
Bug reported by Martijn Dekker from his modernish tests.
XXX pullup-8
and every option to ulimit built-in. The show-or-set text is already
supplied *both* before and after the list. Pedantically repeating it
for each option just adds a lot of visual clutter that gets in the way
of actually using this fragment of the manual page as a quick
reference.
parameters. Use more .Ic and .Ar when defining syntax.
The manual is still rather inconsistent e.g. when referring to
parameters where it randomly uses both $0 and 0 or $@ and @ - but I'm
not shaving that yak at least for now.
- HAVE_GCC=5 is now the default (vs. HAVE_GCC=53 we've been using for
GCC 5.4 and GCC 5.5.)
- remove some more GCC 4.8 code. we don't support GCC 4 here.
- adjust set lists to gcc=5 from gcc=53.
add some basic HAVE_GCC=6 handling (totally unused so far.)
This removes a clash with well-known libc function tsearch(3) from POSIX.
This allows building ksh against MSan.
The new name might not be perfect, but long term ksh should be switched to
the libc version.
Sponsored by <The NetBSD Foundation>
This removes a clash with well-known libc function tdelete(3) from POSIX.
This allows building ksh against MSan.
The new name might not be perfect, but long term ksh should be switched to
the libc version.
Sponsored by <The NetBSD Foundation>
LINENO ... this is what resulted (with thanks for the grammar lessons,
and sundry references provided!)
No date (Dd) bump - there is no change of substance here, just (hopefully)
a clearer way of saying the same thing.
rawcpu, of type int, is declared inside main() and can be passed
uninitialized to setpinfo().
The setpinfo() function has a branch checking the value of rawcpu:
	if (!rawcpu)
		pi[i].pcpu /= 1.0 - exp(ki[i].p_swtime * log_ccpu);
rawcpu is set to 1 with the command line argument "-C".
-C Change the way the CPU percentage is calculated by using a
"raw" CPU calculation that ignores "resident" time (this
normally has no effect).
Bug reproducible with an invocation: "ps u". It hides with "ps uC".
Initialize rawcpu by default to 0, before the getopt(3) machinery.
Detected with MSan running on NetBSD/amd64.
Sponsored by <The NetBSD Foundation>
of uninit'd field also fix a couple more (still harmless) related
technical C usage bugs.
Explaining why these issues were harmless would take too long to include here.
For example, given $(( 08 + 1 )) (or similar) instead of reporting
"expecting end of expression" - the generic error for parse failed,
which happened as this was parsed as $(( 0 8 + 1 )) because the 8
could not be a part of an octal constant, and that expr makes no sense -
instead say "unexpected '8' (out of range) in numeric constant: 08"
which makes the cause of the error more obvious.
NFC for valid expressions, just the error message (and the way the
error is detected).
some other install media, mini-roots, etc.) It is unlikely that
such a shell will be used for much script debugging (and the old -x
still exists of course) and it adds a little bloat, so, zap...
The ancient unused (unrelated) xioctl() function is gone as well
(from all shells).
output to the stderr which existed when the -X option was (last) enabled.
It also enables tracing by enabling -x (and when reset, +X, also resets
the 'x' flag (+x)). Note that it is still -x/+x which actually
enables/disables the trace output. Hence "apparent variant" - what -X
actually does (aside from setting -x) is just to lock the trace output,
rather than having it follow wherever stderr is later redirected.
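Roughly (a sketch of the behaviour described above):
	set -X			# locks trace output to the current stderr (and sets -x)
	exec 2>/dev/null	# a later redirection of stderr...
	echo hello		# ...does not take the trace with it; it still goes to the old stderr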
to I32 P64 systems - keep nextc first, as that's used in macros,
and nleft next, as that's used (and both are updated) in the same macro,
which is used frequently, this increases the chance they're in the
same cache line (unchanged from before). Beyond that it matters less,
so just shuffle a bit to avoid internal padding when pointers are 64 bits.
Note that there are just 3 of these structs (currently), even if there was
to be a memory saving (there probably won't be, trailing padding will eat it)
it would be of the order of 12 or 24 bytes total, so all this really
just panders to my sense of rightness....
Note to anyone who might be tempted, please don't update the struct
initializers to use newer C forms - eventually sh is planned to become
a host tool, and a separable package, so it wants to remain able to be
compiled using older (though at least ansi) compilers that implement only
older C variants.
output includes a single quote (') then see if using double-quotes
to quote it is reasonable (if no chars that are magic in " also appear).
If so, and if the string is not entirely the ' character, then
use " quoting. This avoids some ugly looking results (occasionally).
Also, fix a bug introduced about 20 months ago where null strings
in xtrace output are dropped, instead of made explicit ('').
To observe this, before you get the fix: set -x; echo '' (or similar.)
Move a comment from the wrong place to the right place.
the same order that option flags with a similar property are sorted.
This corresponds with the change made to the sort order of the short
names made in the previous update (1.4).
Right now, this change makes no difference at all, as there are no
long option names that differ only in char case (yet.)
Correct a (relatively harmless) use after free in prompt expansion
processing [detected by asan.]
Relatively harmless: as (while incorrect) the way the data is (was)
used more or less guaranteed that the buffer contents would be
unaltered until well after they are (were) no longer wanted (this
is the expanded prompt string, it is just output (or copied into
libedit internal storage) and forgotten).
This should make no visible difference to anyone (not using asan or
similar.)
XXX pullup -8
-p var: set var to identifier, from arg list, or PID if no job args)
of the job for which status is returned (becomes $? after wait.)
Note: var is unset if the status returned from wait came from wait
itself rather than from some job exiting (so it is now possible to
tell whether 127 means "no such job" or "job did exit(127)", and
whether $? > 128 means "wait was interrupted" or "job was killed
by a signal or did exit(>128)". ($? is too limited to allow
indicating whether the job died with a signal, or exited with a
status such that it looks like it did...)
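A hedged sketch of the intended use (jobvar is just an illustrative name):
	sleep 2 & pid=$!
	wait -p jobvar "$pid"
	echo "status $? reported for: ${jobvar-no job (wait itself)}"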
via <termios.h> (and document them.) Bump libc minor number for them.
Arrange for "struct winsize" to become visible in <termios.h>
Fix stty(1) so that "cols" is reported as the arg to set number of columns,
and "columns" is the alias, rather than the other way around, as "cols" is
what has been added to POSIX.
This is to conform with updates to be included in 1003.1 issue 8
(whenever that gets published) currently available at:
http://austingroupbugs.net/view.php?id=1053 (see note 3863)
http://austingroupbugs.net/view.php?id=1151 (see note 3856)
process group (-g), the process leader pid (-p) ($! if the job was &'d)
and the job identifier (-j) (the %n that refers to the job) in addition to
(default) the list of all pids in the job (which it has always done).
No change to the (single) "job" arg, which is a specifier of the job:
the process leader pid, or one of the % forms, and defaults to %% (aka %+).
(This is all now documented in sh(1))
Also document the jobs command properly (no change to the command, just
document what it actually is.)
And while here, a whole new section in sh(1) "Job Control". It probably
needs better wording, but this is (perhaps) better than the nothing that
was there before.
Don't delete jobs from the jobs table merely because they finished,
if they are not the job we are waiting upon. (bin/52640 part 1)
In a sub-shell environment, don't allow wait to find jobs from the
parent shell that had already exited (before the sub-shell was
created) and return status for them as if they are our children.
(bin/52640 part 2)
Don't have the "jobs" command also be an implicit "wait" command
in non-interactive shells. (bin/52641)
Use WCONTINUED (when it exists) so we can report on stopped jobs that
"mysteriously" move back to running state without the user issuing
a "bg" command (eg: kill -CONT <pid>) Previously they would keep
being reported as stopped until they exited.
When a job is detected as having changed status just as we're
issuing a "jobs" command (i.e.: the change occurred between the last
prompt and the jobs command being entered) don't report it twice,
once from the status change, and then again in the jobs command
output. Once is enough (keep the jobs output, suppress the other).
Apply some sanity to the way jobs_invalid is processed - ignore it
in getjob() instead of just ignoring it most of the time there, and
instead always check it before calling getjob() in situations where
we can handle only children of the current shell. This allows the
(totally broken) save/clear/restore of jobs_invalid in jobscmd() to
be done away with (previously an error while in the clear state would
have left jobs_invalid incorrectly cleared - shouldn't have mattered
since jobs_invalid => subshell => error causes exit, but better to be safe).
Add/improve the DEBUG mode tracing.
XXX pullup -8
code duplication, and reducing the size of /bin/sh by a trivial amount.
NFCI.
This is being done now as there are two other changes forthcoming, both
of which benefit - one would result in even more code duplication without
this, the other might need to alter how this is done, and doing it after this
means there's just one place to change (if required).
actually work (but just happen to, today, and in some cases, even
that trusts to some luck.)
It has been recently pointed out to me that the man page (ie: this
file) doesn't give any real guidance to what is really acceptable,
and what is not.
The CAVEATS section does note that the grammar is ambiguous, but then
just says that test(1) implements what POSIX requires, and refers
readers to the relevant section of the POSIX standard for more details.
That is probably asking too much of the average reader...
So, add some extra information in the CAVEATS with what is defined to work,
and what should be avoided. Not all of the POSIX rules are here, but this
might hopefully help script authors avoid some of the pitfalls.
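The flavour of that guidance (a sketch, not the new manual text itself):
	test "$a" = yes && test "$b" = yes	# well defined: simple tests combined by the shell
	test "$a" = yes -a "$b" = yes		# ambiguous for awkward values of $a and $b; avoid -a/-o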
1. A serious bug introduced 3 1/2 months ago (approx) (rev 1.116) which
broke all but the simple cases of ~ expansions is fixed (amazingly,
given the magnitude of this problem, no-one noticed!)
2. An ancient bug (probably from when ~ expansion was first added in 1994, and
certainly is in NetBSD-6 vintage shells) where ${UnSeT:-~} (and similar)
does not expand the ~ is fixed (note that ${UnSeT:-~/} does expand,
this should give a clue to the cause of the problem).
3. A fix/change to make the effects of ~ expansions on ${UnSeT:=whatever}
identical to those in UnSeT=whatever. In particular, with HOME=/foo
${UnSeT:=~:~} now assigns, and expands to, /foo:/foo rather than ~:~
just as VAR=~:~ assigns /foo:/foo to VAR. Note this is even after the
previous fix (ie: appending a '/' would not change the results here.)
It is hard to call this one a bug fix for certain (though I believe it is)
as many other shells also produce different results for the ${V:=...}
expansions than they do for V=... (though not all the same as we did).
POSIX is not clear about this, expanding ~ after : in VAR=whatever
assignments is clear, whether ${U:=whatever} assignments should be
treated the same way is not stated, one way or the other.
4. Change to make ':' terminate the user name in a ~ expansion in all cases,
not only in assignments. This makes sense, as ':' is one character that
cannot occur in user names, no matter how otherwise weird they become.
bash (incl in posix mode) ksh93 and bosh all act this way, whereas most
other shells (and POSIX) do not. Because this is clearly an extension
to POSIX, do this one only when not in posix mode (not set -o posix).
causes a core dump in some exotic circumstances (when restoring local
variables when a function returns). ("build.sh makewrapper" exposed it.)
This was introduced in 1.63 - not as part of the substance of that
change (addition) but as an unrelated "must be the right thing to do"
cleanup, which wasn't...
This is a legacy interface from 4.4BSD, and it was
introduced to overcome shortcomings of ptrace(2) at that time, which are
no longer relevant (performance). Today /proc/#/ctl offers a narrow
subset of ptrace(2) commands and is not suitable for use by modern
applications beyond simplistic tracing scenarios.
This removal will simplify kernel internals. Users will still be able to
use all the other /proc files.
This change won't affect other procfs files or the Linux compat
features within mount_procfs(8). /proc/#/ctl isn't available on Linux.
Remove:
- /proc/#/ctl from mount_procfs(8)
- P_FSTRACE note from the documentation of ps(1)
- /proc/#/ctl and filesystem tracing documentation from mount_procfs(8)
- KAUTH_REQ_PROCESS_PROCFS_CTL documentation from kauth(9)
- source code file miscfs/procfs/procfs_ctl.c
- PFSctl and procfs_doctl() from sys/miscfs/procfs/procfs.h
- KAUTH_REQ_PROCESS_PROCFS_CTL from sys/sys/kauth.h
- PSL_FSTRACE (0x00010000) from sys/sys/proc.h
- P_FSTRACE (0x00010000) from sys/sys/sysctl.h
Reduce code complexity after removal of this functionality.
Update TODO.ptrace accordingly: remove two entries about /proc tracing.
Do not keep legacy notes as comments in the headers about removed
PSL_FSTRACE / P_FSTRACE, as this interface had a small number of users
(close or equal to zero).
Proposed on tech-kern@.
All filesystem tracing utility users are encouraged to switch to ptrace(2).
Sponsored by <The NetBSD Foundation>
Implementation largely obtained from FreeBSD, with adaptations to meet the
needs and style of this sh, some updates to agree with the current POSIX spec,
and a few other minor changes.
The POSIX spec for this ( http://austingroupbugs.net/view.php?id=249 )
[see note 2809 for the current proposed text] is yet to be approved,
so might change. It currently leaves several aspects as unspecified,
this implementation handles those as:
Where more than 2 hex digits follow \x this implementation processes the
first two as hex, the following characters are processed as if the \x
sequence was not present. The value obtained from a \nnn octal sequence
is truncated to the low 8 bits (if a bigger value is written, eg: \456.)
Invalid escape sequences are errors. Invalid \u (or \U) code points are
errors if known to be invalid, otherwise can generate a '?' character.
Where any escape sequence generates nul ('\0') that char, and the rest of
the $'...' string is discarded, but anything remaining in the word is
processed, ie: aaa$'bbb\0ccc'ddd produces the same as aaa'bbb'ddd.
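For instance (hedged; see the locale note below for the \u escapes):
	printf '%s\n' $'first\tsecond'	# a real tab between the words
	printf '%s\n' $'caf\u00e9'	# currently produces the UTF-8 encoding of é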
Differences from FreeBSD:
FreeBSD allows only exactly 4 or 8 hex digits for \u and \U (as does C,
but the current sh proposal differs.) FreeBSD also continues consuming
as many hex digits as exist after \x (permitted by the spec, but insane),
and rejects \u0000 as invalid. Some of this is possibly because
their implementation is based upon an earlier proposal, perhaps note 590 -
though that has been updated several times.
Differences from the current POSIX proposal:
We currently always generate UTF-8 for the \u & \U escapes. We should
generate the equivalent character from the current locale's character set
(and UTF8 only if that is what the current locale uses.)
If anyone would like to correct that, go ahead.
We (and FreeBSD) generate (X & 0x1F) for \cX escapes where we should generate
the appropriate control character (SOH for \cA for example) with whatever
value that has in the current character set. Apart from EBCDIC, which
we do not support, I've never seen a case where they differ, so ...
Avoid mangling history when editing is enabled, and the prompt contains a \n
Also, allow empty input lines into history when they are being appended to
a previous (partial) command (but not when they would just make an empty entry).
For all the gory details, see the PR.
Note nothing here actually makes prompts containing \n work correctly
when editing is enabled, that's a libedit issue, which will be addressed
some other time.
Don't ignore unexpected reserved words after ';'
Don't allow any random token type as a case stmt pattern, only a word.
Those are ancient ash bugs and do not affect correct scripts.
Don't ignore redirects in a case stmt list where the list is nothing but
redirects (if the pattern matches, the redirects should be performed).
That was introduced when a redirect only case stmt list was allowed
(older shells had generated a syntax error.)
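That is, something like
	case x in
	x)	> matched.out ;;	# the list is only a redirect; matched.out must be created
	esac
must now create the file when the pattern matches (matched.out is just an
illustrative name).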
Random cleanups/refactoring taken from or inspired by the FreeBSD sh
parser ... use makename() consistently to create a NARG node - we
were using it in a couple of places but most NARG node creation was open
coded. Introduce consumetoken() (from FreeBSD) to handle the fairly
common case where exactly one token type must come next, and we need to
check that, and skip past it when found (or error) and linebreak() (new)
to handle places where optional \n's are permitted.
Both previously open coded.
Simplify list() by removing its second arg, which was only ever used when
handling the end of a `` (old style command substitution). Simply move
the code from inside list() to just after its call in the `` case (from
FreeBSD.)
(operators all come first, then TWORD, then keywords), and switch
from using TIF to define KWDOFFSET to using TWORD (the barrier,
rather than the token that happens to be first after it.)
to cause (when set, which it is not by default) the exit status of a
pipe to be 0 iff all commands in the pipe exited with status 0, and
otherwise, the status of the rightmost command to exit with a non-0
status.
In the doc, while describing this, also reword some of the text about
commands in general, how they are structured, and when they are executed.