version of the POSIX standard (Issue 8). I believe we were already
compliant with what is to be required, but POSIX is now encouraging
(and will likely require in a later version) that if a tilde expansion
produces a string ending in a '/', and the '~' that was expanded
is immediately followed by a '/' in the input word, one of those
two slashes be omitted. The worst (current) example of this is
when HOME=/ and we expand ~/foo - previously producing //foo, which is
(in POSIX) a path with implementation-defined semantics, and so not
what we should be generating by accident. Change that, so now if
the ~ prefix expansion ends in a '/' and there is a '/' following
immediately after, the resulting word contains only one of those
chars (in the example just given, we will now produce /foo instead).
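For example (a quick sketch of the new behaviour):
HOME=/
echo ~/foo
now prints /foo where it previously printed //foo.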
POSIX is also making it clear that the expansion that results from
the tilde expansion is treated as quoted (not subject to pathname
expansion, or field splitting, or any var/arith/command substitutions)
and that if HOME="" the expansion of ~ must generate "" (not nothing).
Our implementation did all of that already (though older versions
used to treat an empty expansion of HOME the same as if HOME was
unset - that was fixed some time ago).
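One way to observe the HOME="" case (just a sketch, not one of our tests):
HOME=
set -- ~
echo $#
should print 1, as the (quoted) empty expansion remains as an empty
field rather than vanishing.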
The actual modification made here is probably smaller than this log entry,
and without added comments, certainly is!
Here we go again... One more time to redo how here docs are
processed (it has been a few years since the last time!)
This is actually a relatively minor change, mostly to timing
(to just when things happen). Now here docs are expanded at the
same time the "filename" word in a redirect is expanded, rather than
later, when the here doc was being sent to its process. This actually
makes things more consistent - but does break one of the ATF tests
which was testing that we were (effectively) internally inconsistent
in this area.
Not all shells agree on the context in which redirection expansions
should happen: some make any side effects visible to the parent shell
(the majority do); others do the redirection expansions in a subshell,
so any side effects are lost. We used to have a foot in each camp,
with the majority for everything but here docs, and the minority for
here docs. Now we're all the way with LBJ ... (or something like that).
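A sketch of the kind of difference this makes (using the usual
${var:=value} side effect; the variable name X is just for illustration):
unset X
cat <<EOF >/dev/null
${X:=assigned}
EOF
echo "${X-still unset}"
should now print "assigned", the same as if the expansion had happened
in an ordinary "filename" redirect.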
Mostly adding DEBUG mode tracing (when appropriate verbose tracing
is enabled generally) whenever a shell (including subshell) process
exits, so the tracing should indicate why shells that vanish
did so.
Note for future investigators: if the relevant tracing is enabled,
and a (sub-)shell still simply seems to have vanished without trace,
the likely cause is that it was killed by a signal - and of those,
the most common that occurs is SIGPIPE.
extra && or || or something ... forgotten now) as part of a failed attempt
to fix an earlier bug (later fixed a better way) - when the extra
test (never committed) was removed, the now-redundant parentheses got
forgotten...
NFC.
the output will not be further processed (at all) so there is no need
to escape magic chars in the output, and doing so leaves stray CTLESC
chars in the here doc text. Not good. So don't do that...
To save a strlen() of the result, to determine the size of the here doc,
make rmescapes() return the length of the resulting string (this isn't
needed for other uses, so didn't happen previously).
Reported on current-users@ (2020-02-06) by Jun Ebihara
XXX pullup -9
a bracket expression in a pattern (ie: [[:THISNAME:]]). Previously
the code used strspn() to look for invalid chars in the name, and
then memcpy(); now we do the test and copy a character at a time.
This might, or might not, be faster, but it now correctly handles
\ quoted characters in the name (' and " quoting were already
dealt with, \ was too in an earlier version, but when the \ handling
changes were made, this piece of code broke).
Not exactly a vital bug fix (who writes [[:\alpha:]] or similar?)
but it should work correctly regardless of how obscure the usage is.
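That is (a sketch):
case a in [[:\alpha:]]) echo yes;; *) echo no;; esac
should now print "yes", treating [[:\alpha:]] just like [[:alpha:]].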
Problem noted by Harald van Dijk
XXX pullup -9
Fix handling of "$@" (that is, double quoted dollar at), when it
appears in a string which will be subject to field splitting.
Eg:
${0+"$@" }
More common usages, like the simple "$@" or ${0+"$@"} end up
being entirely quoted, so no field splitting happens, and the
problem was avoided.
See the PR for more details.
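A rough illustration of the fixed behaviour (a sketch, not the ATF
test itself):
set -- one two
printf '[%s]\n' ${0+"$@" }three
should produce the three fields [one] [two] and [three], the unquoted
trailing space causing a field break before "three".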
This ends up making a bunch of old hack code (and some that was
relatively new) vanish - for now it is just #if 0'd or commented out.
Cleanups of that stuff will happen later.
That some of the worst $@ hacks are now gone does not mean that processing
of "$@" does not retain a very special place in every hacker's heart.
RIP extreme ugliness - long live the merely ordinary ugly.
Added a new bin/sh ATF test case to verify that all this remains fixed.
matches the internal CTL* chars.
The earlier fixes handled CTL* char values in var expansions,
but not in various other places they can occur (positional
parameters, $@ $* -- even potentially $0 and ~ expansions,
as well as byte strings generated from a \u in a $'' string).
These should all be correctly handled now. There is a new
ISCTL() macro to make the test, rather than using the old
BASESYNTAX[c]==CCTL form (which is still a viable alternative)
as the new way allows compiler optimisations, and fewer mem
references, so it should be smaller and faster.
Also, be sure in all cases to remove any CTLESC (or other)
CTL* chars from all strings before they are made available
for any external use (there was one case missed - which didn't
matter when we weren't bothering to escape the CTL* chars at
all.)
XXX pullup-8 (will need to be via a patch) along with the Feb 4 fixes.
fixes) where a variable containing a CTL char (the only possibility used
to be CTLESC (0x81)) would lose that character if the variable was expanded
when "set -f" (noglob) was in effect.
1.128 made this worse by adding more 0x8z values (a couple more) which would
see the same behaviour, and one of those was noticed by Martijn Dekker.
The reasoning was that when noglob is on, when a var is expanded, there are
no magic chars, so (apparently) no need to escape anything. Hence nothing
was escaped ... including any CTL chars that happened to be present. When
we later rmescapes() the CTL chars that we expect might occur are summarily
removed - even if they weren't really CTL chars, but just data masquerading.
We must *always* escape any CTL char clones that are in the var value,
no matter what other conditions apply, and what we expect to happen next.
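A minimal reproduction sketch (0x81, ie: \201, is the byte value that
matches the internal CTLESC):
v=$(printf '\201')
set -f
printf %s $v | od -An -c
where the 0x81 byte must survive the expansion intact.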
While here, fix rmescapes() (and its $(()) clone, rmescapes_nl()) to
be more robust, less likely to forget to delete anything (which was
not the issue here, just the reverse) and in a DEBUG shell, have the
shell abort() if it encounters something in rmescapes() it is not
anticipating, so the code can be made to handle it, or if it should
not happen, we can find out why it did.
XXX pullup -8 (but will need to be via patch, code is quite different).
to fix the (unusual) idiom "${1+$@}" (the quotes are part of it).
This seems to have broken about 5 or 6 years ago (somewhere
between -6 and -7), I believe.
Note this is not the same as "$@" and also not the same as ${1+"$@"}
(much more common idioms) which both worked.
Also attempt to deal with "" more correctly, especially when it
appears adjacent to "$@" (or one of the similar constructs.)
This stuff is still all as ugly and hackish (and fragile) as is
possible to imagine, but in an effort to allow some of the weirdness
to eventually go away, the parser output has been made more
regular and all quoted (parts of) words always now start with
CTLQUOTEMARK and end with CTLQUOTEEND regardless of where the
quotes appear.
This allows us to tell the difference between """$@" and "$@"
which was impossible before - yet they are required to generate
different output when there are no args (when "$@" simply vanishes).
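That is, with no args (a sketch):
set --
for w in """$@"; do echo "[$w]"; done
must produce one empty field ([]), while the same loop over a plain
"$@" must produce none.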
Needless to say that change had ramifications all over the place.
To simplify any similar change in the future, there are some new
macros that can generally be used to detect the "noise" data when
processing words, rather than open coding that every time (which
meant that there would *always* be one which missed getting
updated...)
Several other bugs (of my making, and older ones) are also fixed.
The aim is that (aside from anything that is detecting the cases
that were broken before - which were all unlikely uses of sh
syntax) these changes should have no external visible impact.
Sure...
to hide meta-characters in the result when the expansion was
in (double) quotes, and so should not be further processed.
Most of this has been OK for a long while, but \ needs hiding
as well, which complicates things, as \ cannot simply be hidden
in the syntax tables as one of the group of random special characters.
This was fixed earlier for simple variable expansions, but
every variety has its own code path ($var uses different code
than $n which is different than $(...), which is different
again from ~ expansions, and also from what $'...' produces).
This could be fixed by moving them all to a common code path,
but that's harder than it seems. The form in which the data
is made available differs, so one common routine would need
a whole bunch of different "get the next char or indicate end"
methods - probably via passing in an accessor function.
That's all a lot of churn, and would probably slow the shell.
Instead, just make macros for doing the standard tests, and
use those instead of open coding (differently) each time.
This way some of the code paths don't end up forgetting to
handle '\' (which is different than all the others).
This removes one optimisation ... when no escaping is needed
(like just $var (unquoted) where magic chars (think '*') in
the value are intended to remain magic), the code avoided doing
two tests for each char ("do we need escapes" and "is this char
one that needs escaping") by choosing two different syntax
tables (choice made outside the loop) - one of which never
returns the magic "needs escaping" result, and the other does
when appropriate, and then just avoiding the "do we need escapes"
test for each character processed. Then when '\' was fixed,
there needed to be another test for it, as it cannot (for other
reasons) be the same as all the others for which "this char
need escaping" is true. So that added a 2nd test for each char...
Not all the code paths were updated. Hence the bugs...
nb: this is all rarely seen in the wild, so it is no big
surprise that no-one ever noticed.
Now the "use two different syntax tables" is gone (the two
returned the same for '\' which is why '\' needed special
processing) - and in order to avoid two tests for each
char (plus the \ test) we duplicate the loops, one of which
tests each char to see if it needs an escape, the 2nd just
copies them. This should be faster in the "no escapes"
code path (though that is not the point) and perhaps also
in the "escapes needed" path (no indirect reference to
the syntax table - though that would probably be in a
register) but makes the code slightly bigger. For /bin/sh
the text segment (on amd64) has grown by 48 bytes. But
it still uses the same number of 512 byte pages (and hence
also any bigger page size). The resulting file size
(/bin/sh) is identical before and after. So is /rescue/sh
(or /rescue/anything-else).
Attempt to correctly deal with \ (both when it is a literal,
in appropriate cases, and when it appears as CTLESC when it was
detected as a quoting character during parsing).
In a pattern, in sh, no quoted character can ever be anything other
than a literal character. This is quite different than regular
expressions, and even different than other uses of glob matching,
where shell quoting is not an issue.
In something like
ls ?\*.c
the ? is a meta-character, the * is a literal (it was quoted). This
is nothing new, sh has handled that properly for ever.
But the same happens with
VAR='?\*.c'
and
ls $VAR
which has not always been handled correctly. Of course, in
ls "$VAR"
nothing in VAR is a meta-character (the entire expansion is quoted)
so even the '\' must match literally (or more accurately, no matching
happens - VAR simply contains an "unusual" filename). But if it had
been
ls *"$VAR"
then we would be looking for filenames that end with the literal 5
characters that make up $VAR.
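Putting that together (a sketch, with made-up file names):
touch ab.c 'x*.c'
VAR='?\*.c'
ls $VAR
should list only x*.c - the ? remains magic, while the \* matches
nothing but a literal '*'.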
The same kinds of things are required of matching patterns in case
statements, and of sub-strings with the % and # operators in variable
expansions.
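For example (a sketch of the # operator case):
x='?*.c'
pat='\?'
echo "${x#$pat}"
where the unquoted \ in the pattern quotes the '?', so it matches the
literal leading '?' of $x, and the result is *.c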
While here, the final remnant of the ancient !! pattern matching
hack has been removed (the code that actually implemented it was
long gone, but one small piece remained, not doing any real harm,
but potentially wasting time - if someone gave a pattern which would
once have invoked that hack.)
possibilities that we do not currently handle all that well.
This mostly means (for now) making sure that quoted pattern
magic characters (as well as quoted sh syntax magic chars)
are properly marked, so they remain known as being quoted,
and do not turn into pattern magic. Also, make sure that an
unquoted \ in a pattern always quotes whatever comes next
(which, unlike in regular expressions, includes inside []
matches).
that is longer than we can handle the same way we treat an unknown
class name (as a valid char class which contains nothing, so never
matches). Previously a "too long" class name invalidated the
class, so [:very-long-name:] would match any of '[' ':' 'v' ...
(note: "very-long-name" is not long enough to trigger this, but you
get the idea!)
However, the name itself has a restricted syntax ([[:***:]] is not a
character class, it is a match for one of a '[' ':' or '*', followed by
a ']') which we did not implement - check the syntax of the name before
treating it as a character class (but we do add '_' to alphanumerics
as legal class name characters).
expansion, case patterns, etc) do not force '[' to be a member of
every class.
Before this fix, try:
case [ in [[:alpha:]]) echo Huh\?;; esac
XXX pullup-8 (Perhaps -7 as well, though that shell version has
much more relevant bugs than this one.) This bug is not in -6 as
that has no charclass support.
1. A serious bug introduced 3 1/2 months ago (approx) (rev 1.116) which
broke all but the simple cases of ~ expansions is fixed (amazingly,
given the magnitude of this problem, no-one noticed!)
2. An ancient bug (probably from when ~ expansion was first added in 1994, and
   certainly is in NetBSD-6 vintage shells) where ${UnSeT:-~} (and similar)
   does not expand the ~ is fixed (note that ${UnSeT:-~/} does expand;
   this should give a clue to the cause of the problem.)
3. A fix/change to make the effects of ~ expansions on ${UnSeT:=whatever}
   identical to those in UnSeT=whatever. In particular, with HOME=/foo,
   ${UnSeT:=~:~} now assigns, and expands to, /foo:/foo rather than ~:~,
   just as VAR=~:~ assigns /foo:/foo to VAR (see the example after this
   list). Note this is even after the previous fix (ie: appending a '/'
   would not change the results here.)
It is hard to call this one a bug fix for certain (though I believe it is)
as many other shells also produce different results for the ${V:=...}
expansions than they do for V=... (though not all the same as we did).
POSIX is not clear about this, expanding ~ after : in VAR=whatever
assignments is clear, whether ${U:=whatever} assignments should be
treated the same way is not stated, one way or the other.
4. Change to make ':' terminate the user name in a ~ expansion in all cases,
not only in assignments. This makes sense, as ':' is one character that
cannot occur in user names, no matter how otherwise weird they become.
bash (incl. in posix mode), ksh93, and bosh all act this way, whereas most
other shells (and POSIX) do not. Because this is clearly an extension
to POSIX, do this one only when not in posix mode (not set -o posix).
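The example promised above (for item 3):
HOME=/foo
unset U
echo ${U:=~:~}
echo "$U"
both of which should now print /foo:/foo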
Implementation largely obtained from FreeBSD, with adaptations to meet the
needs and style of this sh, some updates to agree with the current POSIX spec,
and a few other minor changes.
The POSIX spec for this ( http://austingroupbugs.net/view.php?id=249 )
[see note 2809 for the current proposed text] is yet to be approved,
so might change. It currently leaves several aspects as unspecified,
this implementation handles those as:
Where more than 2 hex digits follow \x this implementation processes the
first two as hex, the following characters are processed as if the \x
sequence was not present. The value obtained from a \nnn octal sequence
is truncated to the low 8 bits (if a bigger value is written, eg: \456.)
Invalid escape sequences are errors. Invalid \u (or \U) code points are
errors if known to be invalid, otherwise can generate a '?' character.
Where any escape sequence generates nul ('\0') that char, and the rest of
the $'...' string is discarded, but anything remaining in the word is
processed, ie: aaa$'bbb\0ccc'ddd produces the same as aaa'bbb'ddd.
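Eg: (illustrating the choices just described):
echo $'\x411'
prints A1 (two hex digits, then a literal '1'), while
echo aaa$'bbb\0ccc'ddd
prints aaabbbddd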
Differences from FreeBSD:
FreeBSD allows only exactly 4 or 8 hex digits for \u and \U (as does C,
but the current sh proposal differs.) FreeBSD also continues consuming
as many hex digits as exist after \x (permitted by the spec, but insane),
and rejects \u0000 as invalid. Some of this is possibly because
their implementation is based upon an earlier proposal, perhaps note 590 -
though that has been updated several times.
Differences from the current POSIX proposal:
We currently always generate UTF-8 for the \u & \U escapes. We should
generate the equivalent character from the current locale's character set
(and UTF-8 only if that is what the current locale uses.)
If anyone would like to correct that, go ahead.
We (and FreeBSD) generate (X & 0x1F) for \cX escapes where we should generate
the appropriate control character (SOH for \cA for example) with whatever
value that has in the current character set. Apart from EBCDIC, which
we do not support, I've never seen a case where they differ, so ...
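That is (a sketch):
printf %s $'\cA' | od -An -c
should show a single 001 byte (SOH, ie: 'A' & 0x1F).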
purpose) in exposing the bug in its implementation, go back to not using
it when not needed for DEBUG TRACE purposes. This change should have no
practical effect on either a DEBUG shell (where the STACKSTRNUL() calls
remain) or a non DEBUG shell where they are not needed.
PR bin/52302 (core dump with interactive shell, here doc and error
on same line) is fixed. (An old bug.)
echo "$( echo x; for a in $( seq 1000 ); do printf '%s\n'; done; echo y )"
consistently prints 1002 lines (x, 1000 empty ones, then y) as it should
(And you don't want to know what it did before, or why.) (Another old one.)
(Recently added) Problems with ~ expansion fixed (mem management related).
Proper fix for the cwrappers configure problem (which includes the quick
fix that was done earlier, but extends upon that to be correct). (This was
another newly added problem.)
And the really devious (and rare) old bug - if STACKSTRNUL() needs to
allocate a new buffer in which to store the \0, calculate the size of
the string space remaining correctly: unlike when SPUTC() grows the
buffer, there is no actual data being stored in the STACKSTRNUL()
case, and the string space remaining was calculated as one byte too few.
That would be harmless, unless the next buffer also filled, in which
case it was assumed that it was really full, not one byte less, meaning
one junk char (a nul, or anything) was being copied into the next
(even bigger) buffer, corrupting the data.
Consistent use of stalloc() to allocate a new block of (stack) memory,
and grabstackstr() to claim a block of (stack) memory that had already
been occupied but not claimed as in use. Since grabstackstr is implemented
as just a call to stalloc() this is a no-op change in practice, but makes
it much easier to comprehend what is really happening. Previous code
sometimes used stalloc() when the use case was really for grabstackstr().
Change grabstackstr() to actually use the arg passed to it, instead of
(not much better than) guessing how much space to claim.
More care when using unstalloc()/ungrabstackstr() to return space: in
particular, when the stack must be returned to its previous state, rather
than just returning no-longer-needed space, neither of those works. They also don't
work properly if there have been (really, even might have been) any stack mem
allocations since the last stalloc()/grabstackstr(). (If we know there
cannot have been then the alloc/release sequence is kind of pointless.)
To work correctly in general we must use setstackmark()/popstackmark() so
do that when needed. Have those also save/restore the top of stack string
space remaining.
[Aside: for those reading this, the "stack" mentioned is not
in any way related to the thing used for maintaining the C
function call state, ie: the "stack segment" of the program,
but the shell's internal memory management strategy.]
More comments to better explain what is happening in some cases.
Also cleaned up some hopelessly broken DEBUG mode data that were
recently added (no effect on anyone but the poor semi-human attempting
to make sense of it...).
User visible changes:
Proper counting of line numbers when a here document is delimited
by a multi-line end-delimiter, as in
cat << 'REALLY
END'
here doc line 1
here doc line 2
REALLY
END
(which is an obscure case, but nothing says it should not work.) The \n
in the end-delimiter of the here doc (the last one) was not incrementing
the line number, which from that point on in the script would be 1 too
low (or more, for end-delimiters with more than one \n in them.)
With tilde expansion:
unset HOME; echo ~
changed to return getpwuid(getuid())->pw_dir instead of failing (returning ~)
POSIX says this is unspecified, which makes it difficult for a script to
compensate for being run without HOME set (as in env -i sh script), so
while not able to be used portably, this seems like a useful extension
(and is implemented the same way by some other shells).
Further, with
HOME=; printf %s ~
we now write nothing (which is required by POSIX - which requires ~ to
expand to the value of $HOME if it is set). Previously, if $HOME (in this
case) or a user's directory in the passwd file (for ~user) were a null
string, we failed the ~ expansion and left behind '~' or '~user'.
would have usually been set earlier, this change is mostly an effective
no-op, but it is better this way (just in case) - not observed to have
caused any problems.
the LINENO hack, and uses the LINENO var for both ${LINENO} and $((LINENO)).
(Code to invert the LINENO hack when required, like when de-compiling the
execution tree to provide the "jobs" command strings, is still included,
that can be deleted when the LINENO hack is completely removed - look for
refs to VSLINENO throughout the code. The var funclinno in parser.c can
also be removed, it is used only for the LINENO hack.)
This version produces accurate results: $((LINENO)) was made as accurate
as the LINENO hack made ${LINENO} which is very good. That's why the
LINENO hack is not yet completely removed, so it can be easily re-enabled.
If you can tell the difference when it is in use, or not in use, then
something has broken (or I managed to miss a case somewhere.)
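A trivial check (a sketch):
echo $LINENO $(( LINENO ))
should print the same (correct) line number twice.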
The way that LINENO works is documented in its own (new) section in the
man page, so nothing more about that, or the new options, etc, here.
This version introduces the possibility of having a "reference" function
associated with a variable, which gets called whenever the value of the
variable is required (that's what implements LINENO). There is just
one function pointer however, so any particular variable gets at most
one of the set function (as used for PATH, etc) or the reference function.
The VFUNCREF bit in the var flags indicates which func the variable in
question uses (if any - the func ptr, as before, can be NULL).
I would not call the results of this perfect yet, but it is close.
negative of a negative number, just add a positive number instead...
(the previous version came about purely as an accident of the way the
relevant piece of code was added and debugged.... that's my story anyway!)
Fixing this fixes a regression introduced earlier today (UTC) where
arithmetic expressions would be split correctly when the arithmetic
started at the beginning of a word:
echo $(( expression ))
where "begin" is 0, and so (begin, length) is the same as (begin, begin+length)
(aka: (begin,end) - and yes, "end" means 1 after last to consider).
but did not work correctly when the usage was
echo XXX$(( expression ))
(begin != 0) and would only split (some part of) the result of the expression.
This regression was also found by the new t_fsplit:split_arith
test case added earlier to the ATF tests for sh.
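A rough equivalent of what that test checks (a sketch, not the test
itself):
IFS=0
printf '[%s]\n' XXX$(( 404 ))
should now produce the two fields [XXX4] and [4].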
differently...)
In particular ${01} is now $1 not $0 (for ${0any-digits})
${4294967297} is most probably now ""
(unless you have a very large number of params)
it is no longer an alias for $1 (as (4294967297 & 0xFFFFFFFF) == 1)
$(( expr $(( more )) stuff )) is no longer the same as
$(( expr (( more )) stuff )) which was sometimes OK, as in:
$(( 3 + $(( 2 - 1 )) * 3 ))
but not always as in:
$(( 1$((1 + 1))1 ))
which should be 121, but was an arith syntax error as
1((1 + 1))1
is meaningless.
Probably some more. This also sprinkles a little const, splits a big
func that had 2 (kind of unrelated) purposes into two simpler ones,
and avoids some (semi-dubious) modifications (and restores) in the input
string to insert \0's when they were needed.
closing PR bin/50958
That meant adding the assignment operators ('=', and all of the +=, *= ...)
Currently, ++, --, and ',' are not implemented (none of those are required
by posix) but support for them (most likely ',' first) might be added later.
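Eg: (using the new assignment operators):
x=1
echo $(( x += 4 ))
echo $x
prints 5 twice - the += both evaluates to, and assigns, the new value.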
To do this, I removed the yacc/lex arithmetic parser completely, and
replaced it with a hand written recursive descent parser, that I obtained
from FreeBSD, who earlier had obtained it from dash (Herbert Xu).
While doing the import, I cleaned up the sources (changed some file names
to avoid requiring a clean build, or significant surgery to the obj
directories if "build.sh -u" was to be used - "build.sh -u" should work
fine as it is now) removed some dashisms, applied some KNF, ...
variable (with its current value set at 20160401) as discussed on
current-users and tech-userlevel. This also includes the necessary
support to implement it properly (particularly the unexportable
part) and adds options to the export command to support unexportable
variables. Also implement the "posix" option (no single letter
equivalent) which gets its default value from whether or not
POSIXLY_CORRECT is set in the environment when the shell starts
(but can be changed just like any other option using -o and +o on
the command line, or the set builtin command.) While there, fix
all uses of options so it is possible to have options that have a
short (one char) name, and no long name, just as it has been possible
to have options with a long name and no short name, though there
are currently none (with no long name). For now, the only use of
the posix option is to control whether ${ENV} is read at startup
by a non-interactive shell, so changing it with set is not useful
- that might change in the future. (from kre@)
a suggestion from him, the way the fix to PR bin/50993 was implemented
has changed a little. There are three steps involved in processing
a here document, reading it, parsing it, and then evaluating it
before applying it to the correct file descriptor for the command
to use. The third of those is not related to this problem, and
has not changed. The bug was caused by combining the first two
steps into one (and not doing it correctly - which would be hard
that way.) The fix is to split the first two stages into separate
events. The original fix moved the 2nd stage (parsing) to just
immediately before the 3rd stage (evaluation.) Jilles pointed out
some unwanted side effects from doing it that way, and suggested
moving the 2nd stage to immediately after the first. This commit
makes that change. The effect is to revert the changes to expand.c
and parser.h (which are no longer needed) and simplify slightly
the change to parser.c. (from kre@)
documents are processed. Now, when first detected, they are
simply read (the only change made to the text is to join lines
ended with a \ to the subsequent line, otherwise end marker detection
does not work correctly - for here docs with an unquoted end marker
only, of course). This patch also moves the "internal subroutine"
for looking for the end marker out of readtoken1() (which had to
happen as readtoken1 is no longer reading the here doc when it is
needed) - that uses code mostly taken from FreeBSD's sh (thanks!)
and along the way results in some restrictions on what the end
marker can be being removed. We still do not allow all we should.
(from kre@)
magic string " \t\n" all over the place, slightly improved
syntax error messages, restructured some of the code for
clarity, don't allow IFS to be imported through the environment,
and remove the (never) conditionally compiled ATTY option.
Apart from one or two syntax error messages, and ignoring IFS
if present in the environment, this is intended to have no
user visible changes. (from kre@)