after already writing the prompt (set with the -p option).
That results in nonsense like:
$ read -p foo
fooread: arg count
While here, improve the error message so it means something.
Now we will get:
$ read -p foo
read: variable name required
Usage: read [-r] [-p prompt] var...
[Detected by code reading while doing the work for the previous fix]
In 1.35 (March 2005) (the big read fixup), most escape handling and IFS
processing in the read builtin were corrected. However, two cases were
missed: first, a word (something to be assigned to any variable but the
last) in which every character is escaped (the code was relying on a
non-escaped char to set the "in a word" status); and second, trailing IFS
whitespace at the end of the line was being deleted even if the chars had
been escaped (the escape chars are no longer present by that point).
See the PR for more details (including the case that detected the problem).
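As a hypothetical illustration of the second case (not the case from the
PR; after the fix, a backslash-escaped space at the end of the input line
survives):
$ read v <<EOF
word\ 
EOF
$ printf '[%s]\n' "$v"
[word ]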
After fixing this, I looked at the FreeBSD code (normally might do it
before, but these fixes were trivial) to check their implementation.
Their code does similar things to what ours now does, but in a completely
different way; their read builtin is more complex than ours needs to
be (they handle more options). For anyone tempted to simply incorporate
their code, note that it relies upon infrastructure changes elsewhere
in the shell, so would not be a simple cut and drop in exercise.
This needs pullups to -3 -4 -5 -6 -7 -8 and -9 (fortunately this is
happening before -10 is branched, so will never be broken this way there).
-b (from FreeBSD) - set the block size used for display to 512 byte blocks
(overrides a contrary setting in BLOCKSIZE)
-H (from FreeBSD and Linux): like -h, but using SI units (powers of 10). Ugh.
-N suppress the header line (except with -P which requires it).
-f show only free space (or inodes) in a minimal format (implies -N)
(that is, with one file[system] specified, print 1 number only)
With -c, show only the total.
Intended to be useful for scripting (i.e., I needed it).
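For example (hypothetical output; the exact value and formatting here are
illustrative only):
$ df -f /usr
52331564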
While here, improve the usage message (group options where they apply,
there is no reason, for example, that -g should be shown differently
to -k -m ..., and those options aren't at all useful with -G)
Update the man page to match.
as an error (*). This typically occurs when a signal is received.
(*) For older versions, we already dealt with a short read(2) from the
remote host in sink(). But in other cases, i.e., write(2) to a local file
in sink(), and read(2)/write(2) in source(), an error was raised.
This version of rcp(1) can successfully send/receive files with older
versions, even if a short read(2)/write(2) is caused by SIGINFO.
Also, when a real error occurs, give up immediately instead of continuing
to send/receive wrong data.
Clean up the mess a little bit as well...
ksh -c '(i=10; echo $((++-+++i)))'
reported by Steffen Nurpmeso (not on a NetBSD list or PR).
Seems pointless to fix just one of the bugs in this thing, but this one was
easy enough (and stupid enough). (The "i=10" part is unimportant, as is the sub-shell).
(the job number, given jp, a pointer to a jobs table entry)
which was previously open coded in many places (mostly in DEBUG mode
trace messages, so not included in most shells, but there are
a few others).
Make the type of JNUM() be int rather than the ptrdiff_t the
open coded version became ... which when used in some printf()
type function arg list was cast to some other arbitrary (but not
consistent) int type for which there is a standard %Xd type
format conversion. Now we can (and do) just use %d for this.
If the number of jobs ever exceeds the range of an int, we would
have far more serious problems than the broken output this would
cause.
While here improve a comment or two, and use JOBRUNNING instead
of 0 where the intent is the former (JOBRUNNING is #defined as 0).
NFCI.
is essentially the same) arg string is generated, to lessen the chances
that the table of limits, and the arg string that allows limits to be
reported or set, will get out of sync. They weren't out of sync (as long
as we didn't grow an RLIMIT_SWAP); this is just tidier.
While here, reorder the limits table fields, and shrink a couple that
were needlessly wasteful, to save some space -- for most architectures
this should save 8 bytes per table entry (there are currently 13).
(Some minor code bloat offsets this slightly because of int type
promotions now required).
NFCI.
particularly perverse way, the error message for a bad octal
constant as the new umask value could incorrectly claim that the
-S option (which would need to be present to cause this issue)
was the detected bad value. Fix that to report the actual
incorrect arg.
And while fiddling, also check for args to umask that are too big
to be sane mask values (the biggest permitted is 07777) and use
mode_t as the mask variable type, rather than int.
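A hypothetical illustration (the exact message wording may differ):
$ umask -S 089
sh: umask: Illegal number: 089
Previously, a message for this error could have named "-S" as the bad
value rather than the argument actually given.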
make the -e option to "fc" fail to work (the commit message was about some
other changes entirely, so I can only assume this was committed by mistake).
It says a lot about the use of the fc command that no-one noticed that
this did not work properly for all this time.
Internally in sh, it is possible for built in commands to use either
getopt(3) (from libc) or the much simpler internal shell nextopt() routine
for option (flag) parsing. However it makes no sense to use getopt()
and then access a global variable set only by nextopt() instead of the
one getopt() sets (which is what the code had used previously, forever).
Use the correct variable again.
XXX pullup -9 -8 (-7 -6 -5 ...)
matter what $0 is (or is not) set to. This means that editrc(5)
lines that start "sh:" are used (in addition to those with no prefix,
which will usually be most of them), regardless of the name or manner in
which we were invoked.
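For example, a hypothetical ~/.editrc line like:
sh:bind -v
will now be applied by sh (here selecting vi style key bindings) no matter
what the shell was invoked as.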
OK christos@
getenv()/setenv()/unsetenv() which manipulate the environment
the shell was passed at entry.
These are a little odd in sh as that environment is copied into
the shell's internal variable data struct at shell startup, and
normally never accessed after that - in builtin commands (test,
printf, ...) getenv() is #defined to become an internal sh lookup
function instead, so even those never use the startup environment.
NFCI
standardised the table from V7. Nobody, including the original authors,
seems to have noticed this. Merge them and update the documentation.
Also fix the odd, inconsistent spelling of "pre-4.3BSD-Reno".
(From nabijaczleweli)
getopts has different behaviour if the leading character
of optstring is `:', so describe in more detail:
- no errors are printed (already there)
- unknown options set var to `?' and OPTARG to the unknown option
- missing arguments set var to `:' and OPTARG to the option name
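A brief illustration (the echoed messages are merely examples; getopts
itself prints nothing in this mode):
while getopts :ab: opt
do
	case $opt in
	a)	echo "option -a" ;;
	b)	echo "option -b with argument $OPTARG" ;;
	:)	echo "option -$OPTARG requires an argument" ;;
	\?)	echo "unknown option -$OPTARG" ;;
	esac
done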
Slight rewording of other paragraphs for more clarity.
that is to be referenced after a return from setjmp() via longjmp().
This doesn't ever seem to have caused a problem, but I think using
volatile vars is required here.
For reasons I never bothered to discover, even though this change
certainly requires a store into stack memory which wasn't required
before, earlier measurements showed the shell getting (slightly) smaller
with this change in place.
NFCI
Here we go again... One more time to redo how here docs are
processed (it has been a few years since the last time!)
This is actually a relatively minor change, mostly to timing
(to just when things happen). Now here docs are expanded at the
same time the "filename" word in a redirect is expanded, rather than
later when the heredoc was being sent to its process. This actually
makes things more consistent - but does break one of the ATF tests
which was testing that we were (effectively) internally inconsistent
in this area.
Not all shells agree on the context in which redirection expansions
should happen: some make any side effects visible to the parent shell
(the majority do), others do the redirection expansions in a subshell
so any side effects are lost. We used to have a foot in each camp,
with the majority for everything but here docs, and the minority for
here docs. Now we're all the way with LBJ ... (or something like that).
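A hypothetical illustration of the kind of case affected (the file name is
arbitrary; which of the assignments survive, and whether they are treated
alike, depends on which camp a shell is in):
$ unset X Y
$ : >"${Y=/tmp/junk}" <<EOF
${X=assigned}
EOF
$ echo "${X-unset} ${Y-unset}"
With this change, both expansions now happen in the same context, so X and
Y are treated the same way.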
all the sh options, also used with "set", are listed) in response to
a discussion on icb conveyed to me by Darrin B. Jewell.
A few improvements to the description of the "set" built-in as well.
Bump Dd to cover all of this month's changes (so far).
Make "hash" exit(!=0) (ie: exit(1)) if it writes an error message to
stderr as required by POSIX (it was writing "not found" errors, yet
still doing exit(0)).
Whether, when doing "hash foobar", and "foobar" is not found as a command
(not a built-in, not a function, and not found via a PATH search), that
should be considered an error differs between shells. All of the ksh
descendant shells say "no", write no error message in this case, and
exit(0) if no other errors occur. Other shells (essentially all) do
consider it an error, write a message to stderr, and exit(1) when this happens.
POSIX isn't clear; the bug report:
https://austingroupbugs.net/view.php?id=1460
which is not yet resolved, suggests that the outcome will be that
this is to be unspecified. Given the diversity, there might be no
other choice.
Have a foot in both camps - default to the "other shell" behaviour,
but add a -e option (no errors ... applies only to these "not found"
errors) to generate the ksh behaviour. Without other errors (like an
unknown option, etc) "hash -e anyname" will always exit(0).
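For example (the error message wording is illustrative; only the exit
statuses, and the presence of the message on stderr, matter):
$ hash no-such-command; echo $?
hash: no-such-command: not found
1
$ hash -e no-such-command; echo $?
0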
See the PR for details on how it all works now, or read the updated man page.
While here, when hash is in its other mode (reporting what is in the
table) check for I/O errors on stdout, and exit(1) (with an error
message!) if any occurred. This does not apply to output generated
by the -v option when command names are given (that output is incidental).
In sh.1 document all of this. Also add documentation for a bunch of
other options the hash command has had for years, but which were never
documented. And while there, clean up some other sections I noticed
needed improving (either formatting or content or both).
This affects (as best I can tell) only uses of ${LINENO} in PS4
when -x is enabled (and perhaps only when the list contains no
expansions). "for" like "case" (which was already handled) is
special in that it generates trace output before actually executing
any kind of simple command.
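A hypothetical way to observe the difference (the script and PS4 value are
arbitrary):
$ cat script
for i in a b c
do
	:
done
$ PS4='${LINENO}+ ' sh -x script
With the fix, LINENO in the trace line printed for the "for" itself should
now be correct.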
identified with the -u flag (that is, I hope I identified all
the ones that were missing it).
This change is a no-op (NFC) as the -u flag does nothing.
Still, just in case we find a use for it one day, and just as a
matter of general principle, we should get this correct.
echo.c: In function 'main':
echo.c:74:1: warning: control reaches end of non-void function
This raises 2 issues.
First, why with WARNS set to 6, which should include just about
everything, was this not causing problems with everyday builds?
Surely falling off the end of a non-void function without returning
a specific value is one of the more basic errors that should be fixed.
(Whatever the name of the function). Is there a missing -Wxxx option?
And second, does C99 really promise:
Remove unnecessary call to exit(0); returning from main is equivalent
since C99.
in the sense that simply falling out of main() is exit(0)? Or is it
simply saying that the return value of main() is the exit status (which
has been true for much longer than since C99)?
Mostly adding DEBUG mode tracing (when appropriate verbose tracing
is enabled generally) whenever a shell (including subshell) process
exits, so that the tracing should indicate why shells that vanish
did so.
Note for future investigators: if the relevant tracing is enabled,
and a (sub-)shell still simply seems to have vanished without trace,
the likely cause is that it was killed by a signal - and of those,
the most common that occurs is SIGPIPE.
Be explicit about what happens to PWD after a successful cd command.
Also be very clear that "cd" and "cd -P" are the same thing, and
the only cd variant implemented.
Also, when it is appropriate to print the new directory after a cd
command, note that it happens if interactive (as it always has here)
and also if the posix option is set (for POSIX compat, where "interactive"
is irrelevant). Mention that "cd -" is a case where the new directory
is printed (along with paths relative to a non-empty CDPATH entry,
and where the "cd old new" (string replacement in curdir) form is used).
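For example, in an interactive shell (paths purely illustrative):
$ pwd
/usr/src/bin/sh
$ cd sh ksh
/usr/src/bin/ksh
The new directory is printed because the "cd old new" substitution form
was used.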
While here document the new -e option to cd.
XXX pullup -9
In the pwd builtin, verify that curdir names '.' before
simply printing it. Never alter PWD or OLDPWD in the
pwd command.
Also while here, implement the (new: coming in POSIX, but has existed
for a while in several other shells) -e option to cd (with -e, cd -P
will exit(1) if the chdir() succeeds, but PWD cannot be discovered).
cd now prints the directory name used (if different from that given,
or cdprint is on) if interactive or (the new bit) in posix mode.
Some additional/changed comments added, and a DEBUG mode trace call
that was accidentally put inside an #if 0 block moved to where it
can do some good.
XXX pullup -9
for a function with unknown number & types of args, the compiler isn't
able to automatically convert to the correct type. Issue pointed out
in off-list e-mail by Roland Illig ... Thanks.
The first arg (pointer to where to put length of result) is of a known
type, so doesn't have the same issue - we can keep using NULL for that
one when the length isn't needed.
Also, make sure to return a correctly null terminated empty string in
the (absurd) case that there are no non-null args to strstrcat() (though
there are much better ways to generate "" on the stack). Since there is
currently just one call in the code, and it has real string args, this
isn't an issue for now, but who knows, some day.
NFCI - if there is any real change, then it is a change that is required.
XXX pullup -9 (together with the previous changes)
After almost 30 years, finally do the right thing and read $HOME/.profile
rather than .profile in the initial directory (it was that way in version
1.1 ...) All other ash descendants seem to have fixed this long ago.
While here, copy a feature from FreeBSD which allows "set +p" in a
privileged shell (one run by a setuid process, or with the -p flag) to reset
the privileges. Once done (the set +p) it cannot be undone (a later
set -p sets the 'p' flag, but that's all it does) - that just becomes a
one bit storage location.
We do this, as (also copying from FreeBSD, and because it is the right
thing to do) we don't run .profile in a privileged shell - FreeBSD runs
/etc/suid_profile in that case (not a good name, it also applies to setgid
shells) but I see no real need for that, we run /etc/profile in any case,
anything that would go in /etc/suid_profile can just go in /etc/profile
instead (with suitable guards so the commands only run in priv'd shells).
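A sketch of the sort of guard meant here (assuming the privileged state
shows up as the 'p' flag in $-):
# in /etc/profile
case "$-" in
*p*)	# commands that should run only in privileged (set -p) shells
	;;
esac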
One or two minor DEBUG mode changes (notably having priv'd shells identify
themselves in the DEBUG trace) and sh.1 changes with doc of the "set +p"
change, the effect that has on $PSc and a few other wording tweaks.
XXX pullup -9 (not -8, this isn't worth it for the short lifetime
that has left - if it took 28+ years for anyone to notice this, it
cannot be having all that much effect).
Lint can handle __COPYRIGHT and __RCSID, so there is no need to hide
them anymore.
Use proper type 'bool' for nflag, ensure correct types via lint's strict
bool mode.
Remove unnecessary call to exit(0); returning from main is equivalent
since C99.
No functional change.
28 years was more than enough for the useless 'continue' statement in
this do-while-0 "loop". Without the 'continue' statement, there is no
need for the "loop" anymore. The comment at its top was confusing since
the word 'while' suggested a loop, but there was none, so remove that as
well.
Pointed out by Tom Ivar Helbekkmo on source-changes-d.
No change to the resulting binary.
Lint complained about the do-while-0 loop that contained a continue. It
didn't state the reason for it, but indeed the code looked complicated.
Rewrite the code to be less verbose and to use common coding patterns.
No functional change.
exec.c(575): error: continue in 'do ... while (0)' loop [323]
jobs.c(203): error: continue in 'do ... while (0)' loop [323]
It is certainly a rarely used feature, I saw it the first time today and
had to look up its meaning in the C standard. But after that, I don't
see why a 'continue' statement in a 'do while' loop should be an error.
Maybe a warning since up to now I thought that 'continue' would jump
back to the top of the loop, while it really jumps to the bottom of the
loop body, for all 3 kinds of loops.
No change to the resulting binary. The 'return' statements are necessary
for GCC to generate the exact same object code, even though they can be
removed without affecting the functionality, as seen before the 'else'.
a way, but made more serious with the recent changes).
The n>&n operation (more or less a no-op, except it clears CLOEXEC)
should precede almost everything else - and simply be made to fail if
an attempt is made to apply it to a sh internal fd.
We were renumbering the internal fd (the n> part considered first),
which was dumb, but harmless, before; now we were rejecting the operation
(the >&n part) when n should not be visible to the script. That
made something of a mess (and could lead to the shell believing its
job control tty was at a fd it never got moved to).
Do things in the correct order, and simply fail that case for internal
fds (for every other n>xxx for any xxx sh simply renumbers its internal fd
n to some other fd before attempting the operation, even n>&- ... those are
all fine).
[In all the above the '>' is used in place of any redirect operator].
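A hypothetical example (assume fd 11 happens to be one of the shell's
internal descriptors; the message wording is illustrative):
$ exec 11>&11
sh: 11: Bad file descriptor
The operation now simply fails, instead of first renumbering the shell's
own fd out of the way.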
and keep the rest of the shell aware of any changes.
While here, modify 'ulimit -aSH' to print both the soft and hard limits
for the resources, rather than just (in this case, as H comes last) the
hard limit. In any other case when both S and H are present, and we're
examining a limit, use the soft limit (just as if neither were given).
No change for setting limits (both are set, unless exactly one of -H
or -S is given). However, we now check for overflow when converting
the value to be assigned, rather than just truncating the value however
it happens to work out...
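For example (values omitted or illustrative):
$ ulimit -aSH		# lists both the soft and hard limit for each resource
$ ulimit -S -H -n	# examining one limit: shows the soft limit, as if neither flag were given
$ ulimit -n 1024	# setting with neither -S nor -H: both limits are set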
be available ("13") issue reported by Jan Schaumann on netbsd-users.
This fixes a bug in the earlier fix (a day or so ago) which could allow the
shell's idea of which fd range was in use by the script to get wildly
incorrect, but now also actually looks to see which fds are in use as
renamed versions of other user fds during the lifetime of a redirection
which needs
to be able to be undone (most redirections occur after a fork and are
permanent in the child process). Attempting to access such a fd (as with
attempts to access a fd in use by the shell for its own purposes) is treated
as an attempt to access a closed fd (EBADF). Attempting to reuse the fd
for some other purpose (which should be rare, even for scripts attempting
to cause problems, since the shell generally knows which fds the script
wants to use, and avoids them) will cause the renamed (renumbered) fd
to be renamed again (moved aside to some other available fd), just as
happens with the shell's private fds.
Also, when a generic fd is required, don't give up because of EMFILE
or similar unless there are no available fds at all (we might prefer >10
or bigger, but if there are none there, use anything). This avoids
redirection errors when ulimit -n has been set small, and all the fds >10
that are available have been used, but we need somewhere to park the old
user of a fd while we reuse that fd for the redirection.
by the shell were available for manipulation by scripts (or the user).
These issues were reported by Jan Schaumann on netbsd-users.
The first allows the user to reference sh internal fds, and is
a simple fix - any sh internal fd is simply treated as if it were closed
when referenced by the script. These fds can be discovered by
examining /proc/N/fd so it is not difficult for a script to discover
which fd it should attempt to access.
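For instance, a script could attempt something like this (fd 10 here is
hypothetical, whichever descriptor /proc shows sh holding; the message
wording is illustrative):
$ ls /proc/$$/fd
0 1 2 10
$ echo hello >&10
sh: 10: Bad file descriptor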
The second allows the user to reference a user level fd which is
one that is normally available to it, but at a point where it should
no longer be visible (when that fd has been redirected for a built-in
command, the original fd needs to be saved so it can be restored; that
saving fd should not be accessible). It is not as easy for the
script to determine which fd to attempt here, as the relevant one
exists only during the lifetime of a built-in command (and similar),
but there are ways in some cases (aside from looking at /proc from
another process).
Fix this one by watching which fds the user script is attempting
to use, and avoid using those as temporary fds. This is possible in
this case as we know what command is being run, before we need to
save the fds it uses. That's different from the earlier case where
when the shell allocates its fds we have no idea what it might
reference later.
Also clean up a couple of other minor code issues (NFC intended) that
I noticed while here...
commands with multiple synopsis lines (eg: trap).
But there really must be a better way to achieve this effect than
the way it is accomplished here, and I'm hoping some wizard who
understands mdoc much better than I do will revert this change and
do it using some inspired magic incantation instead.
do setproctitle(NULL) (which is not the same thing at all). Do the
same with jobs -Z '' as setting the title to "sh: " isn't useful.
Improve the way this is documented, and note that it is only done
this way because zsh did it first (ie: pass on the blame; doing this
in the jobs command is simply absurd).