PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
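A minimal sketch of the adjusted pattern, assuming an already-open
connection "conn" (error handling abbreviated):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Drain pending notifications: call PQconsumeInput() before each
     * PQnotifies(), not just once before the loop. */
    static void
    drain_notifies(PGconn *conn)
    {
        PGnotify   *notify;

        if (!PQconsumeInput(conn))
            fprintf(stderr, "%s", PQerrorMessage(conn));
        while ((notify = PQnotifies(conn)) != NULL)
        {
            fprintf(stderr, "NOTIFY \"%s\", payload \"%s\"\n",
                    notify->relname, notify->extra);
            PQfreemem(notify);
            PQconsumeInput(conn);   /* absorb any further packets */
        }
    }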
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
Avoid allocating never-used entries in stmtCacheEntries[], other than the
intentionally-unused zero'th entry. Tie the array size directly to the
bucket count and size, rather than having undocumented dependencies between
three magic constants. Fix the hash calculation to be platform-independent
--- notably, it was sensitive to the signed'ness of "char" before, not to
mention having an unnecessary hard-wired dependency on the existence and
size of type "long long". (The lack of complaints says it's been a long
time since anybody tried to build PG on a compiler without "long long",
and certainly with the requirement for C99 this isn't a live bug anymore.
But it's still not per project coding style.) Fix ecpg_auto_prepare's
new-cache-entry path so that it increments the exec count for the new
cache entry not the dummy zero'th entry.
The last of those is an actual bug, though one of little consequence;
the rest is mostly future-proofing and neatnik-ism. Doesn't seem
necessary to back-patch.
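For illustration only (not the code actually committed), a hash of this
general shape avoids the signed-char pitfall mentioned above by walking
the statement text as unsigned char:

    /* Sketch: hashing through "unsigned char" gives the same result
     * whether plain "char" is signed or unsigned on the platform. */
    static unsigned int
    hash_stmt_text(const char *stmt, unsigned int nbuckets)
    {
        unsigned int hash = 0;

        for (const unsigned char *p = (const unsigned char *) stmt; *p; p++)
            hash = hash * 31 + *p;
        return hash % nbuckets;
    }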
This removes a megabyte of storage that isn't used at all in ecpglib's
default operating mode --- you have to enable auto-prepare to get any
use out of it. Seems well worth the trouble to allocate on demand.
Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
Looking at this code made my head hurt. Format the comments more
like the way it's done elsewhere, break a few overly long lines.
No actual code changes in this commit.
Removing the separate Windows expected-files in commit f1885386f
turns out to have been too optimistic: on most (but not all!) of our
Windows buildfarm members, the tests still print floats with three
exponent digits, because they're invoking the native printf()
not snprintf.c.
But rather than put back the extra expected-files, let's hack
the three tests in question so that they adjust float formatting
the same way snprintf.c does.
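Roughly the sort of adjustment involved (a sketch, not the committed test
code): squeeze a Windows-style three-digit exponent with a leading zero
down to the two-digit form snprintf.c emits.

    #include <string.h>

    /* Rewrite "1.5e+005" into "1.5e+05"; leave genuine three-digit
     * exponents and already-two-digit forms alone. */
    static void
    trim_exponent(char *num)
    {
        char   *e = strchr(num, 'e');

        if (e && (e[1] == '+' || e[1] == '-') &&
            e[2] == '0' && e[3] != '\0' && e[4] != '\0')
            memmove(e + 2, e + 3, strlen(e + 3) + 1);
    }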
Discussion: https://postgr.es/m/18890.1539374107@sss.pgh.pa.us
Windows, alone among our supported platforms, likes to emit three-digit
exponent fields even when two digits would do. Adjust such results to
look like the way everyone else does it. Eliminate a bunch of variant
expected-output files that were needed only because of this quirk.
Discussion: https://postgr.es/m/2934.1539122454@sss.pgh.pa.us
Build a third version of libpgcommon.a, with -fPIC and -DFRONTEND,
as commit ea53100d5 did for src/port. Use that in libpq to avoid
symlinking+rebuilding source files retail.
Also adjust ecpg to use the new src/port and src/common libraries.
Arrange to install these libraries, too, to simplify out-of-tree
builds of shared libraries that need any of these modules.
Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
Client applications should get this function, if they need it, from
libpgport.
The fact that it's exported from libpq is a hack left over from before
we set up libpgport. It's never been documented, and there's no good
reason for non-PG code to be calling it anyway, so hopefully this won't
cause any problems. Moreover, with the previous setup it was not real
clear whether our clients that use the function were getting it from
libpgport or libpq, so this might actually prevent problems.
The reason for changing it now is that in the wake of commit ea53100d5,
some linkers won't export the symbol, apparently because it's coming from
a .a library instead of a .o file. We could get around that by continuing
to symlink pqsignal.c into libpq as before; but unless somebody complains
very hard, I don't want to adopt such a kluge.
Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
libpq and ecpg need shared-library-friendly versions of assorted src/port/
and src/common/ modules. Up to now, they got those by symlinking the
individual source files and compiling them locally. That's baroque, and a
pain to maintain, and it results in some amount of duplicated compile work.
It might've made sense when only a couple of files were needed, but the
list has grown and grown and grown :-(
It makes more sense to have the originating directory build a third variant
of libpgport.a/libpgcommon.a containing modules built with $(CFLAGS_SL),
and just link that into the shared library. Unused files won't get linked,
so the end result should be the same.
This patch makes a down payment on that idea by having src/port/ build
such a library and making libpq use it. If the buildfarm doesn't expose
fatal problems with the approach, I'll extend it to the other cases.
Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
strerror.c now requires strlcpy() in some cases, and a couple of the
ecpg libraries did not have that at hand. Pull it in from src/port/
following the usual recipe. Per buildfarm.
I started out with the idea that we needed to detect use of %m format specs
in contexts other than elog/ereport calls, because we couldn't rely on that
working in *printf calls. But a better answer is to fix things so that it
does work. Now that we're using snprintf.c all the time, we can implement
%m in that and we've fixed the problem.
This requires also adjusting our various printf-wrapping functions so that
they ensure "errno" is preserved when they call snprintf.c.
Remove elog.c's handmade implementation of %m, and let it rely on
snprintf to support the feature. That should provide some performance
gain, though I've not attempted to measure it.
There are a lot of places where we could now simplify 'printf("%s",
strerror(errno))' into 'printf("%m")', but I'm not in any big hurry
to make that happen.
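The errno-preservation requirement amounts to this kind of discipline in
each wrapper (a sketch under assumed names, not the actual code):

    #include <errno.h>
    #include <stdarg.h>
    #include <stdio.h>

    /* Capture the caller's errno before doing anything that might clobber
     * it, and restore it just before formatting, so a %m in the format
     * (where supported) reports the caller's error, not our own. */
    static int
    log_printf(const char *fmt, ...)
    {
        int         save_errno = errno;
        va_list     args;
        int         result;

        /* ... any bookkeeping that could change errno would go here ... */

        errno = save_errno;
        va_start(args, fmt);
        result = vfprintf(stderr, fmt, args);
        va_end(args);
        return result;
    }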
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
We've spent an awful lot of effort over the years in coping with
platform-specific vagaries of the *printf family of functions. Let's just
forget all that mess and standardize on always using src/port/snprintf.c.
This gets rid of a lot of configure logic, and it will allow a saner
approach to dealing with %m (though actually changing that is left for
a follow-on patch).
Preliminary performance testing suggests that as it stands, snprintf.c is
faster than the native printf functions for some tasks on some platforms,
and slower for other cases. A pending patch will improve that, though
cases with floating-point conversions will doubtless remain slower unless
we want to put a *lot* of effort into that. Still, we've not observed
that *printf is really a performance bottleneck for most workloads, so
I doubt this matters much.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
This provides the features that used to exist in useful_strerror()
for users of strerror_r(), too. Also, standardize on the GNU convention
that strerror_r returns a char pointer that is never NULL.
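In sketch form, normalizing to that convention looks about like this;
STRERROR_R_INT stands in for a configure-detected symbol set when
strerror_r returns int (an assumption here, modeled on PostgreSQL's own
configure test):

    #include <stdio.h>
    #include <string.h>

    /* Always hand back a usable, non-NULL string, whichever strerror_r
     * variant the platform provides.  Illustration only. */
    static char *
    my_strerror_r(int errnum, char *buf, size_t buflen)
    {
    #ifdef STRERROR_R_INT
        /* XSI flavor: returns an int and fills buf */
        if (strerror_r(errnum, buf, buflen) != 0)
            snprintf(buf, buflen, "unrecognized error %d", errnum);
        return buf;
    #else
        /* GNU flavor: returns buf or a static string, never NULL */
        return strerror_r(errnum, buf, buflen);
    #endif
    }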
I notice that libpq's win32.c contains a variant version of strerror_r
that probably ought to be folded into strerror.c. But lacking a
Windows environment, I should leave that to somebody else.
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
elog.c has long had a private strerror wrapper that handles assorted
possible failures or deficiencies of the platform's strerror. On Windows,
it also knows how to translate Winsock error codes, which the native
strerror does not. Move all this code into src/port/strerror.c and
define strerror() as a macro that invokes it, so that both our frontend
and backend code will have all of this behavior.
I believe this constitutes an actual bug fix on Windows, since AFAICS
our frontend code did not report Winsock error codes properly before this.
However, the main point is to lay the groundwork for implementing %m
in src/port/snprintf.c: the behavior we want %m to have is this one,
not the native strerror's.
Note that this throws away the prior use of src/port/strerror.c,
which was to implement strerror() on platforms lacking it. That's
been dead code for nigh twenty years now, since strerror() was
already required by C89.
We should likewise cause strerror_r to use this behavior, but
I'll tackle that separately.
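The arrangement is roughly as below (simplified; the real code lives in
src/port/strerror.c, with the macro redirection in a common header):

    #include <stdio.h>
    #include <string.h>

    extern char *pg_strerror(int errnum);
    #define strerror(e) pg_strerror(e)

    /* Wrapper that never returns NULL or an empty string; Winsock
     * translation would be added here on Windows.  Not thread-safe as
     * written; illustration only. */
    char *
    pg_strerror(int errnum)
    {
        static char buf[64];
        char       *msg = (strerror)(errnum);   /* parens dodge the macro */

        if (msg == NULL || *msg == '\0')
        {
            snprintf(buf, sizeof(buf), "unrecognized error %d", errnum);
            msg = buf;
        }
        return msg;
    }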
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
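The core of the idea, as a sketch (not initdb's actual code): read the
symlink and take whatever follows ".../zoneinfo/" as the zone name.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Return the zone name (e.g. "Asia/Tokyo") in zonebuf, or NULL if
     * /etc/localtime isn't a readable symlink into an IANA-style tree. */
    static const char *
    guess_system_timezone(char *zonebuf, size_t buflen)
    {
        char        target[1024];
        ssize_t     len = readlink("/etc/localtime", target,
                                   sizeof(target) - 1);
        char       *p;

        if (len < 0)
            return NULL;
        target[len] = '\0';
        p = strstr(target, "/zoneinfo/");
        if (p == NULL)
            return NULL;
        snprintf(zonebuf, buflen, "%s", p + strlen("/zoneinfo/"));
        return zonebuf;
    }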
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
When we removed the ecpg-specific versions, we also removed the
"(PostgreSQL)" from the --version output, which we show in other
programs.
Reported-by: Ioseph Kim <pgsql-kr@postgresql.kr>
The following parameters have been parsed in lossy ways when specified
in a connection string processed by libpq:
- connect_timeout
- keepalives
- keepalives_count
- keepalives_idle
- keepalives_interval
- port
Overflowing values and the presence of incorrect characters were not
properly checked, leading libpq to try to use such values and fail with
unhelpful error messages. This commit hardens the parsing of those
parameters so that incorrect values are detected and reported clearly.
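The kind of strict parsing involved looks roughly like this (a sketch,
not necessarily the committed code): reject trailing garbage and
out-of-range values instead of silently truncating them.

    #include <errno.h>
    #include <limits.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    parse_int_param(const char *value, int *result)
    {
        char   *end;
        long    val;

        errno = 0;
        val = strtol(value, &end, 10);
        if (end == value || *end != '\0')
            return false;           /* empty string or trailing junk */
        if (errno == ERANGE || val < INT_MIN || val > INT_MAX)
            return false;           /* out of range */
        *result = (int) val;
        return true;
    }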
Author: Fabien Coelho
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808171206180.20841@lancre
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib. This is definitely
not what we want, because the frontend versions often behave slightly
differently. Up to now it's been "slight" enough that nobody noticed;
but with the addition of SCRAM support functions in src/common, we're
observing crashes due to the difference between palloc and malloc
memory allocation rules, as reported in bug #15367 from Jeremy Evans.
The purpose of this patch is to create a direct test for this type of
mis-linking, so that we know whether any given platform requires extra
measures to prevent using the wrong functions. If the test fails, it
will lead to connection failures in the contrib/postgres_fdw regression
test. At the moment, *BSD platforms using ELF format are known to have
the problem and can be expected to fail; but we need to know whether
anything else does, and we need a reliable ongoing check for future
platforms.
Actually fixing the problem will be the subject of later commit(s).
Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
Ensure that pg_saslprep() initializes its output argument to NULL in
all failure paths, and then remove the redundant initialization that
some (not all) of its callers did. This does not fix any live bug,
but it reduces the odds of future bugs of omission.
Also add a comment about why the existing failure-path coding is
adequate.
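Schematically, the pattern being enforced is just this (a generic sketch,
not pg_saslprep itself):

    #include <stdlib.h>
    #include <string.h>

    static int
    prepare_string(const char *input, char **output)
    {
        *output = NULL;             /* well-defined on every failure path */

        if (input == NULL || input[0] == '\0')
            return -1;              /* failure: *output is already NULL */

        *output = strdup(input);    /* stand-in for the real transformation */
        return (*output != NULL) ? 0 : -1;
    }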
Back-patch so as to keep the function's API consistent across branches,
again to forestall future bug introduction.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/16558.1536407783@sss.pgh.pa.us
libpq connection options as returned by PQconndefaults() have a
"dispchar" field that determines (among other things) whether an option
is a "debug" option, which shouldn't be shown by default to clients.
postgres_fdw makes use of that to control which connection options to
accept from a foreign server configuration.
Curiously, the "options" option, which allows passing configuration
settings to the backend server, was listed as a debug option, which
prevented it from being used by postgres_fdw. Maybe it was once meant
for debugging, but it's clearly in general use nowadays.
So change the dispchar for it to be the normal non-debug case. Also
remove the "debug" reference from its label field.
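For reference, this is roughly how dispchar gets consulted on the client
side; postgres_fdw applies the same kind of filter to decide which options
a foreign server may set.

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* List connection options, skipping debug-only and password entries. */
    static void
    list_visible_options(void)
    {
        PQconninfoOption *opts = PQconndefaults();

        if (opts == NULL)
            return;
        for (PQconninfoOption *opt = opts; opt->keyword != NULL; opt++)
        {
            if (strcmp(opt->dispchar, "D") == 0)
                continue;           /* debug option, hidden by default */
            if (strcmp(opt->dispchar, "*") == 0)
                continue;           /* password field */
            printf("%s\n", opt->keyword);
        }
        PQconninfoFree(opts);
    }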
Reported-by: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.
The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:
1. If immediately followed by a comment or +-, >= <= <> would be given
the old precedence of Op rather than the correct new precedence;
2. If followed by a comment, != would be returned as Op rather than as
NOT_EQUAL, causing it not to be found at all;
3. If followed by a comment or +-, the => token for named arguments
would be lexed as Op, causing the argument to be mis-parsed as a
simple expression, usually causing an error.
Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.
Backpatch to 9.5 where the problem was introduced.
Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
The lexer's handling of operators contained an O(N^3) hazard when
dealing with long strings of + or - characters; it seems hard to
prevent this case from being O(N^2), but the additional N multiplier
was not needed.
Backpatch all the way since this has been there since 7.x, and it
presents at least a mild hazard in that trying to do Bind, PREPARE or
EXPLAIN on a hostile query could take excessive time (without
honouring cancels or timeouts) even if the query was never executed.
Historically, we looked up the target hostname in connectDBStart, so that
PQconnectPoll did not need to do DNS name resolution. The patches that
added multiple-target-host support to libpq preserved this division of
labor; but it's really nonsensical now, because it means that if any one
of the target hosts fails to resolve in DNS, the connection fails. That
negates the no-single-point-of-failure goal of the feature. Additionally,
DNS lookups aren't exactly cheap, but the code did them all even if the
first connection attempt succeeds.
Hence, rearrange so that PQconnectPoll does the lookups, and only looks
up a hostname when it's time to try that host. This does mean that
PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
that, you should be using hostaddr, as the documentation has always
specified. It seems fairly unlikely that any applications would really
care whether the lookup occurs inside PQconnectStart or PQconnectPoll.
In addition to calling out that fact explicitly, do some other minor
wordsmithing in the docs around the multiple-target-host feature.
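For context, the documented nonblocking sequence looks about like this
(error reporting abbreviated); with this commit, any DNS lookup, and hence
any blocking on it, happens inside PQconnectPoll() as each host is tried:

    #include <sys/select.h>
    #include <libpq-fe.h>

    static PGconn *
    connect_async(const char *conninfo)
    {
        PGconn     *conn = PQconnectStart(conninfo);
        PostgresPollingStatusType status = PGRES_POLLING_WRITING;

        if (conn == NULL)
            return NULL;
        if (PQstatus(conn) == CONNECTION_BAD)
        {
            PQfinish(conn);
            return NULL;
        }

        while (status != PGRES_POLLING_OK && status != PGRES_POLLING_FAILED)
        {
            int         sock = PQsocket(conn);
            fd_set      fds;

            FD_ZERO(&fds);
            FD_SET(sock, &fds);
            if (status == PGRES_POLLING_READING)
                select(sock + 1, &fds, NULL, NULL, NULL);
            else
                select(sock + 1, NULL, &fds, NULL, NULL);
            status = PQconnectPoll(conn);
        }
        if (status != PGRES_POLLING_OK)
        {
            /* real code would report PQerrorMessage(conn) first */
            PQfinish(conn);
            return NULL;
        }
        return conn;
    }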
Since this seems like a bug in the multiple-target-host feature,
backpatch to v10 where that was introduced. In the back branches,
avoid moving any existing fields of struct pg_conn, just in case
any third-party code is looking into that struct.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
Since our substitute snprintf now returns a C99-compliant result,
there's no need anymore to have complicated code to cope with pre-C99
behavior. We can just make configure substitute snprintf.c if it finds
that the system snprintf() is pre-C99. (Note: I do not believe that
there are any platforms where this test will trigger that weren't
already being rejected due to our other C99-ish feature requirements for
snprintf. But let's add the check for paranoia's sake.) Then, simplify
the call sites that had logic to cope with the pre-C99 definition.
I also dropped some stuff that was being paranoid about the possibility
of snprintf overrunning the given buffer. The only reports we've ever
heard of that being a problem were for Solaris 7, which is long dead,
and we've sure not heard any reports of these assertions triggering in
a long time. So let's drop that complexity too.
Likewise, drop some code that wasn't trusting snprintf to set errno
when it returns -1. That would be not-per-spec, and again there's
no real reason to believe it is a live issue, especially not for
snprintfs that pass all of configure's feature checks.
Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
Fix a small number of places that were testing the result of snprintf()
but doing so incorrectly. The right test for buffer overrun, per C99,
is "result >= bufsize" not "result > bufsize". Some places were also
checking for failure with "result == -1", but the standard only says
that a negative value is delivered on failure.
(Note that this only makes these places correct if snprintf() delivers
C99-compliant results. But at least now these places are consistent
with all the other places where we assume that.)
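In sketch form, the check being standardized on (a hypothetical helper,
for illustration only):

    #include <stdbool.h>
    #include <stdio.h>

    /* C99 snprintf returns the length the output would have required, so
     * truncation is "result >= bufsize"; failure is any negative result,
     * not specifically -1. */
    static bool
    build_command(char *buf, size_t bufsize,
                  const char *prog, const char *path)
    {
        int     len = snprintf(buf, bufsize, "%s --file=%s", prog, path);

        if (len < 0)
            return false;               /* formatting failure */
        if ((size_t) len >= bufsize)
            return false;               /* buffer too small: truncated */
        return true;
    }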
Also, make psql_start_test() and isolation_start_test() check for
buffer overrun while constructing their shell commands. There seems
like a higher risk of overrun, with more severe consequences, here
than there is for the individual file paths that are made elsewhere
in the same functions, so this seemed like a worthwhile change.
Also fix guc.c's do_serialize() to initialize errno = 0 before
calling vsnprintf. In principle, this should be unnecessary because
vsnprintf should have set errno if it returns a failure indication ...
but the other two places this coding pattern is cribbed from don't
assume that, so let's be consistent.
These errors are all very old, so back-patch as appropriate. I think
that only the shell command overrun cases are even theoretically
reachable in practice, but there's not much point in erroneous error
checks.
Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
Commit 5f374fe7a attempted to turn the connect_timeout from an overall
maximum time limit into a per-host limit, but it didn't do a great job of
that. The timer would only get restarted if we actually detected timeout
within connectDBComplete(), not if we changed our attention to a new host
for some other reason. In that case the old timeout continued to run,
possibly causing a premature timeout failure for the new host.
Fix that, and also tweak the logic so that if we do get a timeout,
we advance to the next available IP address, not to the next host name.
There doesn't seem to be a good reason to assume that all the IP
addresses supplied for a given host name will necessarily fail the
same way as the current one. Moreover, this conforms better to the
admittedly-vague documentation statement that the timeout is "per
connection attempt". I changed that to "per host name or IP address"
to be clearer. (Note that reconnections to the same server, such as for
switching protocol version or SSL status, don't get their own separate
timeout; that was true before and remains so.)
Also clarify documentation about the interpretation of connect_timeout
values less than 2.
This seems like a bug, so back-patch to v10 where this logic came in.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/5735.1533828184@sss.pgh.pa.us
GNUmakefile.in defined a macro "garbage" that seems to have been meant
as a suitable target for automatic "rm -rf" treatment, but it isn't
actually used anywhere (and indeed never was, AFAICT).
Moreover, we have concluded that the Makefiles shouldn't take it upon
themselves to remove files that aren't expected by-products of building,
so that doing anything like that would be against project policy anyway.
Hence, just remove the macro.
Grepping around finds another violation of that policy in ecpg/preproc,
so clean that up too.
Daniel Gustafsson (ecpg change by me)
Discussion: https://postgr.es/m/AFBEF63E-E19D-4EBB-9F08-4617CDC751ED@yesql.se
The logic in PQconnectPoll() did not take care to ensure that all of
a PGconn's internal state variables were reset before trying a new
connection attempt. If we got far enough in the connection sequence
to have changed any of these variables, and then decided to try a new
server address or server name, the new connection might be completed
with some state that really only applied to the failed connection.
While this has assorted bad consequences, the only one that is clearly
a security issue is that password_needed didn't get reset, so that
if the first server asked for a password and the second didn't,
PQconnectionUsedPassword() would return an incorrect result. This
could be leveraged by unprivileged users of dblink or postgres_fdw
to allow them to use server-side login credentials that they should
not be able to use.
Other notable problems include the possibility of forcing a v2-protocol
connection to a server capable of supporting v3, or overriding
"sslmode=prefer" to cause a non-encrypted connection to a server that
would have accepted an encrypted one. Those are certainly bugs but
it's harder to paint them as security problems in themselves. However,
forcing a v2-protocol connection could result in libpq having a wrong
idea of the server's standard_conforming_strings setting, which opens
the door to SQL-injection attacks. The extent to which that's actually
a problem, given the prerequisite that the attacker needs control of
the client's connection parameters, is unclear.
These problems have existed for a long time, but became more easily
exploitable in v10, both because it introduced easy ways to force libpq
to abandon a connection attempt at a late stage and then try another one
(rather than just giving up), and because it provided an easy way to
specify multiple target hosts.
Fix by rearranging PQconnectPoll's state machine to provide centralized
places to reset state properly when moving to a new target host or when
dropping and retrying a connection to the same host.
Tom Lane, reviewed by Noah Misch. Our thanks to Andrew Krasichkov
for finding and reporting the problem.
Security: CVE-2018-10915
There are some problems with the tls-unique channel binding type. It's not
supported by all SSL libraries, and strictly speaking it's not defined for
TLS 1.3 at all, even though at least in OpenSSL, the functions used for it
still seem to work with TLS 1.3 connections. And since we had no
mechanism to negotiate what channel binding type to use, there would be
awkward interoperability issues if a server only supported some channel
binding types. tls-server-end-point seems feasible to support with any SSL
library, so let's just stick to that.
This removes the scram_channel_binding libpq option altogether, since there
is now only one supported channel binding type.
This also removes all the channel binding tests from the SSL test suite.
They were really just testing the scram_channel_binding option, which
is now gone. Channel binding is used if both client and server support it,
so it is used in the existing tests. It would be good to have some tests
specifically for channel binding, to make sure it really is used, and the
different combinations of a client and a server that do or don't
support it. The current set of settings makes it hard to write such
tests, but I did test those things manually, by disabling
HAVE_BE_TLS_GET_CERTIFICATE_HASH and/or
HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH.
I also removed the SCRAM_CHANNEL_BINDING_TLS_END_POINT constant. This is a
matter of taste, but IMO it's more readable to just use the
"tls-server-end-point" string.
Refactor the checks on whether the SSL library supports the functions
needed for tls-server-end-point channel binding. Now the server won't
advertise, and the client won't choose, the SCRAM-SHA-256-PLUS variant, if
compiled with an OpenSSL version too old to support it.
In passing, add some sanity checks that the chosen SASL mechanism,
SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, matches whether the SCRAM exchange
used channel binding or not. For example, the client might select the
non-channel-binding variant SCRAM-SHA-256 but then use channel binding
in the SCRAM message anyway. It's harmless from a security point of view,
I believe, and I'm not sure if there are some other conditions that would
cause the connection to fail, but it seems better to be strict about these
things and check explicitly.
Discussion: https://www.postgresql.org/message-id/ec787074-2305-c6f4-86aa-6902f98485a4%40iki.fi
Commit 1944cdc98 changed PQhost() to return the hostaddr value when that
is specified and host isn't. This is a good idea in general, but
fe-auth.c and related files contain PQhost() calls for which it isn't.
Specifically, when we compare SSL certificates or other server identity
information to the host field, we do not want to use hostaddr instead;
that's not what's documented, that's not what happened pre-v10, and
it doesn't seem like a good idea.
Instead, we can just look at connhost[].host directly. This does what
we want in v10 and up; in particular, if neither host nor hostaddr
were given, the host field will be replaced with the default host name.
That seems useful, and it's likely the reason that these places were
coded to call PQhost() originally (since pre-v10, the stored field was
not replaced with the default).
Back-patch to v10, as 1944cdc98 (just) was.
Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
Before v10, we always searched ~/.pgpass using the host parameter,
and nothing else, to match to the "hostname" field of ~/.pgpass.
(However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
"localhost".) In v10, this got broken by commit 274bb2b38, repaired by
commit bdac9836d, and broken again by commit 7b02ba62e; in the code
actually shipped, we'd search with hostaddr if both that and host were
specified --- though oddly, *not* if only hostaddr were specified.
Since this is directly contrary to the documentation, and not
backwards-compatible, it's clearly a bug.
However, the change wasn't totally without justification, even though it
wasn't done quite right, because the pre-v10 behavior has arguably been
buggy since we added hostaddr. If hostaddr is specified and host isn't,
the pre-v10 code will search ~/.pgpass for "localhost", and ship that
password off to a server that most likely isn't local at all. That's
unhelpful at best, and could be a security breach at worst.
Therefore, rather than just revert to that old behavior, let's define
the behavior as "search with host if provided, else with hostaddr if
provided, else search for localhost". (As before, a host name matching
DEFAULT_PGSOCKET_DIR is replaced by localhost.) This matches the
behavior of the actual connection code, so that we don't pick up an
inappropriate password; and it allows useful searches to happen when
only hostaddr is given.
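In sketch form (names here are illustrative, not libpq's actual
variables), the key used to search ~/.pgpass becomes:

    #include <string.h>

    static const char *
    pgpass_lookup_key(const char *host, const char *hostaddr,
                      const char *default_socket_dir)
    {
        if (host != NULL && host[0] != '\0')
            return (strcmp(host, default_socket_dir) == 0) ? "localhost"
                                                           : host;
        if (hostaddr != NULL && hostaddr[0] != '\0')
            return hostaddr;
        return "localhost";
    }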
While we're messing around here, ensure that empty elements within a
host or hostaddr list select the same behavior as a totally-empty
field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
if DEFAULT_PGSOCKET_DIR is /tmp. Things worked that way in some cases
already, but not consistently so, which contributed to the confusion
about what key ~/.pgpass would get searched with.
Update documentation accordingly, and also clarify some nearby text.
Back-patch to v10 where the host/hostaddr list functionality was
introduced.
Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.
Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
We include <float.h> in every place that needs isnan(), because MSVC
used to require it. However, since MSVC 2013 that's no longer necessary
(cf. commit cec8394b5ccd), so we can retire the inclusion to a
version-specific stanza in win32_port.h, where it doesn't need to
pollute random .c files. The header is of course still needed in a few
places for other reasons.
I (Álvaro) removed float.h from a few more files than in Emre's original
patch. This doesn't break the build in my system, but we'll see what
the buildfarm has to say about it all.
Author: Emre Hasegeli
Discussion: https://postgr.es/m/CAE2gYzyc0+5uG+Cd9-BSL7NKC8LSHLNg1Aq2=8ubjnUwut4_iw@mail.gmail.com
On Windows, it is sometimes important for corresponding malloc() and
free() calls to be made from the same DLL, since some build options can
result in multiple allocators being active at the same time. For that
reason we already provided PQfreemem(). This commit adds a similar
function for freeing string results allocated by the pgtypes library.
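A usage sketch follows; the function name PGTYPESchar_free() and the
header declaring it are assumptions here, but the point is simply to
release pgtypes-allocated strings through the library rather than with
plain free().

    #include <stdio.h>
    #include <pgtypes_date.h>       /* date, PGTYPESdate_to_asc() */
    #include <pgtypes.h>            /* assumed home of PGTYPESchar_free() */

    void
    print_date(date d)
    {
        char   *text = PGTYPESdate_to_asc(d);  /* allocated by the library */

        printf("%s\n", text);
        PGTYPESchar_free(text);                /* not plain free() */
    }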
Author: Takayuki Tsunakawa
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8AD5D6%40G01JPEXMBYT05
Use of strncpy with a length limit based on the source, rather than
the destination, is non-idiomatic and draws warnings from gcc 8.
Replace with memcpy, which does exactly the same thing in these cases,
but with less chance for confusion.
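Schematically, the change is of this shape (caller guarantees dest is
large enough):

    #include <string.h>

    static void
    copy_name(char *dest, const char *src)
    {
        size_t  len = strlen(src);

        /* old, non-idiomatic:  strncpy(dest, src, len); */
        memcpy(dest, src, len);     /* same effect, clearer intent */
        dest[len] = '\0';
    }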
Backpatch to all supported branches.
Discussion: https://postgr.es/m/21789.1529170195@sss.pgh.pa.us
This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
only currently check at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.
A small cosmetic piece of tidying of the pgperlcritic script is
included.
Mike Blackwell
Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
The "l" (ell) width spec means something in the corresponding scanf usage,
but not here. While modern POSIX says that applying "l" to "f" and other
floating format specs is a no-op, SUSv2 says it's undefined. Buildfarm
experience says that some old compilers emit warnings about it, and at
least one old stdio implementation (mingw's "ANSI" option) actually
produces wrong answers and/or crashes.
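A small illustration of the distinction:

    #include <stdio.h>

    void
    show(double x)
    {
        printf("%f\n", x);              /* correct: plain %f for a double */
        /* printf("%lf\n", x);             the "l" is useless here at best */

        double  y;

        if (scanf("%lf", &y) == 1)      /* in scanf, "l" is required */
            printf("read %f\n", y);
    }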
Discussion: https://postgr.es/m/21670.1526769114@sss.pgh.pa.us
Discussion: https://postgr.es/m/c085e1da-0d64-1c15-242d-c921f32e0d5c@dunslane.net