Transform the PQtrace output from its ancient (and mostly useless)
byte-level format to a logical-message-level format, making it much
more usable. This implementation allows the printing code to be
written (as it indeed was) by following the protocol documentation,
which gives more confidence that the three pieces (docs, trace code
and actual code) actually match.
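For illustration, a minimal sketch of turning the trace on for an
existing connection (the output file path is just an example):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: dump the logical-message trace of one query to a file. */
    static void
    trace_one_query(PGconn *conn)
    {
        FILE       *trace = fopen("/tmp/libpq.trace", "w");

        if (trace == NULL)
            return;
        PQtrace(conn, trace);
        PQclear(PQexec(conn, "SELECT 1"));
        PQuntrace(conn);
        fclose(trace);
    }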
Author: 岩田 彩 (Aya Iwata) <iwata.aya@fujitsu.com>
Reviewed-by: 綱川 貴之 (Takayuki Tsunakawa) <tsunakawa.takay@fujitsu.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: 黒田 隼人 (Hayato Kuroda) <kuroda.hayato@fujitsu.com>
Reviewed-by: "Nagaura, Ryohei" <nagaura.ryohei@jp.fujitsu.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Reviewed-by: Jim Doty <jdoty@pivotal.io>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/71E660EB361DF14299875B198D4CE5423DE3FBA4@g01jpexmbkw25
Pipeline mode in libpq lets an application avoid the Sync messages in
the FE/BE protocol that are implicit in the old libpq API after each
query. The application can then insert Sync at its leisure with a new
libpq function PQpipelineSync. This can lead to substantial reductions
in query latency.
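A rough sketch of how an application might use this (error handling
trimmed; a PGRES_PIPELINE_SYNC result marks the Sync point):

    #include <libpq-fe.h>

    /* Sketch: queue two queries with no intervening Sync, then Sync once. */
    static void
    run_pipeline(PGconn *conn)
    {
        PGresult   *res;

        if (PQenterPipelineMode(conn) != 1)
            return;

        PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
        PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
        PQpipelineSync(conn);   /* Sync inserted at the application's leisure */

        /* Drain results of both queries, then the Sync marker itself. */
        while ((res = PQgetResult(conn)) == NULL ||
               PQresultStatus(res) != PGRES_PIPELINE_SYNC)
        {
            if (res != NULL)
                PQclear(res);
        }
        PQclear(res);
        PQexitPipelineMode(conn);
    }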
Co-authored-by: Craig Ringer <craig.ringer@enterprisedb.com>
Co-authored-by: Matthieu Garrigues <matthieu.garrigues@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aya Iwata <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Vérité <daniel@manitou-mail.org>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Nikhil Sontakke <nikhils@2ndquadrant.com>
Reviewed-by: Vaishnavi Prabakaran <VaishnaviP@fast.au.fujitsu.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CAMsr+YFUjJytRyV4J-16bEoiZyH=4nj+sQ7JP9ajwz=B4dMMZw@mail.gmail.com
Discussion: https://postgr.es/m/CAJkzx4T5E-2cQe3dtv2R78dYFvz+in8PY7A8MArvLhs_pg75gg@mail.gmail.com
Commit 92785dac2 copied some logic related to advancement of inStart
from pqParseInput3 into getRowDescriptions and getAnotherTuple,
because it wanted to allow user-defined row processor callbacks to
potentially longjmp out of the library, and inStart would have to be
updated before that happened to avoid an infinite loop. We later
decided that that API was impossibly fragile and reverted it, but
we didn't undo all of the related code changes, and this bit of
messiness survived. Undo it now so that there's just one place in
pqParseInput3's processing where inStart is advanced; this will
simplify addition of better tracing support.
getParamDescriptions had grown similar processing somewhere along
the way (not in 92785dac2; I didn't track down just when), but it's
actually buggy because its handling of corrupt-message cases seems to
have been copied from the v2 logic where we lacked a known message
length. The cases where we "goto not_enough_data" should not simply
return EOF, because then we won't consume the message, potentially
creating an infinite loop. That situation now represents a
definitively corrupt message, and we should report it as such.
Although no field reports of getParamDescriptions getting stuck in
a loop have been seen, it seems appropriate to back-patch that fix.
I chose to back-patch all of this to keep the logic looking more alike
in supported branches.
Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
Based on an analysis of the OpenSSL code with Jacob, moving to EVP for
the cryptohash computations makes it necessary to set up the libcrypto
callbacks that were previously set only for SSL connections, but not
for connections without SSL. Not setting the callbacks makes the use of
threads potentially unsafe for connections calling cryptohashes during
authentication, like MD5 or SCRAM, if a failure happens during a
cryptohash computation. The logic setting the libssl and libcrypto
states is then split into two parts, both using the same locking, with
libcrypto being set up for SSL and non-SSL connections, while SSL
connections set any libssl state afterwards as needed.
Prior to this commit, only SSL connections would have set libcrypto
callbacks that are necessary to ensure proper thread locking when
using multiple concurrent threads in libpq (ENABLE_THREAD_SAFETY). Note
that this is only required for OpenSSL 1.0.2 and 1.0.1 (oldest version
supported on HEAD), as 1.1.0 has its own internal locking and it has
dropped support for CRYPTO_set_locking_callback().
Tests with up to 300 threads under OpenSSL 1.0.1 and 1.0.2, mixing SSL
and non-SSL connection threads, did not show any performance impact in
some micro-benchmarking. pgbench can be used here with -C and a
mostly-empty script (with one \set meta-command for example) to stress
authentication requests, and we have mixed that with some custom
programs for testing.
Reported-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com
This partially reverts 096bbf7 and 9d2d457, undoing the libpq changes,
as they could cause breakage in distributions that share a single libpq
version across multiple major versions of Postgres for extensions and
applications linking to it.
Note that the backend is unchanged here, and it still disables SSL
compression while simplifying the underlying catalogs that tracked if
compression was enabled or not for an SSL connection.
Per discussion with Tom Lane and Daniel Gustafsson.
Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
The authtype parameter was deprecated and made inactive in commit
d5bbe2aca55bc8, but the environment variable was left defined and thus
tested with a getenv call even though the value is of no use. Also,
if it existed it would be copied but never freed, as the cleanup code
had been removed.
tty was deprecated in commit cb7fb3ca958ec8bd5a14e7 but most of the
infrastructure around it remained in place.
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/DDDF36F3-582A-4C02-8598-9B464CC42B34@yesql.se
Per buildfarm member crake, any server including a postgres_fdw server
with this option set would fail to do a pg_upgrade properly, as the
option got hidden in f9264d1 by becoming a debug option, making the
restore of the FDW server fail.
This changes the option in libpq back to being visible, but still
inactive, to fix this upgrade issue.
Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
PostgreSQL disabled compression as of e3bdb2d and the documentation
has recommended against using it ever since. Additionally, SSL
compression has
been disabled in OpenSSL since version 1.1.0, and was disabled in many
distributions long before that. The most recent TLS version, TLSv1.3,
disallows compression at the protocol level.
This commit removes the feature itself, removing support for the libpq
parameter sslcompression (the parameter is still listed for
compatibility with existing connection strings, just ignored), and
removes the equivalent field in pg_stat_ssl and, de facto, in
PgBackendSSLStatus.
Note that, on top of removing the ability to activate compression by
configuration, compression is actively disabled in both frontend and
backend to avoid overrides from local configurations.
A TAP test is added for the deprecated SSL parameters to check
backwards compatibility.
Bump catalog version.
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion: https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
Protocol version 3 was introduced in PostgreSQL 7.4. There shouldn't be
many clients or servers left out there without version 3 support. But as
a courtesy, I kept just enough of the old protocol support that we can
still send the "unsupported protocol version" error in v2 format, so that
old clients can display the message properly. Likewise, libpq still
understands v2 ErrorResponse messages when establishing a connection.
The impetus to do this now is that I'm working on a patch to COPY
FROM, to always prefetch some data. We cannot do that safely with the
old protocol, because it requires parsing the input one byte at a time
to detect the end-of-copy marker.
Reviewed-by: Tom Lane, Alvaro Herrera, John Naylor
Discussion: https://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b%40iki.fi
In addition to the existing options of "any" and "read-write", we
now support "read-only", "primary", "standby", and "prefer-standby".
"read-write" retains its previous meaning of "transactions are
read-write by default", and "read-only" inverts that. The other
three modes test specifically for hot-standby status, which is not
quite the same thing. (Setting default_transaction_read_only on
a primary server renders it read-only to this logic, but not a
standby.)
Furthermore, if talking to a v14 or later server, no extra network
round trip is needed to detect the session's status; the GUC_REPORT
variables delivered by the server are enough. When talking to an
older server, a SHOW or SELECT query is issued to detect session
read-only-ness or server hot-standby state, as needed.
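For example, a client that prefers to land on a standby could use a
connection string along these lines (host names hypothetical):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: prefer a hot standby among three hosts, else take any. */
    static PGconn *
    connect_prefer_standby(void)
    {
        PGconn     *conn = PQconnectdb("host=node1,node2,node3 dbname=app "
                                       "target_session_attrs=prefer-standby");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        return conn;
    }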
Haribabu Kommi, Greg Nancarrow, Vignesh C, Tom Lane; reviewed at
various times by Laurenz Albe, Takayuki Tsunakawa, Peter Smith.
Discussion: https://postgr.es/m/CAF3+xM+8-ztOkaV9gHiJ3wfgENTq97QcjXQt+rbFQ6F7oNzt9A@mail.gmail.com
Add another method to specify CRLs, hashed directory method, for both
server and client side. This offers a means for server or libpq to
load only CRLs that are required to verify a certificate. The CRL
directory is specified by separate GUC variables or connection options
ssl_crl_dir and sslcrldir, alongside the existing ssl_crl_file and
sslcrl, so both methods can be used at the same time.
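On the client side this might look as follows (paths illustrative; the
server-side counterpart is the ssl_crl_dir GUC):

    #include <libpq-fe.h>

    /* Sketch: check the server certificate against a hashed CRL directory. */
    static PGconn *
    connect_with_crl_dir(void)
    {
        return PQconnectdb("host=db.example.com dbname=app "
                           "sslmode=verify-full "
                           "sslrootcert=/etc/app/root.crt "
                           "sslcrldir=/etc/app/crl");
    }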
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20200731.173911.904649928639357911.horikyota.ntt@gmail.com
The stanza in ECPGconnect() that intended to allow specification of a
Unix socket directory path in place of a port has never executed since
it was committed, nearly two decades ago; the preceding strrchr()
already found the last colon so there cannot be another one. The lack
of complaints about that is doubtless related to the fact that no
user-facing documentation suggested it was possible.
Rather than try to fix that up, let's just remove the unreachable
code, and instead document the way that does work to write a socket
directory path, namely specifying it as a "host" option.
In support of that, make another pass at clarifying the syntax
documentation for ECPG connection targets, particularly documenting
which things are parsed as identifiers and where to use double quotes.
Rearrange some things that seemed poorly ordered, and fix a couple of
minor doc errors.
Kyotaro Horiguchi, per gripe from Shenhao Wang
(docs changes mostly by me)
Discussion: https://postgr.es/m/ae52a416bbbf459c96bab30b3038e06c@G08CNEXMBPEKD06.g08.fujitsu.local
This commit makes some comments and code related to compilation with
OpenSSL, and SSL in general, more generic to ease the addition of more
SSL implementations in the future. In libpq, some OpenSSL-only code is
moved under USE_OPENSSL instead of USE_SSL.
While at it, make a comment in libpq-fe.h more consistent.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/5382CB4A-9CF3-4145-BA46-C802615935E0@yesql.se
This is a replacement for the existing --with-openssl, extending the
logic to make the addition of new SSL libraries easier. The syntax is
chosen to be similar to --with-uuid, where multiple values can be
specified, with "openssl" as the only supported value for now.
The original switch, --with-openssl, is kept for compatibility.
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/FAB21FC8-0F62-434F-AA78-6BD9336D630A@yesql.se
We had "short *mdy" in the extern declarations, but "short mdy[3]"
in the actual function definitions. Per C99 these are equivalent,
but recent versions of gcc have started to issue warnings about
the inconsistency. Clean it up before the warnings get any more
widespread.
Back-patch, in case anyone wants to build older PG versions with
bleeding-edge compilers.
Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
opt_distinct_clause is only used in PLpgSQL_Expr, which ecpg
ignores, so it needs to ignore opt_distinct_clause too.
My oversight in 7cd9765f9; reported by Bruce Momjian.
Discussion: https://postgr.es/m/E1l33wr-0005sJ-9n@gemulon.postgresql.org
libpq's error messages for connection failures pretty well stand on
their own, especially since commits 52a10224e/27a48e5a1. Prefixing
them with 'could not connect to database "foo"' or the like is just
redundant, and perhaps even misleading if the specific database name
isn't relevant to the failure. (When it is, we trust that the
backend's error message will include the DB name.) Indeed, psql
hasn't used any such prefix in a long time. So, make all our other
programs and documentation examples agree with psql's practice.
Discussion: https://postgr.es/m/1094524.1611266589@sss.pgh.pa.us
The callback for retrieving state change information during connection
setup was only installed when the connection was mostly set up, and
thus didn't provide much information and missed all the details related
to the handshake.
This also extends the callback with SSL_state_string_long() to print
more information about the state change within the SSL object being
handled.
While there, fix some comments which were incorrectly referring to the
callback and its previous location in fe-secure.c.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/232CF476-94E1-42F1-9408-719E2AEC5491@yesql.se
"connection to server so-and-so failed:" seems clearer than the
previous wording "could not connect to so-and-so:" (introduced by
52a10224e), because the latter suggests a network-level connection
failure. We're now prefixing this string to all types of connection
failures, for instance authentication failures; so we need wording
that doesn't imply a low-level error.
Per discussion with Robert Haas.
Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
If a server returns ERRCODE_CANNOT_CONNECT_NOW, try the next host,
if multiple host names have been provided. This allows dealing
gracefully with standby servers that might not be in hot standby mode
yet.
In the wake of the preceding commit, it might be plausible to retry
many more error cases than we do now, but I (tgl) am hesitant to
move too aggressively on that --- it's not clear it'd be desirable
for cases such as bad-password, for example. But this case seems
safe enough.
Hubert Zhang, reviewed by Takayuki Tsunakawa
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Prefix "could not connect to host-or-socket-path:" to all connection
failure cases that occur after the socket() call, and remove the
ad-hoc server identity data that was appended to a few of these
messages. This should produce much more intelligible error reports
in multiple-target-host situations, especially for error cases that
are off the beaten track to any degree (because none of those provided
any server identity info).
As an example of the change, formerly a connection attempt with a bad
port number such as "psql -p 12345 -h localhost,/tmp" might produce
psql: error: could not connect to server: Connection refused
        Is the server running on host "localhost" (::1) and accepting
        TCP/IP connections on port 12345?
could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 12345?
could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.12345"?
Now it looks like
psql: error: could not connect to host "localhost" (::1), port 12345: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
could not connect to host "localhost" (127.0.0.1), port 12345: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
could not connect to socket "/tmp/.s.PGSQL.12345": No such file or directory
        Is the server running locally and accepting connections on that socket?
This requires adjusting a couple of regression tests to allow for
variation in the contents of a connection failure message.
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Add an optional callback to regression_main() that, if provided,
is invoked on each test output file before we try to compare it
to the expected-result file.
The main and isolation test programs don't need this (yet).
In pg_regress_ecpg, add a filter that eliminates target-host
details from "could not connect" error reports. This filter
doesn't do anything as of this commit, but it will be needed
by the next one.
In the long run we might want to provide some more general,
perhaps pattern-based, filtering mechanism for test output.
For now, this will solve the immediate problem.
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Previously, we had an undisciplined mish-mash of printfPQExpBuffer and
appendPQExpBuffer calls to report errors within libpq. This commit
establishes a uniform rule that appendPQExpBuffer[Str] should be used.
conn->errorMessage is reset only at the start of an application request,
and then accumulates messages till we're done. We can remove no less
than three different ad-hoc mechanisms that were used to get the effect
of concatenation of error messages within a sequence of operations.
Although this makes things quite a bit cleaner conceptually, the main
reason to do it is to make the world safer for the multiple-target-host
feature that was added awhile back. Previously, there were many cases
in which an error occurring during an individual host connection attempt
would wipe out the record of what had happened during previous attempts.
(The reporting is still inadequate, in that it can be hard to tell which
host got the failure, but that seems like a matter for a separate commit.)
Currently, lo_import and lo_export contain exceptions to the "never
use printfPQExpBuffer" rule. If we changed them, we'd risk reporting
an incidental lo_close failure before the actual read or write
failure, which would be confusing, not least because lo_close happened
after the main failure. We could improve this by inventing an
internal version of lo_close that doesn't reset the errorMessage; but
we'd also need a version of PQfn() that does that, and it didn't quite
seem worth the trouble for now.
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Invent new RawParseModes that allow the core grammar to handle
pl/pgsql expressions and assignments directly, and thereby get rid
of a lot of hackery in pl/pgsql's parser. This moves a good deal
of knowledge about pl/pgsql into the core code: notably, we have to
invent a CoercionContext that matches pl/pgsql's (rather dubious)
historical behavior for assignment coercions. That's getting away
from the original idea of pl/pgsql as an arm's-length extension of
the core, but really we crossed that bridge a long time ago.
The main advantage of doing this is that we can now use the core
parser to generate FieldStore and/or SubscriptingRef nodes to handle
assignments to pl/pgsql variables that are records or arrays. That
fixes a number of cases that had never been implemented in pl/pgsql
assignment, such as nested records and array slicing, and it allows
pl/pgsql assignment to support the datatype-specific subscripting
behaviors introduced in commit c7aba7c14.
There are cosmetic benefits too: when a syntax error occurs in a
pl/pgsql expression, the error report no longer includes the confusing
"SELECT" keyword that used to get prefixed to the expression text.
Also, there seem to be some small speed gains.
Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us
This patch essentially allows gram.y to implement a family of related
syntax trees, rather than necessarily always parsing a list of SQL
statements. raw_parser() gains a new argument, enum RawParseMode,
to say what to do. As proof of concept, add a mode that just parses
a TypeName without any other decoration, and use that to greatly
simplify typeStringToTypeName().
In addition, invent a new SPI entry point SPI_prepare_extended() to
allow SPI users (particularly plpgsql) to get at this new functionality.
In hopes of making this the last variant of SPI_prepare(), set up its
additional arguments as a struct rather than direct arguments, and
promise that future additions to the struct can default to zero.
SPI_prepare_cursor() and SPI_prepare_params() can perhaps go away at
some point.
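A hedged sketch of a call through the new entry point (struct and field
names as used by plpgsql at the time of this commit; query text
illustrative):

    #include "postgres.h"
    #include "executor/spi.h"

    /* Sketch: prepare a plan, leaving unset options zeroed. */
    static SPIPlanPtr
    prepare_simple(const char *query)
    {
        SPIPrepareOptions options;

        memset(&options, 0, sizeof(options));   /* future fields default to 0 */
        options.parseMode = RAW_PARSE_DEFAULT;
        options.cursorOptions = 0;

        return SPI_prepare_extended(query, &options);
    }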
Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us
Since at least 2001 we've used putenv() and avoided setenv(), on the
grounds that the latter was unportable and not in POSIX. However,
POSIX added it that same year, and by now the situation has reversed:
setenv() is probably more portable than putenv(), since POSIX now
treats the latter as not being a core function. And setenv() has
cleaner semantics too. So, let's reverse that old policy.
This commit adds a simple src/port/ implementation of setenv() for
any stragglers (we have one in the buildfarm, but I'd not be surprised
if that code is never used in the field). More importantly, extend
win32env.c to also support setenv(). Then, replace usages of putenv()
with setenv(), and get rid of some ad-hoc implementations of setenv()
wannabees.
Also, adjust our src/port/ implementation of unsetenv() to follow the
POSIX spec that it returns an error indicator, rather than returning
void as per the ancient BSD convention. I don't feel a need to make
all the call sites check for errors, but the portability stub ought
to match real-world practice.
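For reference, such a src/port/ replacement can be a thin wrapper over
putenv(), roughly along these lines (a sketch, not the exact code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: setenv() built on putenv() for platforms that lack it. */
    int
    setenv(const char *name, const char *value, int overwrite)
    {
        char       *envstr;

        if (!overwrite && getenv(name) != NULL)
            return 0;

        /* Intentionally not freed: the string becomes part of the environment. */
        envstr = malloc(strlen(name) + strlen(value) + 2);
        if (envstr == NULL)
            return -1;
        sprintf(envstr, "%s=%s", name, value);
        return putenv(envstr);
    }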
Discussion: https://postgr.es/m/2065122.1609212051@sss.pgh.pa.us
The critical issue fixed here is that if a GSSAPI-encrypted connection
is successfully made, pqsecure_open_gss() cleared conn->allow_ssl_try,
as an admittedly-hacky way of preventing us from then trying to tunnel
SSL encryption over the already-encrypted connection. The problem
with that is that if we abandon the GSSAPI connection because of a
failure during authentication, we would not attempt SSL encryption
in the next try with the same server. This can lead to unexpected
connection failure, or silently getting a non-encrypted connection
where an encrypted one is expected.
Fortunately, we'd only manage to make a GSSAPI-encrypted connection
if both client and server hold valid tickets in the same Kerberos
infrastructure, which is a relatively uncommon environment.
Nonetheless this is a very nasty bug with potential security
consequences. To fix, don't reset the flag, instead adding a
check for conn->gssenc being already true when deciding whether
to try to initiate SSL.
While here, fix some lesser issues in libpq's GSSAPI code:
* Use the need_new_connection stanza when dropping an attempted
GSSAPI connection, instead of partially duplicating that code.
The consequences of this are pretty minor: AFAICS it could only
lead to auth_req_received or password_needed remaining set when
they shouldn't, which is not too harmful.
* Fix pg_GSS_error() to not repeat the "mprefix" it's given multiple
times, and to notice any failure return from gss_display_status().
* Avoid gratuitous dependency on NI_MAXHOST in
pg_GSS_load_servicename().
Per report from Mikael Gustavsson. Back-patch to v12 where
this code was introduced.
Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
ecpglib on certain platforms can't handle the pg_log_fatal calls from
libraries; this was reported by the buildfarm. Removing the call later
will require a refactoring and a return value change.
Backpatch-through: master
Two new routines to allocate a hash context and to free it are created,
as these become necessary for the goal behind this refactoring: switch
all the cryptohash implementations for OpenSSL to use EVP (for FIPS,
and also because upstream has recommended against the use of the
low-level cryptohash functions for 20 years). Note that OpenSSL hides
the internals of
cryptohash contexts since 1.1.0, so it is necessary to leave the
allocation to OpenSSL itself, explaining the need for those two new
routines. This part is going to require more work to properly track
hash contexts with resource owners, but that is not introduced here.
Still, this refactoring makes the move possible.
This reduces the number of routines for all SHA2 implementations from
twelve (SHA{224,256,384,512} with init, update and final calls) to five
(create, free, init, update and final calls) by incorporating the hash
type directly into the hash context data.
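A hedged sketch of what a caller looks like with the consolidated API
(signatures approximate; see src/include/common/cryptohash.h for the
real ones):

    #include "postgres.h"
    #include "common/cryptohash.h"
    #include "common/sha2.h"

    /* Sketch: one SHA-256 computation via create/init/update/final/free. */
    static void
    sha256_example(const uint8 *data, size_t len, uint8 *digest)
    {
        /* digest must have room for PG_SHA256_DIGEST_LENGTH bytes */
        pg_cryptohash_ctx *ctx = pg_cryptohash_create(PG_SHA256);

        if (ctx == NULL ||
            pg_cryptohash_init(ctx) < 0 ||
            pg_cryptohash_update(ctx, data, len) < 0 ||
            pg_cryptohash_final(ctx, digest) < 0)
            elog(ERROR, "could not compute SHA-256 hash");
        pg_cryptohash_free(ctx);
    }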
The new cryptohash routines are moved to a new file, called cryptohash.c
for the fallback implementations, with SHA2 specifics becoming a part
internal to src/common/. OpenSSL specifics are part of
cryptohash_openssl.c. This infrastructure is usable for more hash
types, like MD5 or HMAC.
Any code paths using the internal SHA2 routines are adapted to report
errors correctly, which accounts for most of the changes of this
commit. The
zones mostly impacted are checksum manifests, libpq and SCRAM.
Note that e21cbb4 was a first attempt to switch SHA2 to EVP, but it
lacked the refactoring needed for libpq, as done here.
This patch has been tested on Linux and Windows, with and without
OpenSSL, and down to 1.0.1, the oldest version supported on HEAD.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz
This is a variant of normal Unix-domain sockets that uses a separate
"abstract" namespace rather than the file system. At the user
interface, such sockets are represented by names starting with "@".
Supported on Linux and Windows right now.
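As an illustration, with the server listening on an abstract socket
(unix_socket_directories = '@/tmp', value illustrative), a libpq client
might connect like this:

    #include <libpq-fe.h>

    /* Sketch: the leading "@" marks an abstract-namespace socket, not a path. */
    static PGconn *
    connect_abstract(void)
    {
        return PQconnectdb("host=@/tmp dbname=app");
    }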
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/6dee8574-b0ad-fc49-9c8c-2edc796f0033@2ndquadrant.com
The buildfarm thinks this leads to memory stomps, though annoyingly
I can't duplicate that here. The existing code in strings.pgc is
doing something that doesn't seem to be sanctioned at all really
by the documentation, but I'm disinclined to try to make that nicer
right now. Let's just declare some more output variables in hopes
of working around it.
These were broken in multiple ways:
* The xbstart and xhstart lexer actions neglected to set
"state_before_str_start" before transitioning to the xb/xh states,
thus possibly resulting in "internal error: unreachable state" later.
* The test for valid string contents at the end of xb state was flat out
wrong, as it accounted incorrectly for the "b" prefix that the xbstart
action had injected. Meanwhile, the xh state had no such check at all.
* The generated literal value failed to include any quote marks.
* The grammar did the wrong thing anyway, typically ignoring the
literal value and emitting something else, since BCONST and XCONST
tokens were handled randomly differently from SCONST tokens.
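For reference, these are the kinds of literals involved (table name
hypothetical):

    /* Sketch: bit-string and hex-string literals in embedded SQL. */
    EXEC SQL INSERT INTO tbl VALUES (b'0110', x'1f');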
The first of these problems is evidently an oversight in commit
7f380c59f, but the others seem to be very ancient. The lack of
complaints shows that ECPG users aren't using these syntaxes much
(although I do vaguely remember one previous complaint).
As written, this patch is dependent on 7f380c59f, so it can't go
back further than v13. Given the shortage of complaints, I'm not
excited about adapting the patch to prior branches.
Report and patch by Shenhao Wang (test case adjusted by me)
Discussion: https://postgr.es/m/d6402f1bacb74ecba22ef715dbba17fd@G08CNEXMBPEKD06.g08.fujitsu.local
On the same reasoning as in commit 36b931214, forbid using custom
oid_symbol macros in pg_type as well as pg_proc, so that we always
rely on the predictable macro names generated by genbki.pl.
We do continue to grant grandfather status to the names CASHOID and
LSNOID, although those are now considered deprecated aliases for the
preferred names MONEYOID and PG_LSNOID. This is because there's
likely to be client-side code using the old names, and this bout of
neatnik-ism doesn't quite seem worth breaking client code.
There might be a case for grandfathering EVTTRIGGEROID, too, since
externally-maintained PLs may reference that symbol. But renaming
such references to EVENT_TRIGGEROID doesn't seem like a particularly
heavy lift --- we make far more significant backend API changes in
every major release. For now I didn't add that, but we could
reconsider if there's pushback.
The other names changed here seem pretty unlikely to have any outside
uses. Again, we could add alias macros if there are complaints, but
for now I didn't.
As before, no need for a catversion bump.
John Naylor
Discussion: https://postgr.es/m/CAFBsxsHpCbjfoddNGpnnnY5pHwckWfiYkMYSF74PmP1su0+ZOw@mail.gmail.com
We must not set the "done" flag until after we've executed the
initialization function. Otherwise, other threads can fall through
the initial unlocked test before initialization is really complete.
This has been seen to cause rare failures of ecpg's thread/descriptor
test, and it could presumably cause other sorts of misbehavior in
threaded ECPG-using applications, since ecpglib relies on
pthread_once() in several places.
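The required ordering, shown as a hand-rolled sketch of a
pthread_once()-style helper (lock primitives hypothetical):

    #include <stdbool.h>

    static volatile bool initialized = false;

    extern void acquire_init_lock(void);    /* hypothetical lock primitives */
    extern void release_init_lock(void);

    /* Sketch: publish "done" only after the initializer has run. */
    static void
    run_once(void (*initfn)(void))
    {
        if (!initialized)                   /* unlocked fast path */
        {
            acquire_init_lock();
            if (!initialized)
            {
                initfn();                   /* run the initializer first ... */
                initialized = true;         /* ... then mark it complete */
            }
            release_init_lock();
        }
    }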
Diagnosis and patch by me, based on investigation by Alexander Lakhin.
Back-patch to all supported branches (the bug dates to 2007).
Discussion: https://postgr.es/m/16685-d6cd241872c101d3@postgresql.org
ECPG's PREPARE ... FROM and EXECUTE IMMEDIATE can optionally take
the target query as a simple literal, rather than the more usual
string-variable reference. This was previously documented as
being a C string literal, but that's a lie in one critical respect:
you can't write a data double quote as \" in such literals. That's
because the lexer is in SQL mode at this point, so it'll parse
double-quoted strings as SQL identifiers, within which backslash
is not special, so \" ends the literal.
I looked into making this work as documented, but getting the lexer
to switch behaviors at just the right point is somewhere between
very difficult and impossible. It's not really worth the trouble,
because these cases are next to useless: if you have a fixed SQL
statement to execute or prepare, you might as well write it as
a direct EXEC SQL, saving the messiness of converting it into
a string literal and gaining the opportunity for compile-time
SQL syntax checking.
Instead, let's just document (and test) the workaround of writing
a double quote as an octal escape (\042) in such cases.
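For example (statement name and alias hypothetical), the documented
workaround looks like this in embedded SQL:

    /* Sketch: the data double quotes are written as \042 octal escapes. */
    EXEC SQL PREPARE stmt FROM "SELECT 42 AS \042The Answer\042";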
There's no code behavioral change here, so in principle this could
be back-patched, but it's such a niche case I doubt it's worth
the trouble.
Per report from 1250kv.
Discussion: https://postgr.es/m/673825.1603223178@sss.pgh.pa.us
If you write the literal 'abc''def' in an EXEC SQL command, that will
come out the other end as 'abc'def', triggering a syntax error in the
backend. Likewise, "abc""def" is reduced to "abc"def" which is wrong
syntax for a quoted identifier.
The cause is that the lexer thinks it should emit just one quote
mark, whereas what it really should do is keep the string as-is.
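For instance (table name hypothetical), with the fix the doubled quote
survives, so the server sees the value abc'def:

    EXEC SQL INSERT INTO tbl VALUES ('abc''def');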
Add some docs and test cases, too.
Although this seems clearly a bug, I fear users wouldn't appreciate
changing it in minor releases. Some may well be working around it
by applying an extra doubling of affected quotes, as for example
sql/dyntest.pgc has been doing.
Per investigation of a report from 1250kv, although this isn't
exactly what he/she was on about.
Discussion: https://postgr.es/m/673825.1603223178@sss.pgh.pa.us
After de8feb1f3a23465b5737e8a8c160e8ca62f61339, some warnings remained
that were only visible when using GCC on Windows. Fix those as well.
Note that the ecpg test source files don't use the full pg_config.h,
so we can't use pg_funcptr_t there but have to do it the long way.