Until recently 'native' was implicitly included via 'orcjit', but a change
included in LLVM 11 (not yet released) removed a number of such indirect
component references.
Reported-By: Fabien COELHO <coelho@cri.ensmp.fr>
Reported-By: Andres Freund <andres@anarazel.de>
Reported-By: Thomas Munro <thomas.munro@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20201201064949.mex6kvi2kygby3ni@alap3.anarazel.de
Backpatch: 11-, where jit support was added
Checking for DocBook being installed was valuable when we were on the
OpenSP docs toolchain, because that was rather hard to get installed
fully. Nowadays, as long as you have xmllint and xsltproc installed,
you're good, because those programs will fetch the DocBook files off
the net at need. Moreover, testing this at configure time means that
a network access may well occur whether or not you have any interest
in building the docs later. That can be slow (typically 2 or 3
seconds, though much higher delays have been reported), and it seems
not very nice to be doing an off-machine access without warning, too.
Hence, drop the PGAC_CHECK_DOCBOOK probe, and adjust related
documentation. Without that macro, there's not much left of
config/docbook.m4 at all, so I just removed it.
Back-patch to v11, where we started to use xmllint in the
PGAC_CHECK_DOCBOOK probe.
Discussion: https://postgr.es/m/E2EE6B76-2D96-408A-B961-CAE47D1A86F0@yesql.se
Discussion: https://postgr.es/m/A55A7FC9-FA60-47FE-98B5-139CDC57CE6E@gmail.com
Theoretically one could go into src/test/thread and build/run this
program there. In practice, that hasn't worked since 96bf88d52,
and probably much longer on some platforms (likely including just
the sort of hoary leftovers where this test might be of interest).
While it wouldn't be too hard to repair the breakage, the fact that
nobody has noticed for two years shows that there is zero usefulness
in maintaining this build pathway. Let's get rid of it and decree
that thread_test.c is *only* meant to be built/used in configure.
Given that decision, it makes sense to put thread_test.c under config/
and get rid of src/test/thread altogether, so that's what I did.
In passing, update src/test/README, which had been ignored by some
not-so-recent additions of subdirectories.
Discussion: https://postgr.es/m/227659.1603041612@sss.pgh.pa.us
Make header/trailer comments agree with the actual names of some macros.
These seem like legit names in earlier iterations of respective patches
(commit b779168ff "Detect PG_PRINTF_ATTRIBUTE automatically." and
commit 6869b4f25 "Add C++ support to configure.") but the macro had
been renamed out of sync with the header / trailer comment in the final
committed patch.
Even more nitpickily, make the dashed underlines agree with the lengths
of the macro names everyplace. There doesn't seem to have been any
meeting of the minds previously on whether those should match or not,
but at least some people have been trying to make 'em match.
Jesse Zhang, Tom Lane
Discussion: https://postgr.es/m/CAGf+fX7DDyq6WfCy6X_KtD28MkbNBE6NkRi26fSf25dfUwX0zw@mail.gmail.com
As of Windows 10 version 1803, Unix-domain sockets are supported on
Windows. But it's not automatically detected by configure because it
looks for struct sockaddr_un and Windows doesn't define that. So we
just make our own definition on Windows and override the configure
result.
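As a rough illustration, the override amounts to something like the
following (a sketch only; the 108-byte path limit and the
HAVE_STRUCT_SOCKADDR_UN symbol are assumptions rather than the exact
names and values used):

    #ifdef WIN32
    /* Windows' AF_UNIX sockets use the same layout as the BSD definition */
    struct sockaddr_un
    {
        unsigned short sun_family;  /* AF_UNIX */
        char        sun_path[108];  /* assumed conventional path limit */
    };
    /* pretend configure had found the struct */
    #define HAVE_STRUCT_SOCKADDR_UN 1
    #endif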
Set DEFAULT_PGSOCKET_DIR to empty on Windows so by default no
Unix-domain socket is used, because there is no good standard
location.
In pg_upgrade, we have to do some extra tweaking to preserve the
existing behavior of not using Unix-domain sockets on Windows. Adding
support would be desirable, but it needs further work, in particular a
way to select whether to use Unix-domain sockets from the command-line
or with a run-time test.
The pg_upgrade test script needs a fix. The previous code passed
"localhost" to postgres -k, which only happened to work because
Windows used to ignore the -k argument value altogether. We instead
need to pass an empty string to get the desired effect.
The test suites will continue to not use Unix-domain sockets on
Windows. This requires a small tweak in pg_regress.c. The TAP tests
don't need to be changed because they decide based on the operating
system rather than on HAVE_UNIX_SOCKETS.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/54bde68c-d134-4eb8-5bd3-8af33b72a010@2ndquadrant.com
These compiler features are required by C99, so remove the configure
probes for them.
This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code. I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.
Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
I had supposed that all versions of Readline that have filename
quoting hooks also have the rl_completion_suppress_quote variable.
But it seems OpenBSD managed to find a version someplace that does
not, so we'll have to expend a separate configure probe for that.
(Light testing suggests that this version also lacks the bugs that
make it necessary to frob that variable. Hooray!)
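Guarded uses of the variable then look roughly as follows in psql
(sketch; HAVE_RL_COMPLETION_SUPPRESS_QUOTE stands in for whatever
symbol the new probe defines):

    #include <stdio.h>
    #include <readline/readline.h>

    static void
    suppress_quote_appending(void)
    {
    #ifdef HAVE_RL_COMPLETION_SUPPRESS_QUOTE
        /* stop Readline from appending its own closing quote */
        rl_completion_suppress_quote = 1;
    #endif
    }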
Per buildfarm.
The Readline library contains a fair amount of knowledge about how to
tab-complete filenames, but it turns out that that doesn't work too well
unless we follow its expectation that we use its filename quoting hooks
to quote and de-quote filenames. We were trying to do such quote handling
within complete_from_files(), and that's still what we have to do if we're
using libedit, which lacks those hooks. But for Readline, it works a lot
better if we tell Readline that single-quote is a quoting character and
then provide hooks that know the details of the quoting rules for SQL
and psql meta-commands.
Hence, resurrect the quoting hook functions that existed in the original
version of tab-complete.c (and were disabled by commit f6689a328 because
they "didn't work so well yet"), and whack on them until they do seem to
work well.
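Wiring the hooks up looks roughly like this (a sketch only; the local
function names are placeholders, and the real implementations apply
SQL and backslash-command quoting rules rather than just copying the
string):

    #include <stdio.h>
    #include <string.h>
    #include <readline/readline.h>

    static char *
    quote_file_name(char *fname, int match_type, char *quote_pointer)
    {
        (void) match_type;
        (void) quote_pointer;
        /* placeholder: real code single-quotes and doubles embedded quotes */
        return strdup(fname);
    }

    static char *
    dequote_file_name(char *fname, int quote_char)
    {
        (void) quote_char;
        /* placeholder: real code strips quotes and undoubles embedded ones */
        return strdup(fname);
    }

    static void
    setup_quoting_hooks(void)
    {
        /* tell Readline that single-quote introduces a quoted string */
        rl_completer_quote_characters = "'";
        /* and have it call back into us to quote/dequote filenames */
        rl_filename_quoting_function = quote_file_name;
        rl_filename_dequoting_function = dequote_file_name;
    }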
Notably, this fixes bug #16059 from Steven Winfield, who pointed out
that the previous coding would strip quote marks from filenames in SQL
COPY commands, even though they're syntactically necessary there.
Now, we not only don't do that, but we'll add a quote mark when you
tab-complete, even if you didn't type one.
Getting this to work across a range of libedit versions (and, to a
lesser extent, libreadline versions) was depressingly difficult.
It will be interesting to see whether the new regression test cases
pass everywhere in the buildfarm.
Some future patch might try to handle quoted SQL identifiers with
similar explicit quoting/dequoting logic, but that's for another day.
Patch by me, reviewed by Peter Eisentraut.
Discussion: https://postgr.es/m/16059-8836946734c02b84@postgresql.org
Supporting very old Python versions is a maintenance burden,
especially with the several variant test files to maintain for Python
<2.6.
Since we have dropped support for older OpenSSL versions in
7b283d0e1d, RHEL 5 is now effectively
desupported, and that was also the only mainstream operating system
still using Python versions before 2.6, so it's a good time to drop
those as well.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
Currently only frogmouth in the buildfarm uses the 32-bit params, and
it's not able to build past release 10, so put those last, saving
substantial configure time on more modern systems. Even if we get a
modern 32-bit Windows system at some stage, we should probably prefer
the 64-bit interface here these days.
Test for the compiler builtins __builtin_clz, __builtin_ctz, and
__builtin_popcount, and make use of these in preference to
handwritten C code if they're available. Create src/port
infrastructure for "leftmost one", "rightmost one", and "popcount"
so as to centralize these decisions.
On x86_64, __builtin_popcount generally won't make use of the POPCNT
opcode because that's not universally supported yet. Provide code
that checks CPUID and then calls POPCNT via asm() if available.
This requires indirecting through a function pointer, which is
an annoying amount of overhead for a one-instruction operation,
but it's probably not worth working harder than this for our
current use-cases.
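The dispatch looks roughly like this (a simplified sketch assuming
x86_64, GCC-style inline asm, and <cpuid.h>; the real pg_bitutils code
differs in detail):

    #include <cpuid.h>
    #include <stdint.h>

    static int  pg_popcount32_slow(uint32_t word);
    static int  pg_popcount32_fast(uint32_t word);
    static int  pg_popcount32_choose(uint32_t word);

    /* callers go through this pointer; the first call picks the implementation */
    int         (*pg_popcount32) (uint32_t word) = pg_popcount32_choose;

    static int
    pg_popcount32_choose(uint32_t word)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: ECX bit 23 reports POPCNT support */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1 << 23)))
            pg_popcount32 = pg_popcount32_fast;
        else
            pg_popcount32 = pg_popcount32_slow;
        return pg_popcount32(word);
    }

    static int
    pg_popcount32_fast(uint32_t word)
    {
        uint32_t    res;

        __asm__ __volatile__("popcntl %1, %0" : "=r" (res) : "rm" (word) : "cc");
        return (int) res;
    }

    static int
    pg_popcount32_slow(uint32_t word)
    {
        int         result = 0;

        while (word)
        {
            result += word & 1;
            word >>= 1;
        }
        return result;
    }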
I'm not sure we've found all the existing places that could profit
from this new infrastructure; but we at least touched all the
ones that used copied-and-pasted versions of the bitmapset.c code,
and got rid of multiple copies of the associated constant arrays.
While at it, replace c-compiler.m4's one-per-builtin-function
macros with a single one that can handle all the cases we need
to worry about so far. Also, because I'm paranoid, make those
checks into AC_LINK checks rather than just AC_COMPILE; the
former coding failed to verify that libgcc has support for the
builtin, in cases where it's not inline code.
David Rowley, Thomas Munro, Alvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
This reverts commits fc6c72747a, 109de05cbb, d0b4663c23 and
711bab1e4d.
Somebody will have to try harder before submitting this patch again.
I've spent entirely too much time on it already, and the #ifdef maze yet
to be written in order for it to build at all got on my nerves. The
amount of work needed to get a platform-specific performance improvement
that's barely above the noise level is not worth it.
Split out these new functions in three parts: one in a new file that
uses the compiler builtin and gets compiled with the -mpopcnt compiler
option if it exists; another one that uses the compiler builtin but not
the compiler option; and finally the fallback with open-coded
algorithms.
Split out the configure logic: in the original commit, it was selecting
to use the -mpopcnt compiler switch together with deciding whether to
use the compiler builtin, but those two things are really separate.
Split them out. Also, expose whether the builtin exists to
Makefile.global, so that src/port's Makefile can decide whether to
compile the hw-optimized file.
Remove CPUID test for CTZ/CLZ. Make pg_{right,left}most_ones use either
the compiler intrinsic or open-coded algo; trying to use the
HW-optimized version is a waste of time. Make them static inline
functions.
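For example, the rightmost-one helper can now be written along these
lines (sketch; HAVE__BUILTIN_CTZ stands for whatever symbol configure
exposes):

    #include <stdint.h>

    /* position of the least significant set bit; word must not be zero */
    static inline int
    pg_rightmost_one_pos32(uint32_t word)
    {
    #ifdef HAVE__BUILTIN_CTZ
        return __builtin_ctz(word);
    #else
        int         result = 0;

        /* open-coded fallback: shift until the lowest set bit reaches position 0 */
        while ((word & 1) == 0)
        {
            word >>= 1;
            result++;
        }
        return result;
    #endif
    }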
Discussion: https://postgr.es/m/20190213221719.GA15976@alvherre.pgsql
We were using uint64 function arguments as "long int" arguments to
compiler builtins, which fails on machines where long ints are 32 bits:
the upper half of the uint64 was being ignored. Fix by using the "ll"
builtin variants instead, which on those machines take 64 bit arguments.
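Concretely, the 64-bit helpers now look roughly like this (sketch):

    #include <stdint.h>

    static inline int
    pg_popcount64(uint64_t word)
    {
        /* the "ll" variant takes unsigned long long, 64 bits even on ILP32 */
        return __builtin_popcountll(word);
    }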
Also, remove configure tests for __builtin_popcountl() (as well as
"long" variants for ctz and clz): the theory here is that any compiler
version will provide all widths or none, so one test suffices. Were
this theory to be wrong, we'd have to add tests for
__builtin_popcountll() and friends, which would be tedious.
Per failures in buildfarm member lapwing and ensuing discussion.
These opcodes have been around in the AMD world since 2007, and since
2008 in the case of Intel. They're supported in GCC and Clang via some
__builtin macros. The opcodes may be unavailable at runtime, in which
case we
fall back on a C-based implementation of the code. In order to get the
POPCNT instruction we must pass the -mpopcnt option to the compiler. We
do this only for the pg_bitutils.c file.
David Rowley (with fragments taken from a patch by Thomas Munro)
Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
The comment marker "#" is copied to the output, so it's only
appropriate for comments that make sense in the shell output. For
comments about the Autoconf language, "dnl" should be used.
AC_ARG_VAR is necessary if an environment variable influences a
configure result that is then used by other tests that are cached.
With AC_ARG_VAR, a change in the variable is detected on subsequent
configure runs and the user is then advised to remove the cache.
This adds AC_ARG_VAR calls for: MSGFMT, PERL, PYTHON, TCLSH, XML2_CONFIG
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/30672.1546816567@sss.pgh.pa.us
This change doesn't fix any bugs that we've heard about, but it seems
like a good idea on general principles to track upstream occasionally.
Discussion: https://postgr.es/m/3320.1542647565@sss.pgh.pa.us
Calling AC_CHECK_DECLS before we've finished setting up the compiler's
CFLAGS seems like a pretty risky proposition, especially now that the
first use of that macro will result in a test to see whether the compiler
gives warning or error for undeclared built-in functions. That answer
could very easily be changed at a point later in the script than where
PGAC_LLVM_SUPPORT is called; furthermore, it's quite possible that flags
such as -D_GNU_SOURCE could change the visibility of declarations.
Hence, be a little less cavalier
about where to do LLVM-related tests. This results in v11 and HEAD doing
the warning-or-error check at the same place in the script as older
branches are doing it, which seems like a good thing.
Per further thought about commits 0b59b0e8b and 16fbac39f.
The test case that Autoconf uses to discover whether a function has
been declared doesn't work reliably with clang, because clang reports
a warning not an error if the name is a known built-in function.
On some platforms, this results in a lot of compile-time warnings about
strlcpy and related functions not having been declared.
There is a fix for this (by Noah Misch) in the upstream Autoconf sources,
but since they've not made a release in years and show no indication of
doing so anytime soon, let's just absorb their fix directly. We can
revert this when and if we update to a newer Autoconf release.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/26819.1542515567@sss.pgh.pa.us
NetBSD-current generates a large number of warnings about "%m" not
being appropriate to use with *printf functions. While that's true
for their native printf, it's surely not true for snprintf.c, so I
think they have misunderstood gcc's definition of the "gnu_printf"
archetype. Nonetheless, choosing "__syslog__" instead silences the
warnings; so teach configure about that.
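The effect is only on the archetype name plugged into our
printf-format attribute macro, roughly as below (a sketch; the macro
names shown are illustrative):

    /* configure picks the archetype: gnu_printf normally, __syslog__ on NetBSD */
    #define PG_PRINTF_ATTRIBUTE gnu_printf

    #define pg_attribute_printf(f, a) \
        __attribute__((format(PG_PRINTF_ATTRIBUTE, f, a)))

    extern int  emit_message(const char *fmt, ...) pg_attribute_printf(1, 2);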
Since this is only a cosmetic warning issue (and anyway it depends
on previous hacking to be self-consistent), no back-patch.
Discussion: https://postgr.es/m/16785.1539046036@sss.pgh.pa.us
The method we've traditionally used, of redeclaring strerror_r() to
see if the compiler complains of inconsistent declarations, turns out
not to work reliably because some compilers only report a warning,
not an error. Amazingly, this has gone undetected for years, even
though it certainly breaks our detection of whether strerror_r
succeeded.
Let's instead test whether the compiler will take the result of
strerror_r() as a switch() argument. It's possible this won't
work universally either, but it's the best idea I could come up with
on the spur of the moment.
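The probe then boils down to something like this (a sketch of the
idea; a switch argument must have integer type, so this compiles only
against the POSIX, int-returning strerror_r and fails against the GNU,
char *-returning one):

    #include <string.h>

    int
    main(void)
    {
        char        buf[100];

        switch (strerror_r(1, buf, sizeof(buf)))
        {
            case 0:
                break;
            default:
                break;
        }
        return 0;
    }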
We should probably back-patch this once the dust settles, but
first let's see what the buildfarm thinks of it.
Discussion: https://postgr.es/m/10877.1537993279@sss.pgh.pa.us
We've spent an awful lot of effort over the years in coping with
platform-specific vagaries of the *printf family of functions. Let's just
forget all that mess and standardize on always using src/port/snprintf.c.
This gets rid of a lot of configure logic, and it will allow a saner
approach to dealing with %m (though actually changing that is left for
a follow-on patch).
Preliminary performance testing suggests that as it stands, snprintf.c is
faster than the native printf functions for some tasks on some platforms,
and slower for other cases. A pending patch will improve that, though
cases with floating-point conversions will doubtless remain slower unless
we want to put a *lot* of effort into that. Still, we've not observed
that *printf is really a performance bottleneck for most workloads, so
I doubt this matters much.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
Apple's latest rearrangements of the system-supplied headers have broken
building of PL/Perl and PL/Tcl. The only practical way to fix PL/Tcl is to
start using the "-isysroot" compiler flag to point to SDK-supplied headers,
as Apple expects. We must also start distinguishing where to find Perl's
headers from where to find its shared library; but that seems like good
cleanup anyway.
Extensions that formerly did something like -I$(perl_archlibexp)/CORE
should now do -I$(perl_includedir)/CORE instead. perl_archlibexp
is still the place to look for libperl.so, though.
If for some reason you don't like the default -isysroot setting, you can
override that by setting PG_SYSROOT in configure's arguments. I don't
currently think people would need to do so, unless maybe for cross-version
build purposes.
In addition, teach configure where to find tclConfig.sh. Our traditional
method of searching $auto_path hasn't worked for the last couple of macOS
releases, and it now seems clear that Apple's not going to change that.
The workaround of manually specifying --with-tclconfig was annoying
already, but Mojave's made it a lot more so because the sysroot path now
has to be included as well. Let's just wire the knowledge into configure
instead. To avoid breaking builds against non-default Tcl installations
(e.g. MacPorts) wherein the $auto_path method probably still works,
arrange to try the additional case only after all else has failed.
Back-patch to all supported versions, since at least the buildfarm
cares about that. The changes are set up to not do anything on macOS
releases that are old enough to not have functional sysroot trees.
Before this commit LLVM 7 was supported, but only if one explicitly
provided LLVM_CONFIG= and CLANG= paths. As LLVM 7 is the first
version that includes our upstreamed debugging and profiling features,
and as debian is planning to default to 7 due to wider architecture
support, it seems good to support auto-detecting that version.
Author: Christoph Berg
Discussion: https://postgr.es/m/20180912124517.GD24584@msg.df7cb.de
Backpatch: 11, where LLVM was introduced
Since commit e1d19c902, buildfarm member gharial has been failing with
symptoms indicating that snprintf sometimes returns -1 for buffer
overrun, even though it passes the added configure check. Some
google research suggests that this happens only in limited cases,
such as when the overrun happens partway through a %d item. Adjust
the configure check to exercise it that way. Since I'm now feeling
more paranoid than I was before, also make the test explicitly verify
that the buffer doesn't get physically overrun.
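The adjusted check is along these lines (sketch; the real configure
test program differs in detail):

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char        buf[16];
        int         len;

        /* sentinel bytes let us detect a physical overrun afterwards */
        memset(buf, '!', sizeof(buf));

        /* a limit of 8 cuts the output off in the middle of the second %d */
        len = snprintf(buf, 8, "%d %d", 12345, 678);

        if (len != 9)           /* C99: the length that would have been written */
            return 1;
        if (buf[8] != '!' || buf[9] != '!')     /* nothing past the limit touched */
            return 1;
        return 0;
    }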
Since our substitute snprintf now returns a C99-compliant result,
there's no need anymore to have complicated code to cope with pre-C99
behavior. We can just make configure substitute snprintf.c if it finds
that the system snprintf() is pre-C99. (Note: I do not believe that
there are any platforms where this test will trigger that weren't
already being rejected due to our other C99-ish feature requirements for
snprintf. But let's add the check for paranoia's sake.) Then, simplify
the call sites that had logic to cope with the pre-C99 definition.
I also dropped some stuff that was being paranoid about the possibility
of snprintf overrunning the given buffer. The only reports we've ever
heard of that being a problem were for Solaris 7, which is long dead,
and we've sure not heard any reports of these assertions triggering in
a long time. So let's drop that complexity too.
Likewise, drop some code that wasn't trusting snprintf to set errno
when it returns -1. That would be not-per-spec, and again there's
no real reason to believe it is a live issue, especially not for
snprintfs that pass all of configure's feature checks.
Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
This reverts commit 3a60c8ff89. Buildfarm
results show that that caused a whole bunch of new warnings on platforms
where gcc believes the local printf to be non-POSIX-compliant. This
problem outweighs the hypothetical-anyway possibility of getting warnings
for misuse of %m. We could use gnu_printf archetype when we've substituted
src/port/snprintf.c, but that brings us right back to the problem of not
getting warnings for %m.
A possible answer is to attack it in the other direction by insisting
that %m support be included in printf's feature set, but that will take
more investigation. In the meantime, revert the previous change, and
update the comment for PGAC_C_PRINTF_ARCHETYPE to more fully explain
what's going on.
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
The elog/ereport family of functions certainly support the %m format spec,
because they implement it "by hand". But elsewhere we have printf wrappers
that might or might not allow it depending on whether the platform's printf
does. (Most non-glibc versions don't, and notably, src/port/snprintf.c
doesn't.) Hence, rather than using the gnu_printf format archetype
interchangeably for all these functions, use it only for elog/ereport.
This will allow us to get compiler warnings for mistakes like the ones
fixed in commit a13b47a59, at least on platforms where printf doesn't
take %m and gcc is correctly configured to know it. (Unfortunately,
that won't happen on Linux, nor on macOS according to my testing.
It remains to be seen what the buildfarm's gcc-on-Windows animals will
think of this, but we may well have to rely on less-popular platforms
to warn us about unportable code of this kind.)
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
During the work of upstreaming my previous patches for gdb and perf
support the API changed. Adapt. Normally this wouldn't necessarily be
something to backpatch, but the previous API wasn't upstream, and at
least the gdb support is quite useful for debugging.
Author: Andres Freund
Backpatch: 11, where LLVM based JIT support was added.
We used to claim to support platforms using 'q' or 'I64' as the printf
length modifier for long long int, by dint of replacing snprintf with
our own code which uses the C99 standard 'll' modifier. But that is
only adequate if we use INT64_MODIFIER only in snprintf-based calls,
not directly with the platform's native printf or fprintf. Which
hasn't been the case for years. We had not noticed, partially because
of inadequate test coverage, and partially because the buildfarm is
almost completely bare of machines that won't take 'll'. The last
one seems to have been frogmouth, which was adjusted recently so that
it will take 'll'. We might as well just give up on the pretense
that anything else works, and save ourselves some configure cycles.
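With that, the relevant macros can simply be hard-wired, roughly as
follows (sketch):

    #include <stdio.h>

    /* 'll' is now the only length modifier we claim to support for 64-bit ints */
    #define INT64_MODIFIER "ll"
    #define INT64_FORMAT "%" INT64_MODIFIER "d"

    int
    main(void)
    {
        long long   big = 1234567890123LL;

        printf(INT64_FORMAT "\n", big);
        return 0;
    }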
Discussion: https://postgr.es/m/13103.1526749980@sss.pgh.pa.us
Discussion: https://postgr.es/m/24769.1526772680@sss.pgh.pa.us
ARMv8 introduced special CPU instructions for calculating CRC-32C. Use
them, when available, for speed.
Like with the similar Intel CRC instructions, several factors affect
whether the instructions can be used. The compiler intrinsics for them must
be supported by the compiler, and the instructions must be supported by the
target architecture. If the compilation target architecture does not
support the instructions, but adding "-march=armv8-a+crc" makes them
available, then we compile the code with a runtime check to determine if
the host we're running on supports them or not.
For the runtime check, use glibc getauxval() function. Unfortunately,
that's not very portable, but I couldn't find any more portable way to do
it. If getauxval() is not available, the CRC instructions will still be
used if the target architecture supports them without any additional
compiler flags, but the runtime check will not be available.
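The runtime check is then essentially this (sketch, assuming glibc's
<sys/auxv.h>; the HWCAP bit value is the one used by the AArch64 kernel
headers):

    #include <stdbool.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_CRC32
    #define HWCAP_CRC32 (1 << 7)    /* AArch64 hwcap bit for the CRC32 extension */
    #endif

    static bool
    pg_crc32c_armv8_available(void)
    {
        return (getauxval(AT_HWCAP) & HWCAP_CRC32) != 0;
    }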
Original patch by Yuqi Gu, heavily modified by me. Reviewed by Andres
Freund, Thomas Munro.
Discussion: https://www.postgresql.org/message-id/HE1PR0801MB1323D171938EABC04FFE7FA9E3110%40HE1PR0801MB1323.eurprd08.prod.outlook.com
We already do this for Perl and some other interesting tools, so it
seems sensible to do it for Python as well, especially since the
sub-release number is never determinable from other configure output
and even the major/minor numbers may not be clear without excavation
in config.log.
While at it, get rid of the code's assumption that both the major and
minor numbers contain exactly one digit. That will foreseeably be
broken by Python 3.10 in perhaps four or five years. That's far enough
out that we probably don't need to back-patch this.
Discussion: https://postgr.es/m/2186.1522681145@sss.pgh.pa.us
LLVM will be used for *optional* Just-in-time compilation
support. This commit just adds the configure infrastructure that
detects LLVM.
No documentation has been added for the --with-llvm flag, that'll be
added after the actual supporting code has been added.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
This is an optional dependency. It'll be used for the upcoming LLVM
based just in time compilation support, which needs to wrap a few LLVM
C++ APIs so they're accessible from C.
For now test for C++ compilers unconditionally, without failing if not
present, to ensure wide buildfarm coverage. If we're bothered by the
additional test times (which are quite short) or verbosity, we can
later make the tests conditional on --with-llvm.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
The new macro allows testing flags for different compilers and storing
them in different CFLAGS-like variables. The existing
PGAC_PROG_CC_CFLAGS_OPT and PGAC_PROG_CC_VAR_OPT are changed to be
just wrappers around the new macro.
This'll be used by the upcoming LLVM support, to separately detect
capabilities used by clang, when generating bitcode.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
On Sparc64, use of __attribute__((aligned(8))) with __int128 causes faulty
code generation in gcc versions at least through 5.5.0. We can work around
that by disabling use of __int128, so teach configure to test for the bug.
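A runtime probe for the bug can be shaped roughly like this (a sketch
under the assumption that doing 128-bit arithmetic through a member
forced to 8-byte alignment exposes the miscompilation; the actual
configure test differs in detail):

    #include <stdint.h>

    /* force the problematic alignment explicitly */
    typedef __int128 int128a __attribute__((aligned(8)));

    typedef struct
    {
        char        pad;        /* knock the next field off 16-byte alignment */
        int128a     value;
    } test_struct;

    /* use a global so the compiler can't constant-fold everything away */
    static test_struct g = {0, 48828125};

    int
    main(void)
    {
        int64_t     expected = (int64_t) 48828125 * 48828125;

        g.value *= 48828125;    /* 128-bit multiply through the aligned member */

        /* the faulty code generation produces a wrong product here */
        return ((int64_t) g.value == expected) ? 0 : 1;
    }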
This solution doesn't fix things for the case of cross-compiling with a
buggy compiler; to support that nicely, we'd need to add a manual disable
switch. Unless more such cases turn up, it doesn't seem worth the work.
Affected users could always edit pg_config.h manually.
In passing, fix some typos in the existing configure test for __int128.
They're harmless because we only compile that code not run it, but
they're still confusing for anyone looking at it closely.
This is needed in support of commit 751804998, so back-patch to 9.5
as that was.
Marina Polyakova, Victor Wagner, Tom Lane
Discussion: https://postgr.es/m/0d3a9fa264cebe1cb9966f37b7c06e86@postgrespro.ru
On some systems the results of 64-bit __builtin_mul_overflow()
operations can be computed at compile time, but not at runtime. The
known cases are ARM buildfarm animals using clang, where the runtime
operation is implemented in an unavailable function.
Try to avoid compile-time computation by using volatile arguments to
__builtin_mul_overflow(). That way we will hopefully get a link error
when the operation is unavailable, similar to what buildfarm animals
dangomushi
and gull are reporting.
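The configure test therefore becomes roughly (sketch):

    #include <stdint.h>

    int
    main(void)
    {
        /* volatile keeps the compiler from evaluating the overflow at compile time */
        volatile int64_t a = 0x7fffffffffffffffLL;
        volatile int64_t b = 3;
        int64_t     result;

        /* if the compiler's runtime helper is missing, this fails at link time */
        return __builtin_mul_overflow(a, b, &result) ? 0 : 1;
    }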
Author: Andres Freund
Discussion: https://postgr.es/m/20171213213754.pydkyjs6bt2hvsdb@alap3.anarazel.de
Commit 9fa6f00b1 assumed that __builtin_constant_p("string literal")
is TRUE, if the compiler has that function at all. Buildfarm results
show that Sun Studio 12, at least, breaks that assumption. Removing
that usage would leave us with no mechanical check for a very fragile
coding requirement, so instead teach configure to ignore
__builtin_constant_p() if it doesn't behave that way. We could
complicate matters by distinguishing three cases (no such function,
vs does, vs doesn't work for string literals); but for now, that seems
unnecessary because our other existing uses of this function are just
fairly minor optimizations of non-returning elog/ereport. We can live
without that on the small population of compilers that act this way.
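The configure test can enforce that requirement at compile time with
something like this (sketch):

    /*
     * The array size is legal only if __builtin_constant_p considers a string
     * literal to be a compile-time constant; otherwise compilation fails and
     * configure concludes the builtin is not usable for our purposes.
     */
    static int  ok_to_use[__builtin_constant_p("string literal") ? 1 : -1];

    int
    main(void)
    {
        return ok_to_use[0];
    }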
Discussion: https://postgr.es/m/22997.1513264066@sss.pgh.pa.us
It's not easy to get signed integer overflow checks correct and
fast. Therefore abstract the necessary infrastructure into a common
header providing addition, subtraction and multiplication for 16, 32,
64 bit signed integers.
The new macros aren't yet used, but a followup commit will convert
several open coded overflow checks.
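As an example of the shape of these helpers, 32-bit signed addition can
be checked like this (a sketch; HAVE__BUILTIN_OP_OVERFLOW is an assumed
configure symbol):

    #include <stdbool.h>
    #include <stdint.h>

    /* returns true on overflow, otherwise stores a + b into *result */
    static inline bool
    pg_add_s32_overflow(int32_t a, int32_t b, int32_t *result)
    {
    #ifdef HAVE__BUILTIN_OP_OVERFLOW
        return __builtin_add_overflow(a, b, result);
    #else
        int64_t     res = (int64_t) a + (int64_t) b;

        if (res > INT32_MAX || res < INT32_MIN)
            return true;
        *result = (int32_t) res;
        return false;
    #endif
    }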
Author: Andres Freund, with some code stolen from Greg Stark
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
Commits 5a5c2feca3 and
b5178c5d08 introduced support for modern
MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
distributions like Strawberry Perl and modern ActivePerl. Perl has no
robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so
test this. Back-patch to 9.3 (all supported versions).
The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
$Config{gccversion} is nonempty. That banks on every gcc-built Perl
using the same ABI. gcc could change its default ABI the way MSVC once
did, and one could build Perl with gcc and the non-default ABI.
The GNU make build system could benefit from a similar test, without
which it does not support MSVC-built Perl. For now, just add a comment.
Most users taking the special step of building Perl with MSVC probably
build PostgreSQL with MSVC.
Discussion: https://postgr.es/m/20171130041441.GA3161526@rfd.leadboat.com
This is necessary for ActivePerl 5.18 onwards and for Strawberry Perl.
It is not sufficient for 32-bit builds with newer Visual Studio; these
fail with error LNK2026. Back-patch to 9.3 (all supported versions).
Reported by Victor Wagner.
Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home
Since some preparation work had already been done, the only source
changes left were changing empty-element tags like <xref linkend="foo">
to <xref linkend="foo"/>, and changing the DOCTYPE.
The source files are still named *.sgml, but they are actually XML files
now. Renaming could be considered later.
In the build system, the intermediate step to convert from SGML to XML
is removed. Everything is built straight from the source files again.
The OpenSP (or the old SP) package is no longer needed.
The documentation toolchain instructions are updated and are much
simpler now.
Peter Eisentraut, Alexander Lakhin, Jürgen Purtz
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.
Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break. Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.
The only way out of the box is to make type int128 conform to the system's
alignment assumptions. Fortunately, gcc supports that via its
__attribute__((aligned())) pragma; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.
Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__((aligned())), if the native alignment of their 128-bit-int
type is no more than that of int64.
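The resulting typedefs come out roughly as below (a sketch; the
configure-supplied symbols PG_INT128_TYPE, ALIGNOF_PG_INT128_TYPE and
MAXIMUM_ALIGNOF are assumed to come from pg_config.h):

    #if defined(PG_INT128_TYPE)
    #if defined(__GNUC__)
    /* force the 128-bit types down to the alignment MAXALIGN promises */
    typedef PG_INT128_TYPE int128 __attribute__((aligned(MAXIMUM_ALIGNOF)));
    typedef unsigned PG_INT128_TYPE uint128 __attribute__((aligned(MAXIMUM_ALIGNOF)));
    #elif defined(ALIGNOF_PG_INT128_TYPE) && ALIGNOF_PG_INT128_TYPE <= MAXIMUM_ALIGNOF
    /* no aligned attribute, but the native alignment is already acceptable */
    typedef PG_INT128_TYPE int128;
    typedef unsigned PG_INT128_TYPE uint128;
    #endif
    #endif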
Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.
This will need to be back-patched, along with the preparatory commit
91aec93e6. But let's see what the buildfarm makes of it first.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
Upcoming patches are going to address performance issues that involve
slow system-provided ntohs/htons etc. To address that, expand
pg_bswap.h to provide pg_ntoh{16,32,64}, pg_hton{16,32,64} and
optimize their respective implementations by using compiler intrinsics
for gcc-compatible compilers and MSVC. Fall back to manual
implementations using shifts etc. otherwise.
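For instance, the 32-bit variants end up roughly as follows (sketch;
WORDS_BIGENDIAN is the usual configure symbol):

    #include <stdint.h>

    static inline uint32_t
    pg_bswap32(uint32_t x)
    {
    #if defined(__GNUC__) || defined(__clang__)
        return __builtin_bswap32(x);
    #else
        /* manual fallback using shifts and masks */
        return ((x << 24) & 0xff000000U) |
               ((x << 8) & 0x00ff0000U) |
               ((x >> 8) & 0x0000ff00U) |
               ((x >> 24) & 0x000000ffU);
    #endif
    }

    /* network-order helpers: identity on big-endian, swap on little-endian */
    #ifdef WORDS_BIGENDIAN
    #define pg_hton32(x)    (x)
    #define pg_ntoh32(x)    (x)
    #else
    #define pg_hton32(x)    pg_bswap32(x)
    #define pg_ntoh32(x)    pg_bswap32(x)
    #endif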
Additionally remove multiple evaluation hazards from the existing
BSWAP32/64 macros, by replacing them with inline functions when
necessary. In the course of that the naming scheme is changed to
pg_bswap16/32/64.
Author: Andres Freund
Discussion: https://postgr.es/m/20170927172019.gheidqy6xvlxb325@alap3.anarazel.de
Commit 3c163a7fc's original choice to ignore all #define symbols whose
names begin with underscore turns out to be too simplistic. On Windows,
some Perl installations are built with -D_USE_32BIT_TIME_T, and we must
absorb that or we get the wrong result for sizeof(PerlInterpreter).
This effectively re-reverts commit ef58b87df, which injected that symbol
in a hacky way, making it apply to all of Postgres not just PL/Perl.
More significantly, it did so on *all* 32-bit Windows builds, even when
the Perl build to be used did not select this option; so that it fails
to work properly with some newer Perl builds.
By making this change, we would be introducing an ABI break in 32-bit
Windows builds; but fortunately we have not used type time_t in any
exported Postgres APIs in a long time. So it should be OK, both for
PL/Perl itself and for third-party extensions, if an extension library
is built with a different _USE_32BIT_TIME_T setting than the core code.
Patch by me, based on research by Ashutosh Sharma and Robert Haas.
Back-patch to all supported branches, as commit 3c163a7fc was.
Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
Peter Eisentraut noted that commit 40b9f1921 had broken a configure
behavior that some people might rely on: AC_CHECK_PROGS(FOO,...) will
allow the search to be overridden by specifying a value for FOO on
configure's command line or in its environment, but AC_PATH_PROGS(FOO,...)
accepts such an override only if it's an absolute path. We had worked
around that behavior for some, but not all, of the pre-existing uses
of AC_PATH_PROGS by just skipping the macro altogether when FOO is
already set. Let's standardize on that workaround for all uses of
AC_PATH_PROGS, new and pre-existing, by wrapping AC_PATH_PROGS in a
new macro PGAC_PATH_PROGS. While at it, fix a deficiency of the old
workaround code by making sure we report the setting to configure's log.
Eventually I'd like to improve PGAC_PATH_PROGS so that it converts
non-absolute override inputs to absolute form, eg "PYTHON=python3"
becomes, say, PYTHON = /usr/bin/python3. But that will take some
nontrivial coding so it doesn't seem like a thing to do in late beta.
Discussion: https://postgr.es/m/90a92a7d-938e-507a-3bd7-ecd2b4004689@2ndquadrant.com
Previously we had a mix of uses of AC_CHECK_PROG[S] and AC_PATH_PROG[S].
The only difference between those macros is that the latter emits the
full path to the program it finds, eg "/usr/bin/prove", whereas the
former emits just "prove". Let's standardize on always emitting the
full path; this is better for documentation of the build, and it might
prevent some types of failures if later build steps are done with
a different PATH setting.
I did not touch the AC_CHECK_PROG[S] calls in ax_pthread.m4 and
ax_prog_perl_modules.m4. There seems no need to make those diverge from
upstream, since we do not record the programs sought by the former, while
the latter's call to AC_CHECK_PROG(PERL,...) will never be reached.
Discussion: https://postgr.es/m/25937.1501433410@sss.pgh.pa.us
The Perl documentation is very clear that stuff calling libperl should
be built with the compiler switches shown by Perl's $Config{ccflags}.
We'd been ignoring that up to now, and mostly getting away with it,
but recent Perl versions contain ABI compatibility cross-checks that
fail on some builds because of this omission. In particular the
sizeof(PerlInterpreter) can come out different due to some fields being
added or removed; which means we have a live ABI hazard that we'd better
fix rather than continuing to sweep it under the rug.
However, it still seems like a bad idea to just absorb $Config{ccflags}
verbatim. In some environments Perl was built with a different compiler
that doesn't even use the same switch syntax. -D switch syntax is pretty
universal though, and absorbing Perl's -D switches really ought to be
enough to fix the problem.
Furthermore, Perl likes to inject stuff like -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64 into $Config{ccflags}, which affect libc ABIs on
platforms where they're relevant. Adopting those seems dangerous too.
It's unclear whether a build wherein Perl and Postgres have different ideas
of sizeof(off_t) etc would work, or whether anyone would care about making
it work. But it's dead certain that having different stdio ABIs in
core Postgres and PL/Perl will not work; we've seen that movie before.
Therefore, let's also ignore -D switches for symbols beginning with
underscore. The symbols that we actually need to import should be the ones
mentioned in perl.h's PL_bincompat_options stanza, and none of those start
with underscore, so this seems likely to work. (If it turns out not to
work everywhere, we could consider intersecting the symbols mentioned in
PL_bincompat_options with the -D switches. But that will be much more
complicated, so let's try this way first.)
This will need to be back-patched, but first let's see what the
buildfarm makes of it.
Ashutosh Sharma, some adjustments by me
Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
The TAP tests mostly don't work without IPC::Run, and the reason for
the failure is not immediately obvious from the error messages you get.
So teach configure to reject --enable-tap-tests unless IPC::Run exists.
Mostly this just involves adding ax_prog_perl_modules.m4 from the GNU
autoconf archives.
This was discussed last year, but we held off on the theory that we might
be switching to CMake soon. That's evidently not happening for v10,
so let's absorb this now.
Eugene Kazakov and Michael Paquier
Discussion: https://postgr.es/m/56BDDC20.9020506@postgrespro.ru
Discussion: https://postgr.es/m/CAB7nPqRVKG_CR4Dy_AMfE6DXcr6F7ygy2goa2atJU4XkerDRUg@mail.gmail.com
All documentation is now built using XSLT. Remove all references to
Jade, DSSSL, also JadeTex and some other outdated tooling.
For chunked HTML builds, this changes nothing, but removes the
transitional "oldhtml" target. The single-page HTML build is ported
over to XSLT. For PDF builds, this removes the JadeTex builds and moves
the FOP builds in their place.
copyObject() is declared to return void *, which allows easily assigning
the result independent of the input, but it loses all type checking.
If the compiler supports typeof or something similar, cast the result to
the input type. This creates a greater amount of type safety. In some
cases, where the result is assigned to a generic type such as Node * or
Expr *, new casts are now necessary, but in general casts are now
unnecessary in the normal case and indicate that something unusual is
happening.
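The mechanism is roughly this (sketch; HAVE_TYPEOF stands in for the
configure result):

    extern void *copyObjectImpl(const void *obj);

    #ifdef HAVE_TYPEOF
    /* cast the copy back to the static type of the argument */
    #define copyObject(obj)     ((typeof(obj)) copyObjectImpl(obj))
    #else
    #define copyObject(obj)     copyObjectImpl(obj)
    #endif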
Reviewed-by: Mark Dilger <hornschnorter@gmail.com>
Add a column collprovider to pg_collation that determines which library
provides the collation data. The existing choices are default and libc,
and this adds an icu choice, which uses the ICU4C library.
The pg_locale_t type is changed to a union that contains the
provider-specific locale handles. Users of locale information are
changed to look into that struct for the appropriate handle to use.
Also add a collversion column that records the version of the collation
when it is created, and check at run time whether it is still the same.
This detects potentially incompatible library upgrades that can corrupt
indexes and other structures. This is currently only supported by
ICU-provided collations.
initdb initializes the default collation set as before from the `locale
-a` output but also adds all available ICU locales with a "-x-icu"
appended.
Currently, ICU-provided collations can only be explicitly named
collations. The global database locales are still always libc-provided.
ICU support is enabled by configure --with-icu.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
We had some AC_CHECK_HEADER tests that were really wastes of cycles,
because the code proceeded to #include those headers unconditionally
anyway, in all or a large majority of cases. The lack of complaints
shows that those headers are available on every platform of interest,
so we might as well let configure run a bit faster by not probing
those headers at all.
I suspect that some of the tests I left alone are equally useless, but
since all the existing #includes of the remaining headers are properly
guarded, I didn't touch them.
Commit 04aad4018 added this check after the search for a Python shared
library, which seems to me to be a pretty unfriendly ordering. The
search might fail for what are basically version-related reasons, and
in such a case it'd be better to say "your Python is too old" than
"could not find shared library for Python".
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden. The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.
In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
On buildfarm member cockatiel, that library is in /usr/bin.
(Possibly we should look at $PATH on this platform?)
Per off-list report from Andrew Dunstan.
Per discussion with Andrew Dunstan, it seems there are three peculiarities
of the Python installation on MinGW (or at least, of the instance on
buildfarm member frogmouth). First, the library name doesn't contain
"2.7" but just "27". It looks like we can deal with that by consulting
get_config_vars('VERSION') to see whether a dot should be used or not.
Second, the library might be in c:/Windows/System32, analogously to it
possibly being in /usr/lib on Unix-oid platforms. Third, it's apparently
not standard to use the prefix "lib" on the file name. This patch will
accept files with or without "lib", but the first part of that may well
be dead code.
Most Unix-oid platforms provide a symlink "libfoo.so" -> "libfoo.so.n.n"
to allow the linker to resolve a reference "-lfoo" to a version-numbered
shared library. OpenBSD has apparently hacked ld(1) to do this internally,
because there are no such symlinks to be found in their library
directories. Tweak the new code in PGAC_CHECK_PYTHON_EMBED_SETUP to cope.
Per buildfarm member curculio.
Older versions of Python produce garbage (or at least useless) values of
get_config_vars('LDLIBRARY'). Newer versions produce garbage (or at least
useless) values of get_config_vars('SO'), which was defeating our configure
logic that attempted to identify where the Python shlib really is. The net
result, at least with a stock Python 3.5 installation on macOS, was that
we were linking against a static library in the mistaken belief that it was
a shared library. This managed to work, if you count statically absorbing
libpython into plpython.so as working. But it no longer works as of commit
d51924be8, because now we get separate static copies of libpython in
plpython.so and hstore_plpython.so, and those can't interoperate on the
same data. There are some other infelicities like assuming that nobody
ever installs a private version of Python on a macOS machine.
Hence, forget about looking in $python_configdir for the Python shlib;
as far as I can tell no version of Python has ever put one there, and
certainly no currently-supported version does. Also, rather than relying
on get_config_vars('SO'), just try all the possibilities for shlib
extensions. Also, rather than trusting Py_ENABLE_SHARED, believe we've
found a shlib only if it has a recognized extension. Last, explicitly
cope with the possibility that the shlib is really in /usr/lib and
$python_libdir is a red herring --- this is the actual situation on older
macOS, but we were only accidentally working with it.
Discussion: <5300.1475592228@sss.pgh.pa.us>
Using exit() requires stdlib.h, which is not included. Use return
instead. Also add return type for main().
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
We weren't terribly consistent about whether to call Apple's OS "OS X"
or "Mac OS X", and the former is probably confusing to people who aren't
Apple users. Now that Apple has rebranded it "macOS", follow their lead
to establish a consistent naming pattern. Also, avoid the use of the
ancient project name "Darwin", except as the port code name which does not
seem desirable to change. (In short, this patch touches documentation and
comments, but no actual code.)
I didn't touch contrib/start-scripts/osx/, either. I suspect those are
obsolete and due for a rewrite, anyway.
I dithered about whether to apply this edit to old release notes, but
those were responsible for quite a lot of the inconsistencies, so I ended
up changing them too. Anyway, Apple's being ahistorical about this,
so why shouldn't we be?
awk's equality-comparison operator is "==" not "=". We got this right
in many places, but not in configure's checks for supported version
numbers of flex and perl. It hadn't been noticed because unsupported
versions are so old as to be basically extinct in the wild, and because
the only consequence is whether or not a WARNING flies by during
configure.
Daniel Gustafsson noted the problem with respect to the test for flex,
I found the other by reviewing other awk calls.
Previously, we included <xlocale.h> only if necessary to get the definition
of type locale_t. According to notes in PGAC_TYPE_LOCALE_T, this is
important because on some versions of glibc that file supplies an
incompatible declaration of locale_t. (This info may be obsolete, because
on my RHEL6 box that seems to be the *only* definition of locale_t; but
there may still be glibc's in the wild for which it's a live concern.)
It turns out though that on FreeBSD and maybe other BSDen, you can get
locale_t from stdlib.h or locale.h but mbstowcs_l() and friends only from
<xlocale.h>. This was leaving us compiling calls to mbstowcs_l() and
friends with no visible prototype, which causes a warning and could
possibly cause actual trouble, since it's not declared to return int.
Hence, adjust the configure checks so that we'll include <xlocale.h>
either if it's necessary to get type locale_t or if it's necessary to
get a declaration of mbstowcs_l().
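The upshot in the code is an include guard along these lines (sketch;
the two symbols are assumed names for the new configure results):

    #include <locale.h>

    /* pull in <xlocale.h> if that's where locale_t or mbstowcs_l() and friends live */
    #if defined(LOCALE_T_IN_XLOCALE) || defined(WCSTOMBS_L_IN_XLOCALE)
    #include <xlocale.h>
    #endif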
Report and patch by Aleksander Alekseev, somewhat whacked around by me.
Back-patch to all supported branches, since we have been using
mbstowcs_l() since 9.1.
Commit e2609323e set the minimum Tcl version we support to 8.4, but
I forgot to adjust the documentation to say the same. Some nosing
around for other consequences found that the configure script could
be simplified slightly as well.
Per buildfarm member anchovy, 2.6.0 exists in the wild now.
Hopefully it works with Postgres; if not, we'll have to do something
about that, but in any case claiming it's "too old" is pretty silly.
This is like BSWAP32, but for 64-bit values. Since we've got two of
them now and they have use cases (like sortsupport) beyond CRCs, move
the definitions to their own header file.
Peter Geoghegan
Remove configure's checks for HAVE_POSIX_SIGNALS, HAVE_SIGPROCMASK, and
HAVE_SIGSETJMP. These APIs are required by the Single Unix Spec v2
(POSIX 1997), which we generally consider to define our minimum required
set of Unix APIs. Moreover, no buildfarm member has reported not having
them since 2012 or before, which means that even if the code is still live
somewhere, it's untested --- and we've made plenty of signal-handling
changes of late. So just take these APIs as given and save the cycles for
configure probes for them.
However, we can't remove as much C code as I'd hoped, because the Windows
port evidently still uses the non-POSIX code paths for signal masking.
Since we're largely emulating these BSD-style APIs for Windows anyway, it
might be a good thing to switch over to POSIX-like notation and thereby
remove a few more #ifdefs. But I'm not in a position to code or test that.
In the meantime, we can at least make things a bit more transparent by
testing for WIN32 explicitly in these places.
With optimizations enabled, at least one compiler, clang 3.7, optimized
away the CRC intrinsics, knowing that the result went unused and the
computation has no side effects. That can trigger errors in code
generation when the
intrinsic is used, as we chose to use the intrinsics without any
additional compiler flag. Return the computed value to prevent that.
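The fixed probe looks roughly like this (sketch; as before it is
compiled both with and without -msse4.2 to classify the platform):

    #include <nmmintrin.h>

    int
    main(void)
    {
        unsigned int crc = 0;

        crc = _mm_crc32_u8(crc, 0);
        crc = _mm_crc32_u32(crc, 0);
        /* return the computed value so the calls cannot be optimized away */
        return (int) crc;
    }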
With some more pedantic warning flags (-Wold-style-definition), the
configure test failed to recognize the existence of the _mm_crc32_u*
intrinsics: an unrelated warning in the test program caused it to fail
because the test turned on -Werror, which isn't actually needed here.
Discussion: 20150814092039.GH4955@awork2.anarazel.de
Backpatch: 9.5, where the use of crc intrinsics was integrated.
So far we have worked around the fact that some very old compilers do
not support 'inline' functions by only using inline functions
conditionally (or not at all). Since such compilers are very rare by
now, we have decided to rely on inline functions from 9.6 onwards.
To avoid breaking these old compilers, inline is defined away when not
supported. That'll cause "function x defined but not used" warnings,
but since nobody develops on such compilers anymore, that's OK.
This change in policy will allow us to more easily employ inline
functions.
I chose to remove code previously conditional on PG_USE_INLINE as it
seemed confusing to have code dependent on a define that's always
defined.
Blacklisting of compilers, like in c53f73879f, now has to be done
differently. A platform template can define PG_FORCE_DISABLE_INLINE to
force inline to be defined empty.
Discussion: 20150701161447.GB30708@awork2.anarazel.de
The current version is adding a spurious -pthread option on some Darwin
systems that don't need it, which leads to a bunch of "unrecognized option
'-pthread'" warnings. There is a proposed fix for that in the upstream
autoconf archive's bug tracker, see https://savannah.gnu.org/patch/?8186.
This commit updates our version of ax_pthread.m4 to the "draft2" version
proposed there by Daniel Richard G. I'm using our buildfarm to help Daniel
to test this, before he commits this to the upstream repository.
Per a suggestion from Tom Lane. Back-patch to 9.0 (all supported
versions). While only 9.4 and up have code known to elicit this
compiler bug, we were disabling inlining by accident until commit
43d89a23d5.
Our version was different from the upstream version in that we tried to use
all possible pthread-related flags that the compiler accepts, rather than
just the first one that works. That change was made in commit
e48322a6d6, to work around a bug affecting GCC
versions 3.2 and below (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=8888),
although we didn't realize that it was a GCC bug at the time. We hardly
care about such old GCC versions anymore, so we no longer need that
workaround.
This fixes the macro for compilers that print warnings with the chosen
flags. That's pretty annoying in its own right, but it also inconspicuously
disabled thread-safety, because we refused to use any pthread-related flags
if the compiler produced warnings. Max Filippov reported that problem when
linking with uClibc and OpenSSL. The warnings-check was added because the
workaround for the GCC bug caused warnings otherwise, so it's no longer
needed either. We can just use the upstream version as is.
If you really want to compile with GCC version 3.2 or older, you can
still work around it by setting PTHREAD_CFLAGS="-pthread -lpthread"
manually on the configure command line.
Backpatch to 9.5. I don't want to unnecessarily rock the boat on stable
branches, but 9.5 seems like fair game.
AC_TRY_COMPILE(...) -> AC_COMPILE_IFELSE([AC_LANG_PROGRAM(...)])
AC_TRY_LINK(...) -> AC_LINK_IFELSE([AC_LANG_PROGRAM(...)])
AC_TRY_RUN(...) -> AC_RUN_IFELSE([AC_LANG_PROGRAM(...)])
AC_LANG_SAVE/RESTORE -> AC_LANG_PUSH/POP
AC_DECL_SYS_SIGLIST -> AC_CHECK_DECLS(...) (per snippet in autoconf manual)
Also use AC_LANG_SOURCE instead of AC_LANG_PROGRAM, where the main()
function is not needed.
With these changes, autoconf -Wall doesn't complain anymore.
Andreas Karlsson
According to recent tests, this case now works fine, so there's no reason
to reject it anymore. (Even if there are still some OpenBSD platforms
in the wild where it doesn't work, removing the check won't break any case
that worked before.)
We can actually remove the entire test that discovers whether libpython
is threaded, since without the OpenBSD case there's no need to know that
at all.
Per report from Davin Potts. Back-patch to all active branches.