Error messages informing the user that no such column exists can
sometimes provoke a perplexed response. This often happens due to
a subtle typo in the column name or, perhaps less likely, in the
alias name. To speed discovery of what the real issue is in such
cases, we'll now search the range table for approximate matches.
If there are one or two such matches that are good enough to think
that they might be what the user intended to type, and better than
all other approximate matches, we'll issue a hint suggesting that
the user might have intended to reference those columns.
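As an illustration, a session might now look like this (table and column
names are hypothetical, and the exact hint wording may vary):

    =# SELECT nmae FROM users;
    ERROR:  column "nmae" does not exist
    HINT:  Perhaps you meant to reference the column "users.name".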
Peter Geoghegan and Robert Haas
Until now __attribute__() was defined to be empty for all compilers but
gcc. That's problematic because it prevents using it in other compilers;
which is necessary e.g. for atomics portability. It's also just
generally dubious to do so in a header as widely included as c.h.
Instead add pg_attribute_format_arg, pg_attribute_printf,
pg_attribute_noreturn macros which are implemented in the compilers that
understand them. Also add pg_attribute_aligned and pg_attribute_packed,
but don't provide fallbacks, since they can affect functionality.
This means that external code that, possibly unwittingly, relied on
__attribute__ defined to be empty on !gcc compilers may now run into
warnings or errors on those compilers. But there shouldn't be many
occurrences of that and it's hard to work around...
Discussion: 54B58BA3.8040302@ohmu.fi
Author: Oskari Saarenmaa, with some minor changes by me.
When the newly-added GUC parameter wal_compression is on, the PostgreSQL server
compresses a full page image written to WAL when full_page_writes is on or
during a base backup. A compressed page image will be decompressed during WAL
replay. Turning this parameter on can reduce the WAL volume without increasing
the risk of unrecoverable data corruption, but at the cost of some extra CPU
spent on the compression during WAL logging and on the decompression during
WAL replay.
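For example, the parameter might be enabled like this (a sketch; it can
also simply be set in postgresql.conf):

    ALTER SYSTEM SET wal_compression = on;
    SELECT pg_reload_conf();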
This commit changes the WAL format (bumping the WAL version number) so that
the one-byte flag indicating whether a full page image is compressed or not
is included in its header information. This means that the commit increases
the WAL volume by one byte per full page image even if WAL compression is
not used at all. We could save that byte by borrowing a bit from an existing
header field such as hole_offset and using it as the flag, for example, but
that would reduce the readability of the code and the extensibility of the
feature. Per discussion, it's not worth paying those prices to save only one
byte, so we decided to add the one-byte flag to the header.
This commit doesn't introduce any new compression algorithm like lz4.
Currently a full page image is compressed using the existing PGLZ algorithm.
Per discussion, we decided to use it at least in the first version of the
feature because there were no performance reports showing that its compression
ratio is unacceptably lower than that of other algorithms. Of course,
in the future, it's worth considering support for other compression
algorithms to get better compression.
Rahila Syed and Michael Paquier, reviewed in various versions by myself,
Andres Freund, Robert Haas, Abhijit Menon-Sen and many others.
Commit 865f14a2d31af23a05bbf2df04c274629c5d5c4d was quite a few bricks
shy of a load: psql, ecpg, and plpgsql were all left out-of-step with
the core lexer. Of these only the last was likely to be a fatal
problem; but still, a minimal amount of grepping, or even just reading
the comments adjacent to the places that were changed, would have found
the other places that needed to be changed.
The introduction in the Shared Library Preloading section already
instructs the user to separate multiple library names with commas, so
just remove the fragment from here.
Author: Dagfinn Ilmari Mannsåker
... which is the usual convention among AMs, so that pg_filedump and
similar utilities can tell apart pages of different AMs. It was also
the intent of the original code, but I failed to realize that alignment
considerations would move the whole thing to the previous-to-last word
in the page.
The new definition of the associated macro makes surrounding code a bit
leaner, too.
Per note from Heikki at
http://www.postgresql.org/message-id/546A16EF.9070005@vmware.com
SQL has standardized on => as the notation for specifying named parameters,
and we've wanted for many years to support the same syntax ourselves,
but this has been complicated by the possible use of => as an operator
name. In PostgreSQL 9.0, we began emitting a warning when an operator
named => was defined, and in PostgreSQL 9.2, we stopped shipping a
=>(text, text) operator as part of hstore. By the time the next major
version of PostgreSQL is released, => will have been deprecated for a
full five years, so hopefully there won't be too many people still
relying on it. We continue to support := for compatibility with
previous PostgreSQL releases.
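For example, both notations now work for functions with named parameters,
such as make_interval:

    SELECT make_interval(days => 10);   -- SQL-standard notation
    SELECT make_interval(days := 10);   -- older notation, still supported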
Pavel Stehule, reviewed by Petr Jelinek, with a few documentation
tweaks by me.
We allow this module to be turned off on restarts, so a restart time
check is enough to activate or deactivate the module; however, if there
is a standby replaying WAL emitted from a master which is restarted, but
the standby isn't, the state in the standby becomes inconsistent and the
standby can easily crash.
Fix by activating and deactivating the module during WAL replay on
parameter change as well as on system start.
Problem reported by Fujii Masao in
http://www.postgresql.org/message-id/CAHGQGwFhJ3CnHo1CELEfay18yg_RA-XZT-7D8NuWUoYSZ90r4Q@mail.gmail.com
Author: Petr Jelínek
ALTER DEFAULT PRIVILEGES was trying to decode the list of roles in the
FOR clause as a list of names rather than of RoleSpecs; and the IN
clause in CREATE ROLE was doing the same thing. This was evidenced by
crashes on some buildfarm machines, though by mere chance it doesn't cause
a failure on my platform; I can reproduce the failures only by adding some
padding to struct RoleSpec.
Fix by dereferencing those lists as lists of RoleSpecs, not of string
Values.
Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the
various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to
roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause
of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT
PRIVILEGES can now take the keywords CURRENT_USER and SESSION_USER as
user specifiers in place of an explicit user name.
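For example (object names are hypothetical):

    ALTER TABLE mytab OWNER TO CURRENT_USER;
    CREATE SCHEMA myschema AUTHORIZATION SESSION_USER;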
This commit also fixes some quite ugly handling of special standards-
mandated syntax in CREATE USER MAPPING, which in particular would fail
to work in the presence of a role named "current_user".
The special role specifiers PUBLIC and NONE also have more consistent
handling now.
Also take the opportunity to add location tracking to user specifiers.
Authors: Kyotaro Horiguchi. Heavily reworked by Álvaro Herrera.
Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
Commit 5cefbf5a6c4466ac6b1cc2a4316b4eba9108c802 introduced an
assumption that this field would always be non-NULL when doing a merge
pass, but that's not true. Without this fix, you can crash the server
by building a hash index that is sufficiently large relative to
maintenance_work_mem, or by triggering a large datum sort.
Commit 5ea86e6e65dd2da3e9a3464484985d48328e7fe3 changed the comments
for that field to say that it would be set in all cases except for the
hash index case, but that wasn't (and still isn't) true.
The datum-sort failure was spotted by Tomas Vondra; initial analysis
of that failure was by Peter Geoghegan. The remaining issues were
spotted by me during review of the surrounding code, and the patch is
all my fault.
This makes it easier to write frontend programs that need to understand
the WAL record format of CREATE/DROP DATABASE. dbcommands.h cannot easily
be #included in a frontend program, because it pulls in other header files
that need backend stuff, but the new dbcommands_xlog.h header file has
fewer dependencies.
This is a possibly-vain effort to silence a Coverity warning about
bogus endianness dependency. The code's fine, because it takes care
of endianness issues for itself, but Coverity sees an int64 being
passed to an int* argument and not unreasonably suspects something's
wrong. I'm not sure if putting the void* cast in the way will shut it
up; but it can't hurt and seems better from a documentation standpoint
anyway, since the pointer is not used as an int* in this code path.
Just for a bit of additional safety, verify that the result length
is 8 bytes as expected.
Back-patch to 9.3 where the code in question was added.
This struct is purely a client-side artifact. Perhaps there was once
reason for the server to know it, but any such reason is lost in the
mists of time. We certainly don't need two independent declarations
of it.
The SGML docs claimed that 1-byte integers could be sent or received with
the "isint" options, but no such behavior has ever been implemented in
pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was
even less in tune with reality, and the code itself used parameter names
matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit
of additional wordsmithing on the SGML docs while at it.
Since the business about 1-byte integers is a clear documentation bug,
back-patch to all supported branches.
By building it unconditionally, libpgport inadvertently replaced any
libc version of the function. This is essentially a code cleanup; any
effect on performance is almost surely too small to notice.
This role attribute is an ancient PostgreSQL feature, but could only be
set by directly updating the system catalogs, and it doesn't have any
clearly defined use.
Author: Adam Brightwell <adam.brightwell@crunchydatasolutions.com>
Commit 7b583b20b1c95acb621c71251150beef958bb603 created an unnecessary
dump failure hazard by applying pg_get_function_identity_arguments()
to every function in the database, even those that won't get dumped.
This could result in snapshot-related problems if concurrent sessions are,
for example, creating and dropping temporary functions, as noted by Marko
Tiikkaja in bug #12832. While this is by no means pg_dump's only such
issue with concurrent DDL, it's unfortunate that we added a new failure
mode for cases that used to work, and even more so that the failure was
created for basically cosmetic reasons (ie, to sort overloaded functions
more deterministically).
To fix, revert that patch and instead sort function arguments using
information that pg_dump has available anyway, namely the names of the
argument types. This will produce a slightly different sort ordering for
overloaded functions than the previous coding; but applying strcmp
directly to the output of pg_get_function_identity_arguments really was
a bit odd anyway. The sorting will still be name-based and hence
independent of possibly-installation-specific OID assignments. A small
additional benefit is that sorting now works regardless of server version.
Back-patch to 9.3, where the previous commit appeared.
We were using "user mapping for user XYZ" as the description for user mappings, but
that's ambiguous because users can have mappings on multiple foreign
servers; therefore change it to "for user XYZ on server UVW" instead.
Object identities for user mappings are also updated in the same way, in
branches 9.3 and above.
The incomplete description string was introduced together with the whole
SQL/MED infrastructure by commit cae565e503 of 8.4 era, so backpatch all
the way back.
An OID return value was being used only for a (rather pointless) assert.
Silence by removing the variable and the assert.
Per note from Peter Geoghegan
I had thought that there was no need to maintain separate cache entries
for different source typmods, but further experimentation shows that there
is an advantage to doing so in some cases. In particular, if a domain has
a typmod (say, "CREATE DOMAIN d AS numeric(20,0)"), failing to notice the
source typmod leads to applying a length-coercion step even when the
source has the correct typmod.
This is because can_coerce_type thinks that RECORD can be cast to any
composite type, but coerce_record_to_complex only works for inputs that are
RowExprs or whole-row Vars, so we get a hard failure on a CaseTestExpr.
Perhaps these corner cases ought to be fixed so that coerce_to_target_type
actually returns NULL as per its specification, rather than failing ...
but for the moment an extra check here is the path of least resistance.
plpgsql's historical method for converting datatypes during assignments was
to apply the source type's output function and then the destination type's
input function. Aside from being miserably inefficient in most cases, this
method failed outright in many cases where a user might expect it to work;
an example is that "declare x int; ... x := 3.9;" would fail, not round the
value to 4.
Instead, let's convert by applying the appropriate assignment cast whenever
there is one. To avoid breaking compatibility unnecessarily, fall back to
the I/O conversion method if there is no assignment cast.
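For instance, the example above now behaves like this (a minimal sketch):

    DO $$
    DECLARE
      x int;
    BEGIN
      x := 3.9;                 -- numeric-to-int assignment cast rounds
      RAISE NOTICE 'x = %', x;  -- prints: x = 4
    END
    $$;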
So far as I can tell, there is just one case where this method produces a
different result than the old code in a case where the old code would not
have thrown an error. That is assignment of a boolean value to a string
variable (type text, varchar, or bpchar); the old way gave boolean's output
representation, ie 't'/'f', while the new way follows the behavior of the
bool-to-text cast and so gives 'true' or 'false'. This will need to be
called out as an incompatibility in the 9.5 release notes.
Aside from handling many conversion cases more sanely, this method is
often significantly faster than the old way. In part that's because
of more effective caching of the conversion info.
genericcostestimate() and friends used the cost of the entire indexqual
expressions as the charge for initial evaluation of indexscan arguments.
But of course the index column is not evaluated, only the other side
of the qual expression, so this was a bad overestimate if the index
column was an expensive expression.
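As a hedged illustration (names hypothetical), consider an index on an
expensive expression:

    CREATE INDEX ON t (slow_fn(x));
    -- In "WHERE slow_fn(x) = 42", only the constant 42 is evaluated to
    -- start the scan; the old code nonetheless charged for evaluating
    -- slow_fn(x) too, inflating the indexscan cost estimate.
    EXPLAIN SELECT * FROM t WHERE slow_fn(x) = 42;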
To fix, refactor the logic in this area so that there's a single routine
charged with deconstructing index quals and figuring out what is the index
column and what is the comparison expression. This is more or less free in
the case of btree indexes, since btcostestimate() was doing equivalent
deconstruction already. It probably adds a bit of new overhead in the cases
of other index types, but not a lot. (In the case of GIN I think I saved
something by getting rid of code that wasn't aware that the index column
associations were already available "for free".)
Per recent gripe from Jeff Janes.
Arguably this is a bug fix, but I'm hesitant to back-patch because of the
possibility of destabilizing plan choices that people may be happy with.
This code relied on pointer equality to identify which restriction clauses
also appear in the indexquals (and, therefore, don't need to be applied as
simple filter conditions). That was okay once upon a time, years ago,
before we introduced the equivalence-class machinery. Now there's about a
50-50 chance that an equality clause appearing in the indexquals will be
the mirror image (commutator) of its mate in the restriction list. When
that happens, we'd erroneously think that the clause would be re-evaluated
at each visited row, and therefore inflate the cost estimate for the
indexscan by the clause's cost.
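A hedged example of the problem (names hypothetical):

    -- The query writes the clause one way...
    SELECT * FROM t WHERE expensive_fn(42) = t.indexed_col;
    -- ...but the indexqual may be its commutator,
    --   indexed_col = expensive_fn(42),
    -- a distinct clause object, so pointer comparison fails to match it
    -- against the restriction list and expensive_fn's cost gets charged
    -- again as a per-row filter.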
Add some logic to catch this case. It seems to me that it continues not to
be worthwhile to expend the extra predicate-proof work that createplan.c
will do on the finally-selected plan, but this case is common enough and
cheap enough to handle that we should do so.
This will make a small difference (about one cpu_operator_cost per row)
in simple cases; but in situations where there's an expensive function in
the indexquals, it can make a very large difference, as seen in recent
example from Jeff Janes.
This is a long-standing bug, but I'm hesitant to back-patch because of the
possibility of destabilizing plan choices that people may be happy with.
Passing a NULL pstate wouldn't actually work, because isLockedRefname()
isn't prepared to cope with it; and there hasn't been any in-core code
that tries in over a decade. So just remove the residual NULL handling.
Spotted by Coverity; analysis and patch by Michael Paquier.
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint.
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
There's no reason to make users write an explicit cast to store a
json value in a jsonb column or vice versa.
We could probably even make these implicit, but that might open us up
to problems with ambiguous function calls, so for now just do this.
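For example (a sketch):

    CREATE TABLE t (j jsonb);
    -- previously this required an explicit cast to jsonb:
    INSERT INTO t VALUES ('{"a": 1}'::json);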
My commit 878fdcb843e087cc1cdeadc987d6ef55202ddd04 was not quite
right. Tom Lane pointed out one of the mistakes fixed here, and I
noticed the other myself while reviewing what I'd committed.
Previously, you could do \set variable operand1 operator operand2, but
nothing more complicated. Now, you can \set variable expression, which
makes it much simpler to do multi-step calculations here. This also
adds support for the modulo operator (%), with the same semantics as in
C.
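For example, a script might now compute (a sketch using pgbench's built-in
:scale variable; other names are hypothetical):

    \set naccounts 100000 * :scale
    \set aid (1021 * :naccounts + 617) % :naccounts + 1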
Robert Haas and Fabien Coelho, reviewed by Álvaro Herrera and
Stephen Frost
Since 9.1, we've provided extensions with a way to denote
"configuration" tables- tables created by an extension which the user
may modify. By marking these as "configuration" tables, the extension
is asking for the data in these tables to be pg_dump'd (tables which
are not marked in this way are assumed to be entirely handled during
CREATE EXTENSION and are not included at all in a pg_dump).
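For reference, an extension script marks such a table roughly like this
(table name hypothetical; pg_extension_config_dump's second argument is a
WHERE filter, empty to dump all rows):

    CREATE TABLE my_config (key text PRIMARY KEY, value text);
    SELECT pg_catalog.pg_extension_config_dump('my_config', '');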
Unfortunately, pg_dump neglected to consider foreign key relationships
between extension configuration tables and therefore could end up
trying to reload the data in an order which would cause FK violations.
This patch teaches pg_dump about these dependencies, so that the data is
dumped in the best possible order. Note that there's no
way to handle circular dependencies, but those have yet to be seen in
the wild.
The release notes for this should include a caution to users that
existing pg_dump-based backups may be invalid due to this issue. The
data is all there, but restoring from it will require extracting the
data for the configuration tables and then loading them in the correct
order by hand.
Discussed initially back in bug #6738, more recently brought up by
Gilles Darold, who provided an initial patch which was further reworked
by Michael Paquier. Further modifications and documentation updates
by me.
Back-patch to 9.1 where we added the concept of extension configuration
tables.