and nail a couple more system indexes into cache. This doesn't make
any difference in normal system operation, but when forcing constant
cache resets it's difficult to get through the rules regression test
without these changes.
access information about the prepared statements that are available
in the current session. Original patch from Joachim Wieland, various
improvements by Neil Conway.
The "statement" column of the view contains the literal query string
sent by the client, without any rewriting or pretty printing. This
means that prepared statements created via SQL will be prefixed with
"PREPARE ... AS ", whereas those prepared via the FE/BE protocol will
not. That is unfortunate, but discussion on -patches did not yield an
efficient way to improve this, and there is some merit in returning
exactly what the client sent to the backend.
Catalog version bumped, regression tests updated.
use it. While it normally has been opened earlier during btree index
build, testing shows that it's possible for the link to be closed again
if an sinval reset occurs while the index is being built.
dead and have become unreferenced. Before 8.1, such members were left
for AtEOXact_CatCache() to clean up, but now AtEOXact_CatCache isn't
supposed to have anything to do. In an assert-enabled build this bug
leads to an assertion failure at transaction end, but in a non-assert
build the dead member is effectively just a small memory leak.
Per report from Jeremy Drake.
rather than elog(FATAL), when there is no more room in ShmemBackendArray.
This is a security issue since too many connection requests arriving close
together could cause the postmaster to shut down, resulting in denial of
service. Reported by Yoshiyuki Asaba, fixed by Magnus Hagander.
the relation but it finds a pre-existing valid buffer. The buffer does not
correspond to any page known to the kernel, so we *must* do smgrextend to
ensure that the space becomes allocated. The 7.x branches all do this
correctly, but the corner case got lost somewhere during 8.0 bufmgr rewrites.
(My fault no doubt :-( ... I think I assumed that such a buffer must be
not-BM_VALID, which is not so.)
an LWLock instead of a spinlock. This hardly matters on Unix machines
but should improve startup performance on Windows (or any port using
EXEC_BACKEND). Per previous discussion.
from Andrus Moor. The former state-machine-style coding wasn't actually
doing much except obscuring the control flow, and it didn't extend
readily to fix this case, so I just took it out. Also, add a
YY_FLUSH_BUFFER call to ensure the lexer is reset correctly if the
previous scan failed partway through the file.
a little bit, and set the minimum buffers-per-connection ratio to 10 not
5. I folded the two test routines into one to counteract the illusion
that the tests can be twiddled independently, and added some documentation
pointing out the necessary connection between the sets of values tested.
Fixes strange choices of parameters that I noticed CVS tip making on
Darwin with Apple's undersized default SHMMAX.
selection of a field from the result of a function returning RECORD.
I believe this case is new in 8.1; it's due to the addition of OUT parameters.
Per example from Michael Fuhr.
in favor of having just one set of macros that don't do HOLD/RESUME_INTERRUPTS
(hence, these correspond to the old SpinLockAcquire_NoHoldoff case).
Given our coding rules for spinlock use, there is no reason to allow
CHECK_FOR_INTERRUPTS to be done while holding a spinlock, and also there
is no situation where ImmediateInterruptOK will be true while holding a
spinlock. Therefore doing HOLD/RESUME_INTERRUPTS while taking/releasing a
spinlock is just a waste of cycles. Qingqing Zhou and Tom Lane.
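As a rough illustration of the resulting macro layer (the bodies below are a sketch, not the committed spin.h text), acquire/release now map straight onto the hardware primitives with no interrupt-holdoff bookkeeping:

#include "storage/s_lock.h"

/* Sketch: the _NoHoldoff variants are gone, and the plain names no
 * longer touch HOLD/RESUME_INTERRUPTS. */
#define SpinLockInit(lock)     S_INIT_LOCK(lock)
#define SpinLockAcquire(lock)  S_LOCK(lock)
#define SpinLockRelease(lock)  S_UNLOCK(lock)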
setup. This protects against undesired changes in locale behavior
if someone carelessly does setlocale(LC_ALL, "") (and we know who
you are, perl guys).
get_func_arg_info() for consistency with other names there.
This code will probably be useful to other PLs when they start to
support OUT parameters, so better to have it in the main backend.
Also, fix plpgsql validator to detect bogus OUT parameters even when
check_function_bodies is off.
(previously we only did = and <> correctly). Also, allow row comparisons
with any operators that are in btree opclasses, not only those with these
specific names. This gets rid of a whole lot of indefensible assumptions
about the behavior of particular operators based on their names ... though
it's still true that IN and NOT IN expand to "= ANY". The patch adds a
RowCompareExpr expression node type, and makes some changes in the
representation of ANY/ALL/ROWCOMPARE SubLinks so that they can share code
with RowCompareExpr.
I have not yet done anything about making RowCompareExpr an indexable
operator, but will look at that soon.
initdb forced due to changes in stored rules.
if (c == '\\' && cstate->line_buf.len == 0)
The problem with that is that, because of the input and _output_
buffering, cstate->line_buf.len could be zero even if we are not on the
first character of a line. In fact, for a typical line, it is zero for
all characters on the line. The proper solution is to introduce a
boolean, first_char_in_line, that we set as we enter the loop and clear
once we process a character.
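A minimal sketch of the fix, with hypothetical local names (the real code works through the copy.c buffering macros rather than a plain string):

#include <stdbool.h>

/* Illustration only: detect the first character of a line with an
 * explicit flag instead of testing cstate->line_buf.len == 0. */
static void
scan_copy_line(const char *line)
{
    const char *p;
    bool        first_char_in_line = true;

    for (p = line; *p != '\0' && *p != '\n'; p++)
    {
        if (*p == '\\' && first_char_in_line)
        {
            /* possible start of the end-of-copy marker "\." */
        }

        /* ... normal processing of *p ... */

        first_char_in_line = false;     /* cleared after the first character */
    }
}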
I have restructured the line-reading code in copy.c by:
o merging the CSV/non-CSV functions into a single function
o using macros to centralize and clarify the buffering code
o updating comments
o renaming client_encoding_only to encoding_embeds_ascii
o adding a high-bit test to the encoding_embeds_ascii test for
performance
o in CSV mode, allowing a backslash followed by a non-period to
continue being processed as a data value
There should be no performance impact from this patch because it is
functionally equivalent. If you apply the patch you will see that copy.c
is much clearer in this area now, which might suggest additional
optimizations.
I have also attached an 8.1-only patch to fix the CSV \. handling bug
with no code restructuring.
- use "bool" rather than "int" for boolean variables
- use "PLy_malloc" rather than "malloc" in two places
- define "PLy_strdup", and use it rather than malloc() + strcpy() in
two places (which should have been memcpy(), anyway).
- remove a bunch of redundant parentheses from expressions that do not
need the parentheses for code clarity
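A sketch of what the new wrapper plausibly looks like (PLy_malloc is assumed to be declared elsewhere in plpython.c and to report out-of-memory itself; this is not a quote of the committed code):

#include <string.h>

extern void *PLy_malloc(size_t size);   /* assumed existing helper */

/* Duplicate a string with the PL/Python allocator; no NULL check is
 * needed because PLy_malloc errors out on allocation failure. */
static char *
PLy_strdup(const char *str)
{
    size_t  len = strlen(str) + 1;
    char   *result = PLy_malloc(len);

    memcpy(result, str, len);
    return result;
}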
#define HIGHBIT (0x80)
#define IS_HIGHBIT_SET(ch) ((unsigned char)(ch) & HIGHBIT)
and removed CSIGNBIT and mapped its uses to HIGHBIT. I have also added
uses of IS_HIGHBIT_SET where appropriate. This change is
purely for code clarity.
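For example, a caller can now write the check against the macro rather than open-coding the mask (a minimal illustration, not a quote of any particular call site):

/* Count bytes that could begin a multibyte character, using the
 * IS_HIGHBIT_SET macro defined above. */
static int
count_highbit_bytes(const char *s)
{
    int     n = 0;

    for (; *s; s++)
        if (IS_HIGHBIT_SET(*s))
            n++;
    return n;
}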
See:
Subject: [HACKERS] bugs with certain Asian multibyte charsets
From: Tatsuo Ishii <ishii@sraoss.co.jp>
To: pgsql-hackers@postgresql.org
Date: Sat, 24 Dec 2005 18:25:33 +0900 (JST)
for more details.
differ by more than the last directory component. Instead of insisting
that they match up to the last component, accept whatever common prefix
they have, and try to replace the non-matching part of bin_path with
the non-matching part of target_path in the actual executable's path.
In one way this is tighter than the old code, because it insists on
a match to the part of bin_path we want to substitute for, rather than
blindly stripping one directory component from the executable's path.
Per gripe from Martin Pitt and subsequent discussion.
Also make the code more robust by searching for target encoding
in the internal charset map.
Problem reported by Sagi Bashari on 2005/12/21.
See "[BUGS] BUG #2120: Crash when doing UTF8<->ISO_8859_8 encoding conversion"
on pgsql-bugs list for more details.
equal: if strcoll claims two strings are equal, check it with strcmp, and
sort according to strcmp if not identical. This fixes inconsistent
behavior under glibc's hu_HU locale, and probably under some other locales
as well. Also, take advantage of the now-well-defined behavior to speed up
texteq, textne, bpchareq, bpcharne: they may as well just do a bitwise
comparison and not bother with strcoll at all.
NOTE: affected databases may need to REINDEX indexes on text columns to be
sure they are self-consistent.
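In outline the comparison becomes the following (a sketch of the idea only; the real varstr_cmp also has to copy its arguments to get null-terminated strings before calling strcoll):

#include <string.h>

/* Locale-aware compare that never reports equality for bytewise
 * different strings: break strcoll "ties" with strcmp. */
static int
locale_str_cmp(const char *a, const char *b)
{
    int     result = strcoll(a, b);

    if (result == 0)
        result = strcmp(a, b);
    return result;
}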
Per my recent proposal. I ended up basing the implementation on the
existing mechanism for enforcing valid join orders of IN joins --- the
rules for valid outer-join orders are somewhat similar.
file. The original code probed the PGPROC array separately for each PID,
which was not good for large numbers of backends: not only is the runtime
O(N^2) but most of it is spent holding ProcArrayLock. Instead, take the
lock just once and copy the active PIDs into an array, then use qsort
and bsearch so that the lookup time is more like O(N log N).
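The shape of the change, sketched with assumed helper names (the real code copies the PIDs out of the PGPROC array while holding ProcArrayLock, then drops the lock before sorting):

#include <stdlib.h>
#include <stdbool.h>
#include <sys/types.h>

static int
pid_compare(const void *a, const void *b)
{
    pid_t   pa = *(const pid_t *) a;
    pid_t   pb = *(const pid_t *) b;

    return (pa < pb) ? -1 : (pa > pb) ? 1 : 0;
}

/* One-time setup: qsort(pids, npids, sizeof(pid_t), pid_compare);
 * after that each per-lock lookup is just a bsearch. */
static bool
backend_pid_is_active(pid_t pid, const pid_t *pids, int npids)
{
    return bsearch(&pid, pids, npids, sizeof(pid_t), pid_compare) != NULL;
}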
messages, when a client attempts to execute these outside a transaction (start
one) or in a failed transaction (reject message, except for COMMIT/ROLLBACK
statements which we can handle). Per report from Francisco Figueiredo Jr.
reduce contention for the former single LockMgrLock. Per my recent
proposal. I set it up for 16 partitions, but on a pgbench test this
gives only a marginal further improvement over 4 partitions --- we need
to test more scenarios to choose the number of partitions.
that simplify_boolean_equality() may leave behind. This is only relevant
if the user writes something a bit silly, like CASE x=y WHEN TRUE THEN.
Per example from Michael Fuhr; may or may not explain bug #2106.
the data defining the semantics of a lock method (ie, conflict resolution
table and ancillary data, which is all constant) and the hash tables
storing the current state. The only thing we give up by this is the
ability to use separate hashtables for different lock methods, but there
is no need for that anyway. Put some extra fields into the LockMethod
definition structs to clean up some other uglinesses, like hard-wired
tests for DEFAULT_LOCKMETHOD and USER_LOCKMETHOD. This commit doesn't
do anything about the performance issues we were discussing, but it clears
away some of the underbrush that's in the way of fixing that.
> Now the arguments of the function being dropped can be tab-completed, for example:
>
> drop function strpos (
> <press tab>
> drop FUNCTION strpos (text, text)
>
> or:
>
> wsdb=# drop FUNCTION length (
> bit) bytea) character) lseg) path) text)
> <press c>
> wsdb# DROP FUNCTION length ( character)
>
> I think that this patch should be rather useful. At least, I hate
> always having to type all the arguments of the functions being dropped.
>
> 2) Also, some fixes were applied for the
> CREATE INDEX syntax:
>
> now the parentheses are inserted by pressing tab.
> suppose I have the table q3c:
Sergey E. Koposov
I have a problem when building with MS VC6: errors occur in the current
8.1.0 sources when running
nmake -f win32.mak
..\..\port\getaddrinfo.c(244) : error C2065: 'WSA_NOT_ENOUGH_MEMORY'
..\..\port\getaddrinfo.c(342) : error C2065: 'WSATYPE_NOT_FOUND'
These symbols come from winsock2.h, but the Windows build is based on
winsock.h. MinGW's environment supplies them, so the code is right
there, but they are not found under VC6.
Furthermore, in getaddrinfo.c the IPv6 API is loaded via
LoadLibraryA("ws2_32");
and referencing the DLL that way generates a violation under VC6.
I considered converting the whole build to winsock2. However, the DLL
built with MinGW currently works fine as it is, and keeping it allows
flexibility through simple DLL replacement. Therefore, I propose using
winsock (non-IPv6) when building with VC6.
Hiroshi Saito
_bt_checkkeys(), instead of checking it in the top-level nbtree.c routines
as formerly. This saves a little bit of loop overhead, but more importantly
it lets us skip performing the index key comparisons for dead tuples.
checks, which were once needed because PageGetMaxOffsetNumber would
fail on empty pages, but are now just redundant. Also, don't set up
local variables that aren't needed in the fast path --- most of the
time, we only need to advance offnum and not step across a page boundary.
Motivated by noticing _bt_step at the top of an OProfile profile for a
pgbench run.
SLRU area. The number of slots is still a compile-time constant (someday
we might want to change that), but at least it's a different constant for
each SLRU area. Increase number of subtrans buffers to 32 based on
experimentation with a heavily subtrans-bashing test case, and increase
number of multixact member buffers to 16, since it's obviously silly for
it not to be at least twice the number of multixact offset buffers.
lock, not exclusive, if the desired page is already in memory. This can
be demonstrated to be a significant win on the pg_subtrans cache when there
is a large window of open transactions. It should be useful for pg_clog
as well. I didn't try to make GetMultiXactIdMembers() use the code, as
that would have taken some restructuring, and what with the local cache
for multixact contents it probably wouldn't really make a difference.
Per my recent proposal.
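In outline, the new read path does something like this (a sketch with assumed helper, field, and argument names; only the shared-then-exclusive locking pattern is the point):

/* Sketch of the idea behind the new read-only SLRU entry point. */
LWLockAcquire(shared->ControlLock, LW_SHARED);
slotno = find_resident_valid_page(shared, pageno);  /* hypothetical helper */
if (slotno >= 0)
    return slotno;              /* caller reads under the shared lock */

/* Page not buffered: fall back to the regular exclusive-lock path. */
LWLockRelease(shared->ControlLock);
LWLockAcquire(shared->ControlLock, LW_EXCLUSIVE);
return SimpleLruReadPage(ctl, pageno, xid);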
clauses even if it's an outer join. This is a corner case since such
clauses could only arise from weird OUTER JOIN ON conditions, but worth
fixing. Per example from Ron at cheapcomplexdevices.com.
incorrect implementation of argument reordering, arbitrary limit of output
size for sprintf and fprintf, willingness to access more bytes than "%.Ns"
specification allows, wrong formatting of LONGLONG_MIN, various field-padding
bugs and omissions. I believe it now accurately implements a subset of
the Single Unix Spec requirements (remaining unimplemented features are
documented, too). Bruce Momjian and Tom Lane.
than owned by nobody. This results in cleaner display of language ACLs,
since the backend's aclchk.c uses the same convention. AFAICS there is
no practical difference but it's nice to avoid emitting SET SESSION
AUTHORIZATION; also this will make it easier to transition pg_dump to
some future version in which we may include an explicit ownership column
in pg_language. Per gripe from David Begley.
Map them to a single day, so '30 hours' is 'AM'.
Have to_char(interval) and to_char(time) treat "HH" and "HH12" as 12-hour
formats, rather than bypassing them and printing the full interval hours.
This is needed because to_char(time) is mapped to interval in this function.
Document the suggestion that intervals should use "HH24".
Allow "D" format specifiers for interval/time.
if we already have a stronger lock due to the index's table being the
update target table of the query. Same optimization I applied earlier
at the table level. There doesn't seem to be much interest in the more
radical idea of not locking indexes at all, so do what we can ...
relation if it's already been locked by execMain.c as either a result
relation or a FOR UPDATE/SHARE relation. This avoids an extra trip to
the shared lock manager state. Per my suggestion yesterday.
child plan nodes until we have acquired lock on the relation to scan.
The relative order of initialization of plan nodes isn't really important in
other cases, but it's critical here because one is supposed to lock a
relation before its indexes, not vice versa. The original coding was at
least vulnerable to deadlock against DROP INDEX, and perhaps worse things.
Also add a retry for Unixen returning EINTR, which hasn't been reported
as an issue but at least theoretically could be. Patch by Qingqing Zhou,
some minor adjustments by me.
#2075: consider an index redundant if any of its index conditions were already
used, rather than if all of them were. Also, make the selectivity comparison
a bit fuzzy, so that very small differences in estimated selectivities don't
skew the results.
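The fuzzy comparison can be sketched roughly as follows (the 1% slop factor is an illustrative assumption, not necessarily the committed constant):

#include <stdbool.h>

/* Treat one path's selectivity as clearly better only if it beats the
 * other by more than a small relative fuzz factor, so tiny estimation
 * differences don't skew the choice of indexes. */
static bool
selectivity_clearly_better(double new_sel, double old_sel)
{
    return new_sel < old_sel * 0.99;    /* assumed fuzz factor */
}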
it's worth probing the outer relation for emptiness before building the
hash table. To wit, if we're rescanning a join previously performed,
remember whether we found it nonempty the previous time, and don't bother
with the probe if it was nonempty. This buys back the performance lost
in examples like Mario Weilguni's.
one child or the other had a problem: they did not leave the node in a
state that ExecReScanHashJoin would understand. In particular it would
tend to fail to reset the child plans when needed. Per report from
Mario Weilguni.
ScalarArrayOpExpr when possible, that is, whenever there is an array type
for the values of the expression list. This completes the project I've
been working on to improve the speed of index searches with long IN lists,
as per discussion back in mid-October.
I did not force initdb, but until you do one you will see failures in the
"rules" regression test, because some of the standard system views use IN
and their compiled formats have changed.
they were broken-out AND or OR lists. The least grotty way to do this
seemed to be to set up a general mechanism for handling nodes as though
they were ANDs or ORs. There's no other immediate use for it, but perhaps
we might want to use the mechanism someday for things like BETWEEN
SYMMETRIC.
"ctid IN (list)" will still work after we convert IN to ScalarArrayOpExpr.
Make some minor efficiency improvements while at it, such as ensuring that
multiple TIDs are fetched in physical heap order. And fix EXPLAIN so that
it shows what's really going on for a TID scan.
when we first read the page, rather than checking them one at a time.
This allows us to take and release the buffer content lock just once
per page, instead of once per tuple. Since it's a shared lock the
contention penalty for holding the lock longer shouldn't be too bad.
We can safely do this only when using an MVCC snapshot; else the
assumption that visibility won't change over time is uncool. Therefore
there are now two code paths depending on the snapshot type. I also
made the same change in nodeBitmapHeapscan.c, where it can be done always
because we only support MVCC snapshots for bitmap scans anyway.
Also make some incidental cleanups in the APIs of these functions.
Per a suggestion from Qingqing Zhou.
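Schematically, the MVCC-snapshot path now does something like the following, with assumed names for the visibility wrapper and the output array (the real code lives in heapam.c and nodeBitmapHeapscan.c):

#include "postgres.h"
#include "access/htup.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "utils/tqual.h"

extern bool tuple_is_visible(HeapTuple tuple, Snapshot snapshot);  /* assumed wrapper */

/* Collect the offsets of all visible tuples on a page in one pass,
 * taking and releasing the buffer content lock only once. */
static int
collect_visible_offsets(Buffer buffer, Snapshot snapshot,
                        OffsetNumber *vistuples)
{
    Page        page = BufferGetPage(buffer);
    OffsetNumber offnum,
                maxoff;
    int         ntup = 0;

    LockBuffer(buffer, BUFFER_LOCK_SHARE);
    maxoff = PageGetMaxOffsetNumber(page);
    for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
    {
        ItemId      itemid = PageGetItemId(page, offnum);
        HeapTupleData tuple;

        if (!ItemIdIsUsed(itemid))
            continue;
        tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
        tuple.t_len = ItemIdGetLength(itemid);
        if (tuple_is_visible(&tuple, snapshot))
            vistuples[ntup++] = offnum;
    }
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);

    /* The scan then walks vistuples[] without retaking the lock. */
    return ntup;
}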
qualification when the underlying operator is indexable and useOr is true.
That is, indexkey op ANY (ARRAY[...]) is effectively translated into an
OR combination of one indexscan for each array element. This only works
for bitmap index scans, of course, since regular indexscans no longer
support OR'ing of scans. There are still some loose ends to clean up
before changing 'x IN (list)' to translate as a ScalarArrayOpExpr;
for instance predtest.c ought to be taught about it. But this gets the
basic functionality in place.
a TupleTableSlot: instead of calling ExecClearTuple, inline the needed
operations, so that we can avoid redundant steps. In particular, when
the old and new tuples are both on the same disk page, avoid releasing
and re-acquiring the buffer pin --- this saves work in both the bufmgr
and ResourceOwner modules. To make this improvement actually useful,
partially revert a change I made on 2004-04-21 that caused SeqNext
et al to call ExecClearTuple before ExecStoreTuple. The motivation
for that, to avoid grabbing the BufMgrLock separately for releasing
the old buffer and grabbing the new one, no longer applies. My
profiling says that this saves about 5% of the CPU time for an
all-in-memory seqscan.
generate their output tuple descriptors from their target lists (ie, using
ExecAssignResultTypeFromTL()). We long ago fixed things so that all node
types have minimally valid tlists, so there's no longer any good reason to
have two different ways of doing it. This change is needed to fix bug
reported by Hayden James: the fix of 2005-11-03 to emit the correct column
names after optimizing away a SubqueryScan node didn't work if the new
top-level plan node used ExecAssignResultTypeFromOuterPlan to generate its
tupdesc, since the next plan node down won't have the correct column labels.
a SubLink expression into a rule query. Pre-8.1 we essentially did this
unconditionally; 8.1 tries to do it only when needed, but was missing a
couple of cases. Per report from Kyle Bateman. Add some regression test
cases covering this area.
comment line was output as too long, and update typedefs for /lib
directory. Also fix case where identifiers were used as variable names
in the backend, but as typedefs in ecpg (favor the backend for
indenting).
Backpatch to 8.1.X.
process of dropping roles by dropping objects owned by them and privileges
granted to them, or giving the owned objects to someone else, through the
use of the data stored in the new pg_shdepend catalog.
Some refactoring of the GRANT/REVOKE code was needed, as well as ALTER OWNER
code. Further cleanup of code duplication in the GRANT code seems necessary.
Implemented by me after an idea from Tom Lane, who also provided various kinds
of implementation advice.
Regression tests pass. Some tests for the new functionality are also added,
as well as rudimentary documentation.
tuple in-place, but instead passes back an all-new tuple structure if
any changes are needed. This is a much cleaner and more robust solution
for the bug discovered by Alexey Beschiokov; accordingly, revert the
quick hack I installed yesterday.
With this change, HeapTupleData.t_datamcxt is no longer needed; will
remove it in a separate commit in HEAD only.
doing heap_insert or heap_update, wipe out any extracted fields in
the TupleTableSlot containing the tuple, because they might not be valid
anymore if tuptoaster.c changed the tuple. Safe because slot must be
in the materialized state, but mighty ugly --- find a better answer!
the array (for array_push) or higher-dimensional array (for array_cat)
rather than decrementing it as before. This avoids generating lower
bounds other than one for any array operation within the SQL spec. Per
recent discussion.
Interestingly, this seems to have been the original behavior, because
while updating the docs I noticed that a large fraction of relevant
examples were *wrong* for the old behavior and are now right. Is it
worth correcting this in the back-branch docs?
recursed twice on its first argument, leading to exponential time spent
on a deep nest of COALESCEs ... such as a deeply nested FULL JOIN would
produce. Per report from Matt Carter.
functionality, but I still need to make another pass looking at places
that incidentally use arrays (such as ACL manipulation) to make sure they
are null-safe. Contrib needs work too.
I have not changed the behaviors that are still under discussion about
array comparison and what to do with lower bounds.
that was added to localbuf.c in 8.1; therefore, applying it to a temp table
left corrupt lookup state in memory. The only case where this had a
significant chance of causing problems was an ON COMMIT DELETE ROWS temp
table; the other possible paths left bogus state that was unlikely to
be used again. Per report from Csaba Nagy.
names from being added to pgindent's typedef list. Their existence
caused weird formatting in the date/type files and in keywords.c.
Backpatch to 8.1.X.
columns, shifting comments to the right when more than 150 'else if'
clauses were used, and update typedefs for 8.1.X.
NetBSD patch updated, with documentation.
sense and rename to "outerjoin_delayed" to more clearly reflect what it
means). I had decided that it was redundant in 8.1, but the folly of this
is exposed by a bug report from Sebastian Böck. The place where it's
needed is to prevent orindxpath.c from cherry-picking arms of an outer-join
OR clause to form a relation restriction that isn't actually legal to push
down to the relation scan level. There may be some legal cases that this
forbids optimizing, but we'd need much closer analysis to determine it.
slot of the topmost plan node when a trigger returns a modified tuple.
These appear to be the only places where a plan node's caller did not
treat the result slot as read-only, which is an assumption that nodeUnique
makes as of 8.1. Fixes trigger-vs-DISTINCT bug reported by Frank van Vugt.
surprising results when it's some other numeric type. This doesn't solve
the generic problem of surprising implicit casts to text, but it's a
low-impact way of making sure this particular case behaves sanely.
Per gripe from Harald Fuchs and subsequent discussion.
anything but transaction-exiting commands (ROLLBACK etc). We already rejected
Parse and Execute in such cases, so there seems little point in allowing Bind.
This prevents at least an Assert failure, and probably worse things, since
there's a lot of infrastructure that doesn't work when not in a live
transaction. We can also simplify the Bind logic a bit by rejecting messages
with a nonzero number of parameters, instead of the former kluge to silently
substitute NULL for each parameter. Per bug #2033 from Joel Stevenson.
on every index page they read; in particular to catch the case of an
all-zero page, which PageHeaderIsValid allows to pass. It turns out
hash already had this idea, but it was just Assert()ing things rather
than doing a straight error check, and the Asserts were partially
redundant with PageHeaderIsValid anyway. Per recent failure example
from Jim Nasby. (gist still needs the same treatment.)
to assume that the string pointer passed to set_ps_display is good forever.
There's no need to anyway since ps_status.c itself saves the string, and
we already had an API (get_ps_display) to return it.
I believe this explains Jim Nasby's report of intermittent crashes in
elog.c when %i format code is in use in log_line_prefix.
While at it, repair a previously unnoticed problem: on some platforms such as
Darwin, the string returned by get_ps_display was blank-padded to the maximum
length, meaning that lock.c's attempt to append " waiting" to it never worked.
create circularity of role memberships. This is a minimum-impact fix
for the problem reported by Florian Pflug. I thought about removing
the superuser_arg test from is_member_of_role() altogether, as it seems
redundant for many of the callers --- but not all, and it's way too late
in the 8.1 cycle to be making large changes. Perhaps reconsider this
later.