SQLSTATE error codes required by SQL99 (invalid format, datetime field
overflow, interval field overflow, invalid time zone displacement value).
Also emit a HINT about DateStyle in cases where it seems appropriate.
Per recent gripes.
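For illustration, the style of report this enables looks roughly like the
following (a sketch, not an exact call site from the patch; the hint wording
is approximate):

    ereport(ERROR,
            (errcode(ERRCODE_DATETIME_FIELD_OVERFLOW),
             errmsg("date/time field value out of range: \"%s\"", str),
             errhint("Perhaps you need a different \"datestyle\" setting.")));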
lumping them into ERRCODE_UNDEFINED_OBJECT/ERRCODE_DUPLICATE_OBJECT.
This seems reasonable since 'object' was meant to refer to 'object in the
database' and a file is outside the database. Per request from Dave
Cramer.
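As a sketch of the distinction, a file-level failure can now carry a
file-specific code (ERRCODE_UNDEFINED_FILE is the kind of code meant here;
the call below is illustrative, not a particular site from the patch):

    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_FILE),
             errmsg("could not open file \"%s\": %m", filename)));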
now all that is tested is Rod Taylor's recent addition to allow
this syntax:
UPDATE ... SET <col> = DEFAULT;
If anyone else would like to add more UPDATE tests, go ahead --
I just wanted to write a test for the above functionality, and
couldn't see an existing test that it would be appropriate
to add to.
Neil Conway
max_connections at initdb time. Get rid of DEF_NBUFFERS and DEF_MAXBACKENDS
macros, which aren't doing anything useful anymore, and put more likely
defaults into postgresql.conf.sample.
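Illustratively, the sample file now carries concrete settings along these
lines (the numbers below are placeholders; initdb writes whatever values it
actually settled on into the cluster's postgresql.conf):

    shared_buffers = 1000        # min 16, 8KB each
    max_connections = 100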
I think this should fix the problem, but since I don't have a reproducible test
case, I can't be sure. This problem was reported by Kim Ho of Red Hat, who will
test this fix. This also includes a test case for the original functionality.
Modified Files:
    jdbc/org/postgresql/jdbc1/AbstractJdbc1Statement.java
    jdbc/org/postgresql/test/jdbc2/ResultSetTest.java
perform a timestamp-to-date coercion. Instead, both routines share a
subroutine that delivers the parsing result as a struct tm. This avoids
problems with timezone dependency of to_date's result, and should be
at least marginally faster too.
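A sketch of the new shape (the shared routine's name and signature here are
assumptions, not necessarily what the patch calls it):

    struct tm tm;
    fsec_t    fsec;

    /* Shared step: parse the input text against the format picture into
     * broken-down time; no timezone rotation is involved. */
    do_to_timestamp(date_txt, fmt, &tm, &fsec);

    /* to_date then builds its result directly from the parsed fields,
     * so the session timezone cannot affect it. */
    result = date2j(tm.tm_year, tm.tm_mon, tm.tm_mday) - POSTGRES_EPOCH_JDATE;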
- adds a finalizer method to AbstractJdbc1Statement to clean up when poor
  user code fails to close the statement object
- fixes the ant build file to correctly detect dependencies across
  jdbc1/jdbc2/jdbc3
- fixes a couple of server prepared statement bugs and adds a regression
  test for them
Applied patch from Kim Ho:
- adds support for get/setMaxFieldSize().
Also fixed build.xml to provide a better error message in the event that an
older version of the driver exists in the classpath when trying to build.
handling many-way scans: instead of re-evaluating all prior indexscan
quals to see if a tuple has been fetched more than once, use a hash table
indexed by tuple CTID. But fall back to the old way if the hash table
grows to exceed SortMem.
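In outline, per returned tuple (simplified; hash_search() is the real
dynahash entry point, while the counter and fallback helper named below are
placeholders):

    bool found;

    /* Remember each tuple by its CTID; a hit means we already returned it. */
    (void) hash_search(dup_table, (void *) &tup->t_self, HASH_ENTER, &found);
    if (found)
        continue;               /* duplicate fetched by an earlier scan */

    /* If the table has outgrown SortMem, abandon hashing and revert to
     * re-evaluating all prior indexscans' quals for each tuple. */
    if (++nhashed > sortmem_entry_limit)
        fall_back_to_qual_recheck();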
as well as the hash function (formerly the comparison function was hardwired
as memcmp()). This makes it possible to eliminate the special-purpose
hashtable management code in execGrouping.c in favor of using dynahash to
manage tuple hashtables; which is a win because dynahash knows how to expand
a hashtable when the original size estimate was too small, whereas the
special-purpose code was too stupid to do that. (See recent gripe from
Stephan Szabo about poor performance when hash table size estimate is way
off.) Free side benefit: when using string_hash, the default comparison
function is now strncmp() instead of memcmp(). This should eliminate some
part of the overhead associated with larger NAMEDATALEN values.
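A sketch of what a caller can now request (the hash/match pair shown is in
the style of execGrouping.c's tuple hashtables; key and entry types are
placeholders):

    HASHCTL hash_ctl;

    MemSet(&hash_ctl, 0, sizeof(hash_ctl));
    hash_ctl.keysize = sizeof(MyKey);         /* placeholder key type */
    hash_ctl.entrysize = sizeof(MyEntry);     /* placeholder entry type */
    hash_ctl.hash = TupleHashTableHash;       /* caller-supplied hash */
    hash_ctl.match = TupleHashTableMatch;     /* NEW: caller-supplied compare */
    hashtable = hash_create("TupleHashTable", nbuckets, &hash_ctl,
                            HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);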
the trigger is attached to in the hashkey. This ensures that we will
create separate compiled trees for each table the trigger is used with,
avoiding possible datatype-mismatch problems if the tables have different
rowtypes. This is essentially the same bug recently identified in plpython
--- though plpgsql doesn't seem as prone to crash when the rowtype changes
underneath it. But failing robustly is no substitute for just working.
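Schematically, the compiled-function lookup key gains a field like this
(field names here are an approximation of the real hashkey struct):

    typedef struct PLpgSQL_func_hashkey
    {
        Oid     funcOid;        /* the trigger function itself */
        Oid     trigrelOid;     /* OID of the table the trigger is attached
                                 * to; InvalidOid for non-trigger calls */
        /* ... other fields identifying the call context ... */
    } PLpgSQL_func_hashkey;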