section into PL/pgSQL and non-PL/pgSQL sections:
< o Fix PL/pgSQL RENAME to work on variables other than OLD/NEW
< o Allow function parameters to be passed by name,
< get_employee_salary(emp_id => 12345, tax_year => 2001)
< o Add Oracle-style packages
< o Add table function support to pltcl, plpython
< o Add capability to create and call PROCEDURES
< o Allow PL/pgSQL to handle %TYPE arrays, e.g. tab.col%TYPE[]
< o Allow function argument names to be statements from PL/PgSQL
< o Add MOVE to PL/pgSQL
< o Add support for polymorphic arguments and return types to
< languages other than PL/PgSQL
< o Add support for OUT and INOUT parameters to languages other
< than PL/PgSQL
< o Add single-step debugging of PL/PgSQL functions
< o Allow PL/PgSQL to support WITH HOLD cursors
< o Allow PL/PgSQL RETURN to return row or record functions
<
< http://archives.postgresql.org/pgsql-patches/2005-11/msg00045.php
> o PL/pgSQL
> o Fix RENAME to work on variables other than OLD/NEW
> o Allow function parameters to be passed by name,
> get_employee_salary(emp_id => 12345, tax_year => 2001)
> o Add Oracle-style packages
> o Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]
> o Allow listing of record column names, and access to
> record columns via variables, e.g. columns := r.(*),
> tval2 := r.(colname)
>
> http://archives.postgresql.org/pgsql-patches/2005-07/msg00458.php
> http://archives.postgresql.org/pgsql-patches/2006-05/msg00302.php
> http://archives.postgresql.org/pgsql-patches/2006-06/msg00031.php
>
> o Add MOVE
> o Add single-step debugging of functions
> o Add support for WITH HOLD cursors
>         o Allow RETURN to return row or record functions
>
> http://archives.postgresql.org/pgsql-patches/2005-11/msg00045.php
>
>
> o Other
> o Add table function support to pltcl, plpython
> o Add support for polymorphic arguments and return types to
> languages other than PL/PgSQL
> o Add capability to create and call PROCEDURES
> o Add support for OUT and INOUT parameters to languages other
> than PL/PgSQL
remove the infrastructure needed to enforce the limit, ie, the global
LRU list of cache entries. On small-to-middling databases this wins
because maintaining the LRU list is a waste of time. On large databases
this wins because it's better to keep more cache entries (we assume
such users can afford to use some more per-backend memory than was
contemplated in the Berkeley-era catcache design). This provides a
noticeable improvement in the speed of psql \d on a 10000-table
database, though it doesn't make it instantaneous.
While at it, use per-catcache settings for the number of hash buckets
per catcache, rather than the former one-size-fits-all value. It's a
bit silly to be using the same number of hash buckets for, eg, pg_am
and pg_attribute. The specific values I used might need some tuning,
but they seem to be in the right ballpark based on CATCACHE_STATS
results from the standard regression tests.
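
As an aside, the per-catcache sizing is easiest to picture as a small
descriptor table mapping each catalog to its own bucket count. The C
sketch below is purely illustrative -- the struct name and the bucket
values are invented for this note, not taken from the catcache source:

    /* Illustrative sketch, not PostgreSQL source: each cache gets its
     * own bucket count instead of one global value. Names and numbers
     * are invented for illustration. */
    #include <stdio.h>

    typedef struct CacheSizingSketch
    {
        const char *catalog;    /* system catalog the cache indexes */
        int         nbuckets;   /* hash buckets, sized per catalog */
    } CacheSizingSketch;

    static const CacheSizingSketch cache_sizes[] = {
        {"pg_am",        4},    /* only a few access methods exist */
        {"pg_operator",  256},  /* mid-sized catalog */
        {"pg_attribute", 2048}, /* grows with every table column */
    };

    int
    main(void)
    {
        for (size_t i = 0; i < sizeof(cache_sizes) / sizeof(cache_sizes[0]); i++)
            printf("%-14s %5d buckets\n",
                   cache_sizes[i].catalog, cache_sizes[i].nbuckets);
        return 0;
    }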
<
< o Add new version of PQescapeString() that doesn't double backslashes
< that are part of a client-only multibyte sequence
<
< Single-quote is not a valid byte in any supported client-only
< encoding. This requires using mblen() to determine if the
< backslash is inside or outside a multi-byte sequence.
<
< o Add new version of PQescapeString() that doesn't double
< backslashes when standard_conforming_strings is true and
< non-E strings are used
< Right now only one encoding is allowed per database.
> Right now only one encoding is allowed per database. [locale]
> * Add CREATE COLLATE? [locale]
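
To make the mblen() point in the PQescapeString items above concrete:
in a client-only encoding such as SJIS, the second byte of a multibyte
character can be 0x5C, the same byte as ASCII backslash, so a
byte-at-a-time escaper doubles bytes that are not backslashes at all.
A rough sketch of the encoding-aware scan, using libpq's real
PQmblen() (the helper name and its simplified escaping policy are
invented here, not libpq's):

    /* Sketch only -- not libpq source. PQmblen() is a real libpq
     * function; escape_aware() and its policy are invented here.
     * 'encoding' would come from PQclientEncoding(conn); 'out' must
     * have room for 2 * strlen(in) + 1 bytes. */
    #include <string.h>
    #include <libpq-fe.h>

    static void
    escape_aware(char *out, const char *in, int encoding)
    {
        while (*in)
        {
            int len = PQmblen(in, encoding);   /* bytes in this character */

            /* Double quote/backslash only when it is a whole single-byte
             * character; a 0x5C inside a multibyte sequence is copied
             * through untouched. */
            if (len == 1 && (*in == '\'' || *in == '\\'))
                *out++ = *in;
            memcpy(out, in, len);
            out += len;
            in += len;
        }
        *out = '\0';
    }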
the lower-level large object functions fails, it will have already set
a suitable error message --- probably something from the backend ---
and it is not useful to overwrite that with a generic 'error while
reading large object' message. So remove redundant messages.
places --- that risks corrupting data structures, losing sync with the
backend, etc. We now longjmp only from calls to readline, fgets, and
fread, which we assume are coded to protect themselves against interrupts
at undesirable times. This requires adding explicit tests for
cancel_pressed in long-running loops, but on the whole it's far cleaner.
Martijn van Oosterhout and Tom Lane.
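
The flag-plus-explicit-test pattern the message describes looks
roughly like the standalone sketch below. This is not psql's actual
code, though cancel_pressed mirrors the variable name used in the
commit:

    /* Standalone sketch of the pattern, not psql source. The SIGINT
     * handler only sets a flag; long-running loops test it at points
     * where aborting is known to be safe. */
    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t cancel_pressed = 0;

    static void
    handle_sigint(int signo)
    {
        (void) signo;
        cancel_pressed = 1;     /* no longjmp from arbitrary code */
    }

    int
    main(void)
    {
        char line[1024];

        signal(SIGINT, handle_sigint);

        while (fgets(line, sizeof line, stdin) != NULL)
        {
            if (cancel_pressed)
            {
                fprintf(stderr, "canceled\n");
                break;
            }
            fputs(line, stdout);
        }
        return 0;
    }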
function call. Previously, there may have been no CHECK_FOR_INTERRUPTS
at all in the fastpath code path, making it impossible to cancel an
operation such as \lo_import externally. This addition doesn't ensure
you can cancel, since your SIGINT may arrive while the backend is idle
waiting for the client, but it gives the largest window we can easily
provide. Noted while experimenting with new control-C code for psql.
already-aborted transaction block. GetSnapshotData throws an Assert if
not in a valid transaction; hence we mustn't attempt to set a snapshot
for the function until after checking for aborted transaction. This is
harmless AFAICT if Asserts aren't enabled (GetSnapshotData will compute
a bogus snapshot, but it doesn't matter since HandleFunctionRequest will
throw an error shortly anyway). Hence, not a major bug.
Along the way, add some ability to log fastpath calls when statement
logging is turned on. This could probably stand to be improved further,
but not logging anything is clearly undesirable.
Backpatched as far as 8.0; bug doesn't exist before that.
< pg_get_tabledef(), pg_get_domaindef(), pg_get_functiondef(), and
< make use of them in pg_dump
> pg_get_tabledef(), pg_get_domaindef(), pg_get_functiondef()
< pg_get_tabledef(), pg_get_domaindef(), pg_get_functiondef()
> pg_get_tabledef(), pg_get_domaindef(), pg_get_functiondef(), and
> make use of them in pg_dump
was invoking obj_description() for each large object chunk, instead of once
per large object. This code is new as of 8.1, which may explain why the
problem hadn't been noticed already.