* %Make row-wise comparisons work per SQL spec
Right now, '(a, b) < (1, 2)' is processed as 'a < 1 AND b < 2', but
the SQL standard requires row-wise (lexicographic) comparison, so the
proper expansion is '(a < 1) OR (a = 1 AND b < 2)'.
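
A small illustration of the difference, using a hypothetical table
t(a, b) that contains the row (0, 5):

    SELECT * FROM t WHERE (a, b) < (1, 2);
    -- per the SQL spec this must behave like:
    SELECT * FROM t WHERE a < 1 OR (a = 1 AND b < 2);  -- (0, 5) matches
    -- whereas today it behaves like:
    SELECT * FROM t WHERE a < 1 AND b < 2;             -- (0, 5) does not match
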
< * Allow star join optimizations
<
< While our bitmap scan allows multiple indexes to be joined to get
< to heap rows, a star joins allows multiple dimension _tables_ to
< be joined to index into a larger main fact table. The join is
< usually performed by either creating a cartesian product of all
< the dimmension tables and doing a single join on that product or
< using subselects to create bitmaps of each dimmension table match
< and merge the bitmaps to perform the join on the fact table. Some
< of these algorithms might be patented.
< * Flush cached query plans when the dependent objects change or
< when the cardinality of parameters changes dramatically
> * Flush cached query plans when the dependent objects change,
> when the cardinality of parameters changes dramatically, or
> when new ANALYZE statistics are available
Drake:
< and merge the bitmaps to perform the join on the fact table.
> and merge the bitmaps to perform the join on the fact table. Some
> of these algorithms might be patented.
* Allow star join optimizations
While our bitmap scan allows multiple indexes to be joined to get
to heap rows, a star join allows multiple dimension _tables_ to
be joined to index into a larger main fact table. The join is
usually performed either by creating a cartesian product of all
the dimension tables and doing a single join on that product, or
by using subselects to create bitmaps of each dimension table's
matches and merging the bitmaps to perform the join on the fact table.
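
As an illustration of the second (bitmap) approach, using hypothetical
fact and dimension tables; the per-dimension bitmaps would be built and
ANDed inside the executor, but the query shape is roughly:

    SELECT sum(s.amount)
    FROM sales s                      -- fact table
    WHERE s.customer_id IN
            (SELECT id FROM customers WHERE region = 'EU')
      AND s.product_id IN
            (SELECT id FROM products WHERE category = 'books');
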
< * Flush cached query plans when the dependent objects change
> * Flush cached query plans when the dependent objects change or
> when the cardinality of parameters changes dramatically
< * %Allow pooled connections to list all prepared queries
> * %Allow pooled connections to list all prepared statements
28c28
< the queries prepared in the current session.
> the statements prepared in the current session.
143c143
< o Allow a warm standby system to also allow read-only queries
> o Allow a warm standby system to also allow read-only statements
404c404
< * Add GUC to issue notice about queries that use unjoined tables
> * Add GUC to issue notice about statements that use unjoined tables
490c490
< Another idea would be to allow actual SELECT queries in a COPY.
> Another idea would be to allow actual SELECT statements in a COPY.
554c554
< o Allow function argument names to be queries from PL/PgSQL
> o Allow function argument names to be queried from PL/PgSQL
591c591
< o Improve psql's handling of multi-line queries
> o Improve psql's handling of multi-line statements
< Currently, while \e saves a single query as one entry, interactive
< queries are saved one line at a time. Ideally all queries
> Currently, while \e saves a single statement as one entry, interactive
> statements are saved one line at a time. Ideally all statements
665c665
< o Allow query results to be automatically batched to the client
> o Allow statement results to be automatically batched to the client
667c667
< Currently, all query results are transfered to the libpq
> Currently, all statement results are transferred to the libpq
672c672
< One complexity is that a query like SELECT 1/col could error
> One complexity is that a statement like SELECT 1/col could error
739c739
< * Allow queries across databases or servers with transaction
> * Allow statements across databases or servers with transaction
< inheritance, allow it to work for UPDATE and DELETE queries, and allow
< it to be used for all queries with little performance impact
> inheritance, allow it to work for UPDATE and DELETE statements, and allow
> it to be used for all statements with little performance impact
876c876
< * Consider automatic caching of queries at various levels:
> * Consider automatic caching of statements at various levels:
947c947
< a single session using multiple threads to execute a query faster.
> a single session using multiple threads to execute a statement faster.
1025c1025
< * Log queries where the optimizer row estimates were dramatically
> * Log statements where the optimizer row estimates were dramatically
1146c1146
< of result sets using new query protocol
> of result sets using the extended query protocol
< Win32 API, and we have to make sure MinGW handles it.
> Win32 API, and we have to make sure MinGW handles it. Another
> option is to wait for the MinGW project to fix it, or use the
> code from the LibGW32C project as a guide.
> o Add long file support for binary pg_dump output
>
> While Win32 supports 64-bit files, the MinGW API does not,
> meaning we have to build an fseeko replacement on top of the
> Win32 API, and we have to make sure MinGW handles it.
< be cleared when a heap tuple is expired. Another idea is to maintain
< a bitmap of heap pages where all rows are visible to all backends,
< and allow index lookups to reference that bitmap to avoid heap
< lookups, perhaps the same bitmap we might add someday to determine
< which heap pages need vacuuming.
> be cleared when a heap tuple is expired.
>
> Another idea is to maintain a bitmap of heap pages where all rows
> are visible to all backends, and allow index lookups to reference
> that bitmap to avoid heap lookups, perhaps the same bitmap we might
> add someday to determine which heap pages need vacuuming. Frequently
> accessed bitmaps would have to be stored in shared memory. One 8k
> page of bitmaps could track 512MB of heap pages.
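
(Back-of-the-envelope check, assuming the default 8 kB block size: an
8 kB bitmap page holds 8192 * 8 = 65,536 bits, and at one bit per 8 kB
heap page that covers 65,536 * 8 kB = 512 MB of heap.)
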
< the heap. One way to allow this is to set a bit to index tuples
> the heap. One way to allow this is to set a bit on index tuples
< be cleared when a heap tuple is expired.
<
> be cleared when a heap tuple is expired. Another idea is to maintain
> a bitmap of heap pages where all rows are visible to all backends,
> and allow index lookups to reference that bitmap to avoid heap
> lookups, perhaps the same bitmap we might add someday to determine
> which heap pages need vacuuming.
< * Add MERGE command that does UPDATE/DELETE, or on failure, INSERT (rules,
< triggers?)
> * Add SQL-standard MERGE command, typically used to merge two tables
>
> This is similar to UPDATE, then for unmatched rows, INSERT.
> Whether concurrent access allows modifications which could cause
> row loss is implementation-defined (see the sketch below).
>
> * Add REPLACE or UPSERT command that does UPDATE, or on failure, INSERT
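
For the MERGE item above, a rough sketch of the SQL-standard syntax
(table and column names are only illustrative; the concurrency
behavior is exactly the open question noted there):

    MERGE INTO account a
    USING new_balance n ON (a.id = n.id)
    WHEN MATCHED THEN
        UPDATE SET balance = n.balance
    WHEN NOT MATCHED THEN
        INSERT (id, balance) VALUES (n.id, n.balance);

A REPLACE/UPSERT command would cover the common single-table case,
updating the row if it exists and otherwise inserting it, without
needing a separate source table.
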
< #A hyphen, "-", marks changes that will appear in the upcoming 8.1 release.#
> #A hyphen, "-", marks changes that will appear in the upcoming 8.2 release.#
< so duplicate checking can be easily performed.
> so duplicate checking can be easily performed. It is possible to
> do it without a unique index if we require the user to LOCK the table
> before the MERGE.
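
A minimal sketch of that locking approach, assuming a hypothetical
table target(key, val) with no unique index (the lock mode shown is
just one that blocks concurrent writers):

    BEGIN;
    LOCK TABLE target IN SHARE ROW EXCLUSIVE MODE;
    UPDATE target SET val = 'new' WHERE key = 1;
    INSERT INTO target (key, val)
        SELECT 1, 'new'
        WHERE NOT EXISTS (SELECT 1 FROM target WHERE key = 1);
    COMMIT;
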
< * Add a libpq function to support Parse/DescribeStatement capability
< * Add PQescapeIdentifier() to libpq
< * Prevent PQfnumber() from lowercasing unquoted the column name
<
< PQfnumber() should never have been doing lowercasing, but historically
< it has so we need a way to prevent it
<
648a642,661
>
>
> libpq
>
> o Add a function to support Parse/DescribeStatement capability
> o Add PQescapeIdentifier()
> o Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but
> historically it has, so we need a way to prevent it
>
> o Allow query results to be automatically batched to the client
>
> Currently, all query results are transferred to the libpq
> client before libpq makes the results available to the
> application. This feature would allow the application to make
> use of the first result rows while the rest are transferred, or
> held on the server waiting for them to be requested by libpq.
> One complexity is that a query like SELECT 1/col could error
> out mid-way through the result set.
< o Add a GUC variable to allow output of interval values in ISO8601
< format
212a211,223
> o Add a GUC variable to allow output of interval values in ISO8601
> format
> o Improve timestamptz subtraction to be DST-aware
>
> Currently, subtracting one date from another that crosses a
> daylight savings time adjustment can return '1 day 1 hour', but
> adding that back to the first date returns a time one hour in
> the future. This happens because '25 hours' is normalized to
> '1 day 1 hour', and '1 day' means the same clock time on the next
> day, even when a daylight savings adjustment intervenes.
>
> o Fix interval display to support values exceeding 2^31 hours
> o Add overflow checking to timestamp and interval arithmetic
>
> o Add auto-expanded mode so expanded output is used if the row
> length is wider than the screen width.
>
> Consider using auto-expanded mode for backslash commands like \df+.
< * Prevent libpq's PQfnumber() from lowercasing the column name
<
< One idea is to lowercase all identifiers except those that are
< surrounded by quotes.
<
> * Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but historically
> it has, so we need a way to prevent it
>