> o Prevent parent tables from altering or dropping constraints
> like CHECK that are inherited by child tables
>
> Dropping constraints should only be possible with CASCADE.
>
< * %Disallow changing sequence characteristics like INCREMENT for SERIAL columns
> * %Disallow ALTER SEQUENCE changes for SERIAL sequences because pg_dump
> does not dump the changes
> * Improve port/qsort() to handle sorts with 50% unique and 50% duplicate
> values [qsort]
>
> This involves choosing better pivot points for the quicksort.
- "Add ON COMMIT capability to CREATE TABLE AS ... SELECT" is done
- "Allow PREPARE to automatically determine parameter types" is done
- "Clean up compiler warnings (especially with gcc version 4)" is done:
AFAIK there are no remaining gcc4 compiler warnings to be fixed.
- Creating rules to do view updates is *not* an easy TODO item
>
> o Allow pg_hba.conf to specify host names along with IP addresses
>
> Host name lookup could occur when the postmaster reads the
> pg_hba.conf file, or when the backend starts. Another
> solution would be to reverse lookup the connection IP and
> check that hostname against the host names in pg_hba.conf.
> We could also then check that the host name maps to the IP
> address.
< * Allow control over which tables are WAL-logged [walcontrol]
> * Allow WAL logging to be turned off for a table, but the table
> might be dropped or truncated during crash recovery [walcontrol]
< commit. To do this, only a single writer can modify the table, and
< writes must happen only on new pages. Readers can continue accessing
< the table. This would affect COPY, and perhaps INSERT/UPDATE too.
< Another option is to avoid transaction logging entirely and truncate
< or drop the table on crash recovery. These should be implemented
< using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP | TRUNCATE |
< STABLE | DEFAULT ]. Tables using non-default logging should not use
< referential integrity with default-logging tables, and tables using
< stable logging probably can not have indexes. One complexity is
< the handling of indexes on TOAST tables.
> commit. This should be implemented using ALTER TABLE, e.g. ALTER
> TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ]. Tables using
> non-default logging should not use referential integrity with
> default-logging tables. A table without dirty buffers during a
> crash could perhaps avoid the drop/truncate.
>
> * Allow WAL logging to be turned off for a table, but the table would
> avoid being truncated/dropped [walcontrol]
>
> To do this, only a single writer can modify the table, and writes
> must happen only on new pages so the new pages can be removed during
> crash recovery. Readers can continue accessing the table. Such
> tables probably cannot have indexes. One complexity is the handling
> of indexes on TOAST tables.
< * Allow control over which tables are WAL-logged
> * Allow control over which tables are WAL-logged [walcontrol]
1038c1038,1039
< stable logging probably can not have indexes. [walcontrol]
> stable logging probably can not have indexes. One complexity is
> the handling of indexes on TOAST tables.
> * Allow statistics collector information to be pulled from the collector
> process directly, rather than requiring the collector to write a
> filesystem file twice a second?
>
> o Prevent tab completion of SET TRANSACTION from querying the
> database and therefore preventing the transaction isolation
> level from being set.
>
> Currently, SET <tab> causes a database lookup to check all
> supported session variables. This query causes problems
> because setting the transaction isolation level must be the
> first statement of a transaction.
< * %Prevent INET cast to CIDR if the unmasked bits are not zero, or
< zero the bits
< * %Prevent INET cast to CIDR from dropping netmask, SELECT '1.1.1.1'::inet::cidr
> * -Zero unmasked bits in conversion from INET cast to CIDR
> * -Prevent INET cast to CIDR from dropping netmask, SELECT '1.1.1.1'::inet::cidr
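For illustration only (not part of the diff), the intended cast behavior
would look roughly like this:

    SELECT '192.168.1.77/24'::inet::cidr;
    -- expected once fixed: 192.168.1.0/24
    -- (unmasked host bits zeroed, netmask kept rather than dropped)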
< o Allow an alias to be provided for the target table in
< UPDATE/DELETE
<
< This is not SQL-spec but many DBMSs allow it.
<
> o -Allow an alias to be provided for the target table in
> UPDATE/DELETE (Neil)
< STABLE | DEFAULT ]. [wallog]
> STABLE | DEFAULT ]. Tables using non-default logging should not use
> referential integrity with default-logging tables, and tables using
> stable logging probably can not have indexes. [wallog]
< the table. Another option is to avoid transaction logging entirely
< and truncate or drop the table on crash recovery. These should be
< implemented using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP |
< TRUNCATE | STABLE | DEFAULT ]. [wallog]
> the table. This would affect COPY, and perhaps INSERT/UPDATE too.
> Another option is to avoid transaction logging entirely and truncate
> or drop the table on crash recovery. These should be implemented
> using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP | TRUNCATE |
> STABLE | DEFAULT ]. [wallog]
>
> * Allow control over which tables are WAL-logged
>
> Allow tables to bypass WAL writes and just fsync() dirty pages on
> commit. To do this, only a single writer can modify the table, and
> writes must happen only on new pages. Readers can continue accessing
> the table. Another option is to avoid transaction logging entirely
> and truncate or drop the table on crash recovery. These should be
> implemented using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP |
> TRUNCATE | STABLE | DEFAULT ]. [wallog]
* %Make row-wise comparisons work per SQL spec
Right now, '(a, b) < (1, 2)' is processed as 'a < 1 and b < 2', but
the SQL standard requires it to be processed as a column-by-column
comparison, so the proper comparison is '(a < 1) OR (a = 1 AND b < 2)'.
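For illustration (table t and columns a, b are hypothetical), the two
interpretations differ as follows:

    -- current (incorrect) processing of (a, b) < (1, 2):
    SELECT * FROM t WHERE a < 1 AND b < 2;
    -- SQL-spec column-by-column comparison:
    SELECT * FROM t WHERE a < 1 OR (a = 1 AND b < 2);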
< * Allow star join optimizations
<
< While our bitmap scan allows multiple indexes to be joined to get
< to heap rows, a star join allows multiple dimension _tables_ to
< be joined to index into a larger main fact table. The join is
< usually performed by either creating a cartesian product of all
< the dimension tables and doing a single join on that product or
< using subselects to create bitmaps of each dimension table match
< and merge the bitmaps to perform the join on the fact table. Some
< of these algorithms might be patented.
< * Flush cached query plans when the dependent objects change or
< when the cardinality of parameters changes dramatically
> * Flush cached query plans when the dependent objects change,
> when the cardinality of parameters changes dramatically, or
> when new ANALYZE statistics are available
Drake:
< and merge the bitmaps to perform the join on the fact table.
> and merge the bitmaps to perform the join on the fact table. Some
> of these algorithms might be patented.
* Allow star join optimizations
While our bitmap scan allows multiple indexes to be joined to get
to heap rows, a star join allows multiple dimension _tables_ to
be joined to index into a larger main fact table. The join is
usually performed by either creating a cartesian product of all
the dimension tables and doing a single join on that product or
using subselects to create bitmaps of each dimension table match
and merge the bitmaps to perform the join on the fact table.
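A hypothetical star-schema query of the shape described above (fact and
dimension table names are invented):

    SELECT d1.region, d2.year, sum(f.amount)
    FROM fact f
         JOIN dim_region d1 ON f.region_id = d1.id
         JOIN dim_date   d2 ON f.date_id   = d2.id
    WHERE d1.region = 'EMEA' AND d2.year = 2005
    GROUP BY d1.region, d2.year;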
< * Flush cached query plans when the dependent objects change
> * Flush cached query plans when the dependent objects change or
> when the cardinality of parameters changes dramatically
< * %Allow pooled connections to list all prepared queries
> * %Allow pooled connections to list all prepared statements
28c28
< the queries prepared in the current session.
> the statements prepared in the current session.
143c143
< o Allow a warm standby system to also allow read-only queries
> o Allow a warm standby system to also allow read-only statements
404c404
< * Add GUC to issue notice about queries that use unjoined tables
> * Add GUC to issue notice about statements that use unjoined tables
490c490
< Another idea would be to allow actual SELECT queries in a COPY.
> Another idea would be to allow actual SELECT statements in a COPY.
554c554
< o Allow function argument names to be queries from PL/PgSQL
> o Allow function argument names to be statements from PL/PgSQL
591c591
< o Improve psql's handling of multi-line queries
> o Improve psql's handling of multi-line statements
< Currently, while \e saves a single query as one entry, interactive
< queries are saved one line at a time. Ideally all queries
> Currently, while \e saves a single statement as one entry, interactive
> statements are saved one line at a time. Ideally all statements
665c665
< o Allow query results to be automatically batched to the client
> o Allow statement results to be automatically batched to the client
667c667
< Currently, all query results are transfered to the libpq
> Currently, all statement results are transferred to the libpq
672c672
< One complexity is that a query like SELECT 1/col could error
> One complexity is that a statement like SELECT 1/col could error
739c739
< * Allow queries across databases or servers with transaction
> * Allow statements across databases or servers with transaction
< inheritance, allow it to work for UPDATE and DELETE queries, and allow
< it to be used for all queries with little performance impact
> inheritance, allow it to work for UPDATE and DELETE statements, and allow
> it to be used for all statements with little performance impact
876c876
< * Consider automatic caching of queries at various levels:
> * Consider automatic caching of statements at various levels:
947c947
< a single session using multiple threads to execute a query faster.
> a single session using multiple threads to execute a statement faster.
1025c1025
< * Log queries where the optimizer row estimates were dramatically
> * Log statements where the optimizer row estimates were dramatically
1146c1146
< of result sets using new query protocol
> of result sets using new statement protocol
< Win32 API, and we have to make sure MinGW handles it.
> Win32 API, and we have to make sure MinGW handles it. Another
> option is to wait for the MinGW project to fix it, or use the
> code from the LibGW32C project as a guide.
> o Add long file support for binary pg_dump output
>
> While Win32 supports 64-bit files, the MinGW API does not,
> meaning we have to build an fseeko replacement on top of the
> Win32 API, and we have to make sure MinGW handles it.
< be cleared when a heap tuple is expired. Another idea is to maintain
< a bitmap of heap pages where all rows are visible to all backends,
< and allow index lookups to reference that bitmap to avoid heap
< lookups, perhaps the same bitmap we might add someday to determine
< which heap pages need vacuuming.
> be cleared when a heap tuple is expired.
>
> Another idea is to maintain a bitmap of heap pages where all rows
> are visible to all backends, and allow index lookups to reference
> that bitmap to avoid heap lookups, perhaps the same bitmap we might
> add someday to determine which heap pages need vacuuming. Frequently
> accessed bitmaps would have to be stored in shared memory. One 8k
> page of bitmaps could track 512MB of heap pages.
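Checking the arithmetic behind the 512MB figure, assuming the default 8K
block size:

    8192 bytes * 8 bits/byte = 65,536 bits per bitmap page
    65,536 heap pages * 8 kB/page = 512 MB of heap tracked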
< the heap. One way to allow this is to set a bit to index tuples
> the heap. One way to allow this is to set a bit on index tuples
< be cleared when a heap tuple is expired.
<
> be cleared when a heap tuple is expired. Another idea is to maintain
> a bitmap of heap pages where all rows are visible to all backends,
> and allow index lookups to reference that bitmap to avoid heap
> lookups, perhaps the same bitmap we might add someday to determine
> which heap pages need vacuuming.
< * Add MERGE command that does UPDATE/DELETE, or on failure, INSERT (rules,
< triggers?)
> * Add SQL-standard MERGE command, typically used to merge two tables
>
> This is similar to UPDATE, then for unmatched rows, INSERT.
> Whether concurrent access allows modifications which could cause
> row loss is implementation-dependent.
>
> * Add REPLACE or UPSERT command that does UPDATE, or on failure, INSERT
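A sketch of the intended behavior, using hypothetical table and column
names; the exact syntax is whatever the standard and the eventual
implementation settle on:

    -- SQL-standard MERGE: update matching rows, insert the rest
    MERGE INTO account a
    USING new_balances n ON (a.id = n.id)
    WHEN MATCHED THEN UPDATE SET balance = n.balance
    WHEN NOT MATCHED THEN INSERT (id, balance) VALUES (n.id, n.balance);

    -- REPLACE/UPSERT would automate the manual two-step:
    UPDATE account SET balance = 100 WHERE id = 42;
    -- and, if no row was updated:
    INSERT INTO account (id, balance) VALUES (42, 100);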
< #A hyphen, "-", marks changes that will appear in the upcoming 8.1 release.#
> #A hyphen, "-", marks changes that will appear in the upcoming 8.2 release.#
< so duplicate checking can be easily performed.
> so duplicate checking can be easily performed. It is possible to
> do it without a unique index if we require the user to LOCK the table
> before the MERGE.
< * Add a libpq function to support Parse/DescribeStatement capability
< * Add PQescapeIdentifier() to libpq
< * Prevent PQfnumber() from lowercasing unquoted the column name
<
< PQfnumber() should never have been doing lowercasing, but historically
< it has so we need a way to prevent it
<
648a642,661
>
>
> libpq
>
> o Add a function to support Parse/DescribeStatement capability
> o Add PQescapeIdentifier()
> o Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but
> historically it has, so we need a way to prevent it
>
> o Allow query results to be automatically batched to the client
>
> Currently, all query results are transferred to the libpq
> client before libpq makes the results available to the
> application. This feature would allow the application to make
> use of the first result rows while the rest are transfered, or
> held on the server waiting for them to be requested by libpq.
> One complexity is that a query like SELECT 1/col could error
> out mid-way through the result set.
< o Add a GUC variable to allow output of interval values in ISO8601
< format
212a211,223
> o Add a GUC variable to allow output of interval values in ISO8601
> format
> o Improve timestamptz subtraction to be DST-aware
>
> Currently, subtracting one date from another that crosses a
> daylight savings time adjustment can return '1 day 1 hour', but
> adding that back to the first date returns a time one hour in
> the future. This is caused by the adjustment of '25 hours' to
> '1 day 1 hour', and '1 day' is the same time the next day, even
> if daylight savings adjustments are involved.
>
> o Fix interval display to support values exceeding 2^31 hours
> o Add overflow checking to timestamp and interval arithmetic
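A worked example of the subtraction problem, assuming timezone
'America/New_York' and the 2005 fall-back transition on October 30
(exact output depends on server version and settings):

    SET timezone = 'America/New_York';
    SELECT '2005-10-31 00:00'::timestamptz - '2005-10-30 00:00'::timestamptz;
    -- 25 elapsed hours, displayed as '1 day 01:00:00'
    SELECT '2005-10-30 00:00'::timestamptz + '1 day 1 hour'::interval;
    -- 2005-10-31 01:00, one hour past the original endpoint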
>
> o Add auto-expanded mode so expanded output is used if the row
> length is wider than the screen width.
>
> Consider using auto-expanded mode for backslash commands like \df+.
> * Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but historically
> it has, so we need a way to prevent it
>
< * Prevent libpq's PQfnumber() from lowercasing the column name
<
< One idea is to lowercase all identifiers except those that are
< surrounded by quotes.
<
<
< * Add code to detect an SMP machine and handle spinlocks accordingly
< from distributed.net, http://www1.distributed.net/source,
< in client/common/cpucheck.cpp
<
< On SMP machines, it is possible that locks might be released shortly,
< while on non-SMP machines, the backend should sleep so the process
< holding the lock can complete and release it.
< o %Add dumping of comments on composite type columns
< o %Add dumping of comments on index columns
< o Stop dumping CASCADE on DROP TYPE commands in clean mode
> o %Add dumping of comments on index columns and composite type columns
604a603
> o Stop dumping CASCADE on DROP TYPE commands in clean mode
< * Prevent libpq's PQfnumber() from lowercasing the column name?
> * Prevent libpq's PQfnumber() from lowercasing the column name
>
> One idea is to lowercase all identifiers except those that are
> surrounded by quotes.
>
> o Allow selection of individual object(s) of all types, not just
> tables
> o In a selective dump, allow dumping of an object and all its
> dependencies
< * Consider compressing indexes by storing key prefix values shared by
> * Consider compressing indexes by storing key values duplicated in
735a736,737
>
> This is difficult because it requires datatype-specific knowledge.
> * Allow protocol-level BIND parameter values to be logged
> * Allow protocol-level EXECUTE that is actually a fetch to appear
> in the logs as a fetch rather than another execute
>
> o Display IN, INOUT, and OUT parameters in \df+
>
> It probably requires psql to output newlines in the proper
> column, which is already on the TODO list.
< This would be beneficial when there are few distinct values.
> This would be beneficial when there are few distinct values. This is
> already used by GROUP BY.
946d946
< * Allow DISTINCT to use hashing like GROUP BY
<
390d388
<
453c451
< removed or have its heap and index files truncated. One
> be removed or have its heap and index files truncated. One
< * Use a phantom command counter for nested subtransactions to reduce
< per-tuple overhead
< cmin/cmax pair and is stored in local memory.
> cmin/cmax pair and is stored in local memory. Another idea is to
> store both cmin and cmax only in local memory.
< have its heap and index files truncated. One issue is
< that no other backend should be able to add to the table
< at the same time, which is something that is currently
< allowed.
> removed or have its heap and index files truncated. One
> issue is that no other backend should be able to add to
> the table at the same time, which is something that is
> currently allowed.
> o Allow COPY on a newly-created table to skip WAL logging
450a452,456
> On crash recovery, the table involved in the COPY would
> have its heap and index files truncated. One issue is
> that no other backend should be able to add to the table
> at the same time, which is something that is currently
> allowed.
> * Use UTF8 encoding for NLS messages so all server encodings can
> read them properly
< o %Add support for Unicode
<
< To fix this, the data needs to be converted to/from UTF16/UTF8
< so the Win32 wcscoll() can be used, and perhaps other functions
< like towupper(). However, UTF8 already works with normal
< locales but provides no ordering or character set classes.
< could only see committed rows from another transaction. However,
> could only see rows from another completed transaction. However,
981c981
< proper visibility of the row, for example, for cursors.
> proper visibility of the row's cmin, for example, for cursors.
* Merge xmin/xmax/cmin/cmax back into three header fields
Before subtransactions, there used to be only three fields needed to
store these four values. This was possible because only the current
transaction looks at the cmin/cmax values. If the current transaction
created and expired the row, the fields stored were xmin (same as
xmax), cmin, cmax, and if the transaction was expiring a row from
another transaction, the fields stored were xmin (cmin was not
needed), xmax, and cmax. Such a system worked because a transaction
could only see committed rows from another transaction. However,
subtransactions can see rows from outer transactions, and once the
subtransaction completes, the outer transaction continues, requiring
the storage of all four fields. With subtransactions, an outer
transaction can create a row, a subtransaction expire it, and when the
subtransaction completes, the outer transaction still has to have
proper visibility of the row, for example, for cursors.
One possible solution is to create a phantom cid which represents a
cmin/cmax pair and is stored in local memory.
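A minimal SQL scenario showing why all four values can be needed at once
(table name hypothetical):

    BEGIN;
    INSERT INTO t VALUES (1);              -- outer xact: xmin/cmin set
    DECLARE c CURSOR FOR SELECT * FROM t;
    SAVEPOINT s;
    DELETE FROM t;                         -- subxact: xmax/cmax set
    RELEASE SAVEPOINT s;
    FETCH ALL FROM c;                      -- cursor opened before the DELETE
                                           -- must still see the row, which
                                           -- needs both cmin and cmax
    COMMIT;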
< * Maintain a map of recently-expired rows
<
< This allows vacuum to target specific pages for possible free space
< without requiring a sequential scan.
<
Update entry:
> One complexity is that index entries still have to be vacuumed, and
> doing this without an index scan (by using the heap values to find the
> index entry) might be slow and unreliable, especially for user-defined
> index functions.
>
> Another issue is whether underlying table changes should be reflected
> in the view, e.g. should SELECT * show additional columns if they
> are added after the view is created.
> o Issue a warning if a change-on-restart-only postgresql.conf value
> is modified and the server config files are reloaded
> o Mark change-on-restart-only values in postgresql.conf
205a209
> o Fix SELECT '0.01 years'::interval, '0.01 months'::interval
>
> Currently, while \e saves a single query as one entry, interactive
> queries are saved one line at a time. Ideally all queries
> would be saved like \e does.
>
> o Allow multi-line column values to align in the proper columns
>
> If the second output column value is 'a\nb', the 'b' should appear
> in the second display column, rather than the first column as it
> does now.
< in PL/PgSQL is to use EXECUTE.
> in PL/PgSQL is to use EXECUTE. One complexity is that a function
> might itself drop and recreate dependent tables, causing it to
> invalidate its own query plan.
< inheritance, and allow it to work for UPDATE and DELETE queries
> inheritance, allow it to work for UPDATE and DELETE queries, and allow
> it to be used for all queries with little performance impact
< * Allow constraint_elimination to be automatically performed
<
< This requires additional code to reduce the performance loss caused by
< constraint elimination.
< * -Allow limits on per-db/role connections
43d41
< * -Prevent dropping user that still owns objects, or auto-drop the objects
49d46
< * -Add the client IP address and port to pg_stat_activity
< * -Add session start time and last statement time to pg_stat_activity
< * -Add a function that returns the start time of the postmaster
230d224
< o -Allow MIN()/MAX() on arrays
< o -Modify array literal representation to handle array index lower bound
< of other than one
253d244
< * -Add function to return compressed length of TOAST data values
< * -Prevent to_char() on interval from returning meaningless values
<
< For example, to_char('1 month', 'mon') is meaningless. Basically,
< most date-related parameters to to_char() are meaningless for
< intervals because interval is not anchored to a date.
<
< * -Have views on temporary tables exist in the temporary namespace
< * -Allow temporary views on non-temporary tables
329d311
< * -Add BETWEEN SYMMETRIC/ASYMMETRIC
< * -Add E'' escape string marker so eventually ordinary strings can treat
< backslashes literally, for portability
<
< * -Allow additional tables to be specified in DELETE for joins
<
< UPDATE already allows this (UPDATE...FROM) but we need similar
< functionality in DELETE. It's been agreed that the keyword should
< be USING, to avoid anything as confusing as DELETE FROM a FROM b.
<
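For reference, the completed item allows a DELETE of this shape (table
and column names invented here), analogous to UPDATE ... FROM:

    DELETE FROM orders
    USING customers
    WHERE orders.customer_id = customers.id
      AND customers.status = 'closed';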
341d313
< * -Allow REINDEX to rebuild all database indexes
< * -Add an option to automatically use savepoints for each statement in a
< multi-statement transaction.
<
< When enabled, this would allow errors in multi-statement transactions
< to be automatically ignored.
<
426d391
< o -Allow FOR UPDATE queries to do NOWAIT locks
473d437
< o -Allow COPY to understand \x as a hex byte
< o -Allow COPY to optionally include column headings in the first line
< o -Allow COPY FROM ... CSV to interpret newlines and carriage
< returns in data
525d485
< o -Have SHOW ALL show descriptions for server-side variables
< o -Allow PL/PgSQL's RAISE function to take expressions
<
< Currently only constants are supported.
<
< o -Change PL/PgSQL to use palloc() instead of malloc()
545d499
< o -Allow PL/pgSQL EXECUTE query_var INTO record_var;
550d503
< o -Pass arrays natively instead of as text between plperl and postgres
598d550
< o -Add dumping and restoring of LOB comments
638d589
< * -Implement shared row locks and use them in RI triggers
642d592
< * -Allow triggers to be disabled
< * -Add two-phase commit
<
<
< * -Prevent inherited tables from expanding temporary subtables of other
< sessions
< * -Use indexes for MIN() and MAX()
<
< MIN/MAX queries can already be rewritten as SELECT col FROM tab ORDER
< BY col {DESC} LIMIT 1. Completing this item involves doing this
< transformation automatically.
<
< * -Use index to restrict rows returned by multi-key index when used with
< non-consecutive keys to reduce heap accesses
<
< For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and
< col3 = 9, spin through the index checking for col1 and col3 matches,
< rather than just col1; also called skip-scanning.
<
< * -Fetch heap pages matching index entries in sequential order
<
< Rather than randomly accessing heap pages based on index entries, mark
< heap pages needing access in a bitmap and do the lookups in sequential
< order. Another method would be to sort heap ctids matching the index
< before accessing the heap rows.
<
< * -Allow non-bitmap indexes to be combined by creating bitmaps in memory
<
< This feature allows separate indexes to be ANDed or ORed together. This
< is particularly useful for data warehousing applications that need to
< query the database in many permutations. This feature scans an index
< and creates an in-memory bitmap, and allows that bitmap to be combined
< with other bitmaps created in a similar way. The bitmap can either index
< all TIDs, or be lossy, meaning it records just page numbers and each
< page tuple has to be checked for validity in a separate pass.
<
< * -Fix incorrect rtree results due to wrong assumptions about "over"
< operator semantics
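Two of the completed items above lend themselves to short illustrations
(table and column names hypothetical). The MIN()/MAX() item plans the
aggregate as the ORDER BY ... LIMIT form given in its description, and
the skip-scan item uses a multi-column index even when an intermediate
key column is unconstrained:

    -- MIN()/MAX() via an index:
    SELECT MIN(col) FROM tab;
    -- planned as if written:
    SELECT col FROM tab ORDER BY col LIMIT 1;

    -- skip-scan candidate, given an index on (col1, col2, col3):
    SELECT * FROM tab WHERE col1 = 5 AND col3 = 9;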
782d694
< o -Add concurrency to GIST
813d724
< * -Allow multiple blocks to be written to WAL with one write()
< * -Consider use of open/fcntl(O_DIRECT) to minimize OS caching,
< for WAL writes
<
< O_DIRECT doesn't have the same media write guarantees as fsync, so it
< is in addition to the fsync method, not in place of it.
<
< * -Cache last known per-tuple offsets to speed long tuple access
< * -Allow the size of the buffer cache used by temporary objects to be
< specified as a GUC variable
<
< Larger local buffer cache sizes requires more efficient handling of
< local cache lookups.
<
< * -Improve the background writer
<
< Allow the background writer to more efficiently write dirty buffers
< from the end of the LRU cache and use a clock sweep algorithm to
< write other dirty buffers to reduce checkpoint I/O
<
897d788
< * -Add a warning when the free space map is too small
917d807
< o -Move into the backend code
< * -Make locking of shared data structures more fine-grained
<
< This requires that more locks be acquired but this would reduce lock
< contention, improving concurrency.
<
< * -Improve SMP performance on i386 machines
<
< i386-based SMP machines can generate excessive context switching
< caused by lock failure in high concurrency situations. This may be
< caused by CPU cache line invalidation inefficiencies.
<
979d857
< o -Add ability to turn off full page writes
< * -Eliminate WAL logging for CREATE TABLE AS when not doing WAL archiving
< * -Change WAL to use 32-bit CRC, for performance reasons
<
< * -Use CHECK constraints to influence optimizer decisions
<
< CHECK constraints contain information about the distribution of values
< within the table. This is also useful for implementing subtables where
< a table's content is distributed across several subtables.
<
1045d913
< * -ANALYZE should record a pg_statistic entry for an all-NULL column
1099d966
< * -Remove kerberos4 from source tree
1103d969
< * -Make src/port/snprintf.c thread-safe
1118d983
< * -Add C code on Unix to copy directories for use in creating new databases
1133d997
< o -Improve dlerror() reporting string
< Currently SIGTERM of a backend can lead to lock table corruption.
> Lock table corruption following SIGTERM of an individual backend
> has been reported in 8.0. A possible cause was fixed in 8.1, but
> it is unknown whether other problems exist. This item mostly
> requires additional testing rather than writing any new code.
< o Allow postgresql.conf values to be set so they can not be changed
< by the user
166c167,171
< * %Remove Money type, add money formatting for decimal type
> * Improve the MONEY data type
>
> Change the MONEY data type to use DECIMAL internally, with special
> locale-aware output formatting.
>
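Locale-aware formatting of a NUMERIC value is already possible with
to_char(), which is roughly what the reworked MONEY type would build in:

    SELECT to_char(1234567.89, 'L9G999G999D99');
    -- e.g. '$ 1,234,567.89' under an en_US lc_monetary setting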
225c230
< o %Allow MIN()/MAX() on arrays
> o -Allow MIN()/MAX() on arrays
228c233
< o Modify array literal representation to handle array index lower bound
> o -Modify array literal representation to handle array index lower bound
235a241
> o Auto-delete large objects when referencing row is deleted
< Currently large objects entries do not have owners. Permissions can
< only be set at the pg_largeobject table level.
> /contrib/lo offers this functionality.
240d244
< o Auto-delete large objects when referencing row is deleted
< * %Have views on temporary tables exist in the temporary namespace
< * Allow temporary views on non-temporary tables
< * %Allow RULE recompilation
> * -Have views on temporary tables exist in the temporary namespace
> * -Allow temporary views on non-temporary tables
> * Allow VIEW/RULE recompilation when the underlying tables change
340a345,347
>
> This is like DELETE CASCADE, but truncates.
>
381c388
< * Make row-wise comparisons work per SQL spec
> * %Make row-wise comparisons work per SQL spec
< o Currently the system uses the operating system COPY command to
< create a new database. Add ON COMMIT capability to CREATE TABLE AS
< SELECT
> o Add ON COMMIT capability to CREATE TABLE AS ... SELECT
427c432
< o %Add ALTER DOMAIN TYPE
> o Add ALTER DOMAIN to modify the underlying data type
< o %Disallow dropping of an inherited constraint
< o -Allow objects to be moved to different schemas
> o Add missing object types for ALTER ... SET SCHEMA
< o %Prevent child tables from altering constraints like CHECK that were
< inherited from the parent table
> o %Disallow dropping of an inherited constraint
> o %Prevent child tables from altering or dropping constraints
> like CHECK that were inherited from the parent table
< o Handle references to temporary tables that are created, destroyed,
< then recreated during a session, and EXECUTE is not used
<
< This requires the cached PL/PgSQL byte code to be invalidated when
< an object referenced in the function is changed.
<
< o Add table function support to pltcl, plperl, plpython?
< o Allow PL/pgSQL to name columns by ordinal position, e.g. rec.(3)
> o Add table function support to pltcl, plpython
549a548
> o Allow function argument names to be queries from PL/PgSQL
< o Pass arrays natively instead of as text between plperl and postgres
< o Add support for polymorphic arguments and return types to plperl
> o -Pass arrays natively instead of as text between plperl and postgres
> o Add support for polymorphic arguments and return types to
> languages other than PL/PgSQL
> o Add support for OUT and INOUT parameters to languages other
> than PL/PgSQL
< * Allow libpq to access SQLSTATE so pg_ctl can test for connection failure
<
< This would be used for checking if the server is up.
<
565c563
< * Have initdb set DateStyle based on locale?
> * Have initdb set the input DateStyle (MDY or DMY) based on locale?
567d564
< * Add a schema option to createlang
< o Add pg_dumpall custom format dumps.
<
< This is probably best done by combining pg_dump and pg_dumpall
< into a single binary.
<
> o Add pg_dumpall custom format dumps?
612c605,606
< o Remove unnecessary abstractions in pg_dump source code
> o Remove unnecessary function pointer abstractions in pg_dump source
> code
< * %Remove CREATE CONSTRAINT TRIGGER
<
< This was used in older releases to dump referential integrity
< constraints.
<
682a672,675
> This is particularly important for references to temporary tables
> in PL/PgSQL because PL/PgSQL caches query plans. The only workaround
> in PL/PgSQL is to use EXECUTE.
>
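A small PL/pgSQL sketch of the EXECUTE workaround, assuming a release new
enough for dollar quoting and EXECUTE ... INTO (function and table names
are made up):

    CREATE OR REPLACE FUNCTION temp_count() RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        -- EXECUTE plans the statement at run time, so a temporary table
        -- dropped and recreated since the last call does not leave a
        -- stale cached plan behind
        EXECUTE 'SELECT count(*) FROM my_temp_table' INTO n;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;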
748c741
< * Fetch heap pages matching index entries in sequential order
> * -Fetch heap pages matching index entries in sequential order
797c790
< Currently no only one hash bucket can be stored on a page. Ideally
> Currently only one hash bucket can be stored on a page. Ideally
806a800,802
> o Add WAL logging for crash recovery
> o Allow multi-column hash indexes
>
812a809,812
>
> Ideally this requires a separate test program that can be run
> at initdb time or optionally later.
>
867c867
< * Improve the background writer
> * -Improve the background writer
< For large table adjustments during vacuum, it is faster to reindex
< rather than update the index.
> For large table adjustments during VACUUM FULL, it is faster to
> reindex rather than update the index.
< * Reduce lock time by moving tuples with read lock, then write
< lock and truncate table
> * Reduce lock time during VACUUM FULL by moving tuples with read lock,
> then write lock and truncate table
919c919,920
< o %Suggest VACUUM FULL if a table is nearly empty
> o %Issue log message to suggest VACUUM FULL if a table is nearly
> empty?
995d995
< * Add WAL index reliability improvement to non-btree indexes
1045c1045
< * ANALYZE should record a pg_statistic entry for an all-NULL column
> * -ANALYZE should record a pg_statistic entry for an all-NULL column
1047a1048,1051
> * Allow constraint_elimination to be automatically performed
>
> This requires additional code to reduce the performance loss caused by
> constraint elimination.
1090c1094
< * Remove memory/file descriptor freeing before ereport(ERROR)
> * %Remove memory/file descriptor freeing before ereport(ERROR)
< * Promote debug_query_string into a server-side function current_query()
< * Allow the identifier length to be increased via a configure option
> * %Promote debug_query_string into a server-side function current_query()
> * %Allow the identifier length to be increased via a configure option
1113d1116
< * Fix cross-compiling of time zone database via 'zic'
1130c1133
< o Improve dlerror() reporting string
> o -Improve dlerror() reporting string
1132c1135
< o Add support for Unicode
> o %Add support for Unicode
< Currently, if a variable is commented out, it keeps the
< previous uncommented value until a server restarted.
> Currently, if a variable is commented out, it keeps the
> previous uncommented value until the server is restarted.
> Logically, a reload should set the same values as a
> server restart.
< * Allow triggers to be disabled [trigger]
> * -Allow triggers to be disabled [trigger]
> * Allow triggers to be disabled in only the current session.
< Currently the only way to disable triggers is to modify the system
< tables.
> This is currently possible by starting a multi-statement transaction,
> modifying the system tables, performing the desired SQL, restoring the
> system tables, and committing the transaction. ALTER TABLE ...
> TRIGGER requires a table lock so it is not ideal for this usage.
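The system-table recipe alluded to is roughly the following, shown only
for illustration; it assumes pg_class.reltriggers (the per-table trigger
count in releases of this era), and direct catalog updates are risky:

    BEGIN;
    UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';
    -- ... run the desired SQL with triggers disabled ...
    UPDATE pg_class
       SET reltriggers = (SELECT count(*) FROM pg_trigger
                          WHERE tgrelid = 'mytable'::regclass)
     WHERE relname = 'mytable';
    COMMIT;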
< inheritance
< * Allow enable_constraint_exclusion to work for UPDATE and DELETE queries
> inheritance, and allow it to work for UPDATE and DELETE queries
< o Allow objects to be moved to different schemas
> o -Allow objects to be moved to different schemas
Fix word wrap:
< * Allow GRANT/REVOKE permissions to be applied to all schema objects with one
< command
> o Allow GRANT/REVOKE permissions to be applied to all schema objects
> with one command
< This would require a new global table that is dumped to flat file for
< use by the postmaster. We do a similar thing for pg_shadow currently.
> This would add a function to load the SQL table from
> pg_hba.conf, and one to write its contents to the flat file.
> The table should have a line number that is a float so rows
> can be inserted between existing rows, e.g. row 2.5 goes
> between row 2 and row 3.
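A hypothetical sketch of such a table; the name and columns are invented
here purely to illustrate the float line-number idea:

    CREATE TABLE pg_hba (
        line_number float4,  -- float so row 2.5 slots between rows 2 and 3
        type        text,    -- 'local', 'host', 'hostssl', ...
        database    text,
        "user"      text,
        address     text,
        method      text
    );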
< o Allow postgresql.conf file values to be changed via an SQL API
> o Allow postgresql.conf file values to be changed via an SQL
> API, perhaps using SET GLOBAL
<
> * Allow EXPLAIN to identify tables that were skipped because of
> enable_constraint_exclusion
> * Allow EXPLAIN output to be more easily processed by scripts
760a763
> * Allow enable_constraint_exclusion to work for UPDATE and DELETE queries
> * Add TRUNCATE permission
>
> Currently only the owner can TRUNCATE a table because triggers are not
> called, and the table is locked in exclusive mode.
>
< * Consider use of open/fcntl(O_DIRECT) to minimize OS caching,
< especially for WAL writes
> * -Consider use of open/fcntl(O_DIRECT) to minimize OS caching,
> for WAL writes
< computations should adjust based on the time zone rules, e.g.
< adding 24 hours to a timestamp would yield a different result from
< adding one day.
<
> computations should adjust based on the time zone rules.
< writer.
> writer. It might cause problems for applying WAL on recovery
> into a partially-written page, but later the full page will be
> replaced from WAL.
>
> o -Add ability to turn off full page writes
> o When off, write CRC to WAL and check file system blocks
> on recovery
> o Write full pages during file system write and not when
> the page is modified in the buffer cache
>
> This allows most full page writes to happen in the background
> writer.