TODO list for PostgreSQL
========================
Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
Last updated: Thu Dec 2 14:37:45 EST 2004
The most recent version of this document can be viewed at the PostgreSQL web
site, http://www.PostgreSQL.org.
A hyphen, "-", marks changes that will appear in the upcoming 8.1 release.
Bracketed items, "[]", have more detail.
This list contains all known PostgreSQL bugs and feature requests. If
you would like to work on an item, please read the Developer's FAQ
first.
Administration
==============
* Remove behavior of postmaster -o after making postmaster/postgres
flags unique
* Allow limits on per-db/user connections
* Add group object ownership, so groups can rename/drop/grant on objects,
so we can implement roles
* Allow server log information to be output as INSERT statements
This would allow server log information to be easily loaded into
a database for analysis.
* Prevent default re-use of sysids for dropped users and groups
Currently, if a user is removed while he still owns objects, a new
user might be given the old user's id and thereby inherit the
previous user's objects.
* Prevent dropping user that still owns objects, or auto-drop the objects
* Allow pooled connections to list all prepared queries
This would allow an application inheriting a pooled connection to know
the queries prepared in the current session.
* Allow major upgrades without dump/reload, perhaps using pg_upgrade
* Have SHOW ALL and pg_settings show descriptions for server-side variables
* Allow GRANT/REVOKE permissions to be applied to all schema objects with one
command
* Remove unreferenced table files created by transactions that were
in-progress when the server terminated abruptly
* Allow reporting of which objects are in which tablespaces
This item is difficult because a tablespace can contain objects from
multiple databases. There is a server-side function that returns the
databases which use a specific tablespace, so this requires a tool
that will call that function and connect to each database to find the
objects in each database for that tablespace.
* Allow a database in tablespace t1 with tables created in tablespace t2
to be used as a template for a new database created with default
tablespace t2
All objects in the default database tablespace must have default tablespace
specifications. This is because new databases are created by copying
directories. If you mix default tablespace tables and tablespace-specified
tables in the same directory, creating a new database from such a mixed
directory would create a new database with tables that had incorrect
explicit tablespaces. To fix this would require modifying pg_class in the
newly copied database, which we don't currently do.
* Add a GUC variable to control the tablespace for temporary objects and
sort files
It could start with a random tablespace from a supplied list and cycle
through the list.
* Add "include file" functionality in postgresql.conf
* Add session start time and last statement time to pg_stat_activity
* Allow server logs to be remotely read using SQL commands
* Allow server configuration parameters to be remotely modified
* Allow administrators to safely terminate individual sessions
Right now, SIGTERM will terminate a session, but it is treated as
though the postmaster has panicked and shared memory might not be
cleaned up properly. A new signal is needed for safe termination.
* Un-comment all variables in postgresql.conf
By not showing commented-out variables, we discourage people from
thinking that re-commenting a variable returns it to its default.
This has to address environment variables that are then overridden
by config file values. Another option is to allow commented values
to return to their default values.
* Allow point-in-time recovery to archive partially filled write-ahead
logs
Currently only full WAL files are archived. This means that the most
recent transactions aren't available for recovery in case of a disk
failure.
* Create dump tool for write-ahead logs for use in determining
transaction id for point-in-time recovery
* Set proper permissions on non-system schemas during db creation
Currently all schemas are owned by the super-user because they are
copied from the template1 database.
* Add a function that returns the 'uptime' of the postmaster
* Allow a warm standby system to also allow read-only queries
This is useful for checking PITR recovery.
* Improve replication solutions
o Automatic failover
The proper solution to this will probably be the use of a master/slave
replication solution like Slony and a connection pooling tool like
pgpool.
o Load balancing
You can use any of the master/slave replication solutions to maintain
a standby server for data warehousing. To allow read/write queries to
multiple servers, you need multi-master replication like pgcluster.
o Allow replication over unreliable or non-persistent links
Data Types
==========
* Remove Money type, add money formatting for decimal type
* Change NUMERIC to enforce the maximum precision, and increase it
* Add function to return compressed length of TOAST data values
* Allow INET subnet tests using non-constants to be indexed
* Add transaction_timestamp(), statement_timestamp(), clock_timestamp()
functionality
Currently, CURRENT_TIMESTAMP returns the start time of the current
transaction, and gettimeofday() returns the wallclock time. This will
make time reporting more consistent and will allow reporting of
the statement start time.
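For example, today within a single transaction (timeofday() is an
existing function that returns the wall-clock time as text):
    BEGIN;
    SELECT CURRENT_TIMESTAMP;  -- transaction start time; same each call
    SELECT timeofday();        -- wall-clock time; changes each call
    COMMIT;
The proposed functions would expose each of these behaviors explicitly.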
* Have sequence dependency track use of DEFAULT sequences,
seqname.nextval (?)
* Disallow changing default expression of a SERIAL column (?)
* Allow infinite dates just like infinite timestamps
* Have initdb set DateStyle based on locale?
* Add pg_get_acldef(), pg_get_typedefault(), and pg_get_attrdef()
* Allow to_char() to print localized month names
* Allow functions to have a schema search path specified at creation time
* Allow substring/replace() to get/set bit values
* Add a GUC variable to allow output of interval values in ISO8601 format
* Fix data types where equality comparison isn't intuitive, e.g. box
* Merge hardwired timezone names with the TZ database; allow either kind
everywhere a TZ name is currently taken
* Allow customization of the known set of TZ names (generalize the
present australian_timezones hack)
* Allow TIMESTAMP WITH TIME ZONE to store the original timezone
information, either zone name or offset from UTC
If the TIMESTAMP value is stored with a time zone name, interval
computations should adjust based on the time zone rules, e.g. adding
24 hours to a timestamp would yield a different result from adding one
day.
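A hedged illustration of the desired semantics; the zone name and date
are illustrative (Europe/Paris leaves DST on 2004-10-31):
    SELECT TIMESTAMP WITH TIME ZONE '2004-10-30 12:00 Europe/Paris'
           + INTERVAL '24 hours';  -- 2004-10-31 11:00 local time
    SELECT TIMESTAMP WITH TIME ZONE '2004-10-30 12:00 Europe/Paris'
           + INTERVAL '1 day';     -- 2004-10-31 12:00 local time
With the original zone retained, the two additions would give the
different results shown in the comments.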
* Prevent INET cast to CIDR if the unmasked bits are not zero, or
zero the bits
* Prevent INET cast to CIDR from dropping netmask, SELECT '1.1.1.1'::inet::cidr
* ARRAYS
o Allow NULLs in arrays
o Allow MIN()/MAX() on arrays
o Delay resolution of array expression's data type so assignment
coercion can be performed on empty array expressions
o Modify array literal representation to handle array index lower bound
of other than one
* BINARY DATA
o Improve vacuum of large objects, like /contrib/vacuumlo (?)
o Add security checking for large objects
Currently large object entries do not have owners. Permissions can
only be set at the pg_largeobject table level.
o Auto-delete large objects when referencing row is deleted
o Allow read/write into TOAST values like large objects
This requires the TOAST column to be stored EXTERNAL.
Multi-Language Support
======================
* Add NCHAR (as distinguished from ordinary varchar)
* Allow locale to be set at database creation
Currently locale can only be set during initdb.
* Allow encoding on a per-column basis
Right now only one encoding is allowed per database.
* Optimize locale to have minimal performance impact when not used
* Support multiple simultaneous character sets, per SQL92
* Improve Unicode combined character handling (?)
* Add octet_length_server() and octet_length_client()
* Make octet_length_client() the same as octet_length()?
Views / Rules
=============
* Automatically create rules on views so they are updateable, per SQL99
We can only auto-create rules for simple views. For more complex
cases users will still have to write rules.
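A minimal sketch of the rules that would be auto-created for a simple
view (v, emp, and the columns are hypothetical):
    CREATE VIEW v AS SELECT id, name FROM emp;
    CREATE RULE v_upd AS ON UPDATE TO v DO INSTEAD
        UPDATE emp SET name = NEW.name WHERE id = OLD.id;
Similar ON INSERT and ON DELETE rules would complete the updateable
view.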
* Add the functionality for WITH CHECK OPTION clause of CREATE VIEW
* Allow NOTIFY in rules involving conditionals
* Have views on temporary tables exist in the temporary namespace
* Allow temporary views on non-temporary tables
* Allow RULE recompilation
Indexes
=======
* Allow inherited tables to inherit indexes, UNIQUE constraints, and
primary/foreign keys
* UNIQUE INDEX on base column not honored on INSERTs/UPDATEs from
inherited table: INSERT INTO inherit_table (unique_index_col) VALUES
(dup) should fail
The main difficulty with this item is the problem of creating an index
that can span more than one table.
* Add UNIQUE capability to non-btree indexes
* Add rtree index support for line, lseg, path, point
* Use indexes for MIN() and MAX()
MIN/MAX queries can already be rewritten as SELECT col FROM tab ORDER
BY col {DESC} LIMIT 1. Completing this item involves making this
transformation automatically.
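For example, with an index on tab(col) (names are illustrative):
    SELECT MAX(col) FROM tab;                       -- scans the table
    SELECT col FROM tab ORDER BY col DESC LIMIT 1;  -- can use the index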
* Use index to restrict rows returned by multi-key index when used with
non-consecutive keys to reduce heap accesses
For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and
col3 = 9, spin through the index checking for col1 and col3 matches,
rather than just col1; also called skip-scanning.
* Prevent index uniqueness checks when UPDATE does not modify the column
Uniqueness (index) checks are done when updating a column even if the
column is not modified by the UPDATE.
* Fetch heap pages matching index entries in sequential order
Rather than randomly accessing heap pages based on index entries, mark
heap pages needing access in a bitmap and do the lookups in sequential
order. Another method would be to sort heap ctids matching the index
before accessing the heap rows.
* Allow non-bitmap indexes to be combined by creating bitmaps in memory
Bitmap indexes index single columns that can be combined with other bitmap
indexes to dynamically create a composite index to match a specific query.
Each index is a bitmap, and the bitmaps are bitwise AND'ed or OR'ed to be
combined. They can index by tid or can be lossy requiring a scan of the
heap page to find matching rows, or perhaps use a mixed solution where
tids are recorded for pages with only a few matches and per-page bitmaps
are used for more dense pages. Another idea is to use a 32-bit bitmap
for every page and set a bit based on the item number mod(32).
* Allow the creation of on-disk bitmap indexes which can be quickly
combined with other bitmap indexes
Such indexes could be more compact if there are only a few distinct values.
Such indexes can also be compressed. Keeping such indexes updated can be
costly.
* Allow use of indexes to search for NULLs
One solution is to create a partial index on an IS NULL expression.
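A hedged sketch of that workaround (tab and col are illustrative):
    CREATE INDEX tab_col_null ON tab (col) WHERE col IS NULL;
    SELECT * FROM tab WHERE col IS NULL;  -- matches the index predicate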
* Add concurrency to GIST
* Pack hash index buckets onto disk pages more efficiently
Currently only one hash bucket can be stored on a page. Ideally
several hash buckets could be stored on a single page and greater
granularity used for the hash algorithm.
* Allow accurate statistics to be collected on indexes with more than
one column or expression indexes, perhaps using per-index statistics
* Add fillfactor to control reserved free space during index creation
Commands
========
* Add BETWEEN ASYMMETRIC/SYMMETRIC
* Change LIMIT/OFFSET to use int8
* Allow CREATE TABLE AS to determine column lengths for complex
expressions like SELECT col1 || col2
* Allow UPDATE to handle complex aggregates [update] (?)
* Allow backslash handling in quoted strings to be disabled for portability
The use of C-style backslashes (e.g. \n, \r) in quoted strings is not
SQL-spec compliant, so allow such handling to be disabled.
* Allow an alias to be provided for the target table in UPDATE/DELETE
This is not SQL-spec but many DBMSs allow it.
* Allow additional tables to be specified in DELETE for joins
UPDATE already allows this (UPDATE...FROM) but we need similar
functionality in DELETE. It's been agreed that the keyword should
be USING, to avoid anything as confusing as DELETE FROM a FROM b.
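With the agreed keyword, a join delete would look like:
    DELETE FROM a USING b WHERE a.id = b.id;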
* Add CORRESPONDING BY to UNION/INTERSECT/EXCEPT
* Allow REINDEX to rebuild all database indexes, remove /contrib/reindex
* Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY
* Add a schema option to createlang
* Allow UPDATE tab SET ROW (col, ...) = (...) for updating multiple columns
* Allow SET CONSTRAINTS to be qualified by schema/table name
* Allow TRUNCATE ... CASCADE/RESTRICT
* Allow PREPARE of cursors
* Allow PREPARE to automatically determine parameter types based on the SQL
statement
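A hedged illustration (emp is a hypothetical table):
    -- today the parameter type must be declared:
    PREPARE getemp (integer) AS SELECT * FROM emp WHERE id = $1;
    -- proposed: infer the type from the statement:
    PREPARE getemp AS SELECT * FROM emp WHERE id = $1;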
* Allow finer control over the caching of prepared query plans
Currently, queries prepared via the libpq API are planned on first
execute using the supplied parameters --- allow SQL PREPARE to do the
same. Also, allow control over replanning prepared queries either
manually or automatically when statistics for execute parameters
differ dramatically from those used during planning.
* Allow LISTEN/NOTIFY to store info in memory rather than tables?
Currently LISTEN/NOTIFY information is stored in pg_listener. Storing
such information in memory would improve performance.
* Dump large object comments in custom dump format
* Add optional textual message to NOTIFY
This would allow an informational message to be added to the notify
message, perhaps indicating the row modified or other custom
information.
* Use more reliable method for CREATE DATABASE to get a consistent copy
of db?
Currently the system uses the operating system COPY command to create
a new database.
* Add C code to copy directories for use in creating new databases
* Have pg_ctl look at PGHOST in case it is a socket directory?
* Allow column-level GRANT/REVOKE privileges
* Add a GUC variable to warn about non-standard SQL usage in queries
* Add MERGE command that does UPDATE/DELETE, or on failure, INSERT (rules,
triggers?)
* Add ON COMMIT capability to CREATE TABLE AS SELECT
* Add NOVICE output level for helpful messages like automatic sequence/index
creation
* Add COMMENT ON for all cluster global objects (users, groups, databases
and tablespaces)
* Add an option to automatically use savepoints for each statement in a
multi-statement transaction.
When enabled, this would allow errors in multi-statement transactions
to be automatically ignored.
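The option would automate what can already be done manually with
savepoints, e.g. (tab is an illustrative table):
    BEGIN;
    SAVEPOINT s1;
    INSERT INTO tab VALUES (1);  -- on error, ROLLBACK TO s1 and continue
    RELEASE SAVEPOINT s1;
    COMMIT;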
* Make row-wise comparisons work per SQL spec
* Add RESET CONNECTION command to reset all session state
This would include resetting of all variables (RESET ALL), dropping of
all temporary tables, removal of any NOTIFYs, etc. This could be used
for connection pooling. We could also change RESET ALL to have this
functionality.
* Allow FOR UPDATE queries to do NOWAIT locks
* ALTER
o Have ALTER TABLE RENAME rename SERIAL sequence names
o Add ALTER DOMAIN TYPE
o Allow ALTER TABLE ... ALTER CONSTRAINT ... RENAME
o Allow ALTER TABLE to change constraint deferrability and actions
o Disallow dropping of an inherited constraint
o Allow objects to be moved to different schemas
o Allow ALTER TABLESPACE to move to different directories
o Allow databases and schemas to be moved to different tablespaces
One complexity is whether moving a schema should move all existing
schema objects or just define the location for future object creation.
o Allow moving system tables to other tablespaces, where possible
Currently non-global system tables must be in the default database
tablespace. Global system tables can never be moved.
* CLUSTER
o Automatically maintain clustering on a table
This might require some background daemon to maintain clustering
during periods of low usage. It might also require tables to be only
partially filled for easier reorganization. Another idea would
be to create a merged heap/index data file so an index lookup would
automatically access the heap data too. A third idea would be to
store heap rows in hashed groups, perhaps using a user-supplied
hash function.
o Add default clustering to system tables
To do this, determine the ideal cluster index for each system
table and set the cluster setting during initdb.
* COPY
o Allow COPY to report error lines and continue
This requires the use of a savepoint before each COPY line is
processed, with ROLLBACK on COPY failure.
o Allow COPY to understand \x as a hex byte
o Have COPY return the number of rows loaded/unloaded (?)
o Allow COPY to optionally include column headings in the first line
o Allow COPY FROM ... CSV to interpret newlines and carriage
returns in data
This would require major refactoring of the copy source code.
* CURSOR
o Allow UPDATE/DELETE WHERE CURRENT OF cursor
This requires using the row ctid to map cursor rows back to the
original heap row. This becomes more complicated if WITH HOLD cursors
are to be supported because WITH HOLD cursors have a copy of the row
and no FOR UPDATE lock.
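The requested SQL-standard form (tab and col are illustrative):
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM tab FOR UPDATE;
    FETCH 1 FROM c;
    UPDATE tab SET col = 0 WHERE CURRENT OF c;
    COMMIT;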
o Prevent DROP TABLE from dropping a row referenced by its own open
cursor (?)
o Allow pooled connections to list all open WITH HOLD cursors
Because WITH HOLD cursors exist outside transactions, this allows
them to be listed so they can be closed.
* INSERT
o Allow INSERT/UPDATE of the system-generated oid value for a row
o Allow INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..)
o Allow INSERT/UPDATE ... RETURNING new.col or old.col
This is useful for returning the auto-generated key for an INSERT.
One complication is how to handle rules that run as part of
the insert.
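With the proposed syntax, fetching a generated key might look like
(emp and its serial column id are hypothetical):
    INSERT INTO emp (name) VALUES ('Smith') RETURNING new.id;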
* SHOW/SET
o Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM
ANALYZE, and CLUSTER
o Add SET PATH for schemas (?)
This is basically the same as SET search_path.
o Prevent conflicting SET options from being set
This requires a checking function to be called after the server
configuration file is read.
* SERVER-SIDE LANGUAGES
o Allow PL/PgSQL's RAISE function to take expressions (?)
Currently only constants are supported.
o Change PL/PgSQL to use palloc() instead of malloc()
o Handle references to temporary tables that are created, destroyed,
then recreated during a session, and EXECUTE is not used
This requires the cached PL/PgSQL byte code to be invalidated when
an object referenced in the function is changed.
o Fix PL/pgSQL RENAME to work on variables other than OLD/NEW
o Allow function parameters to be passed by name,
get_employee_salary(emp_id => 12345, tax_year => 2001)
o Add Oracle-style packages
o Add table function support to pltcl, plperl, plpython (?)
o Allow PL/pgSQL to name columns by ordinal position, e.g. rec.(3)
o Allow PL/pgSQL EXECUTE query_var INTO record_var;
o Add capability to create and call PROCEDURES
o Allow PL/pgSQL to handle %TYPE arrays, e.g. tab.col%TYPE[]
Clients
=======
* Add XML output to pg_dump and COPY
We already allow XML to be stored in the database, and XPath queries
can be used on that data using /contrib/xml2. It also supports XSLT
transformations.
* Add a libpq function to support Parse/DescribeStatement capability
* Prevent libpq's PQfnumber() from lowercasing the column name (?)
* Allow libpq to access SQLSTATE so pg_ctl can test for connection failure
This would be used for checking if the server is up.
* Have psql show current values for a sequence
* Move psql backslash database information into the backend, use mnemonic
commands? [psql]
This would allow non-psql clients to pull the same information out of
the database as psql.
* Fix psql's display of schema information (Neil)
* Consistently display privilege information for all objects in psql
* pg_dump
o Have pg_dump use multi-statement transactions for INSERT dumps
o Allow pg_dump to use multiple -t and -n switches
This should be done by allowing a '-t schema.table' syntax.
o Add dumping of comments on composite type columns
o Add dumping of comments on index columns
o Replace crude DELETE FROM method of pg_dumpall for cleaning of
users and groups with separate DROP commands
o Add dumping and restoring of LOB comments
o Stop dumping CASCADE on DROP TYPE commands in clean mode
o Add full object name to the tag field, e.g. for operators we need
'=(integer, integer)' instead of just '='.
o Add pg_dumpall custom format dumps.
This is probably best done by combining pg_dump and pg_dumpall
into a single binary.
o Add CSV output format
* ECPG
o Docs
Document differences between ecpg and the SQL standard and
information about the Informix-compatibility module.
o Solve cardinality > 1 for input descriptors / variables (?)
o Add a semantic check level, e.g. check if a table really exists
o Fix handling of DB attributes that are arrays
o Use backend PREPARE/EXECUTE facility for ecpg where possible
o Implement SQLDA
o Fix nested C comments
o sqlwarn[6] should be 'W' if the PRECISION or SCALE value is specified
o Make SET CONNECTION thread-aware, non-standard?
o Allow multidimensional arrays
Referential Integrity
=====================
* Add MATCH PARTIAL referential integrity
* Add deferred trigger queue file
Right now all deferred trigger information is stored in backend
memory. This could exhaust memory for very large trigger queues.
This item involves dumping large queues into files.
* Implement dirty reads or shared row locks and use them in RI triggers
Adding shared locks requires recording the table/row numbers in a
shared area, and this could potentially be a large amount of data.
One idea is to store the table/row numbers in a separate table and set
a bit on the row indicating that this new table must be checked to
find any shared row locks.
* Enforce referential integrity for system tables
* Change foreign key constraint for array -> element to mean element
in array (?)
* Allow DEFERRABLE UNIQUE constraints (?)
* Allow triggers to be disabled [trigger]
Currently the only way to disable triggers is to modify the system
tables.
* With disabled triggers, allow pg_dump to use ALTER TABLE ADD FOREIGN KEY
If the dump is known to be valid, allow foreign keys to be added
without revalidating the data.
* Allow statement-level triggers to access modified rows
* Support triggers on columns
* Remove CREATE CONSTRAINT TRIGGER
This was used in older releases to dump referential integrity
constraints.
* Allow AFTER triggers on system tables
System tables are modified in many places in the backend without going
through the executor and therefore not causing triggers to fire. To
complete this item, the functions that modify system tables will have
to fire triggers.
Dependency Checking
===================
* Flush cached query plans when the dependent objects change
* Track dependencies in function bodies and recompile/invalidate
Exotic Features
===============
* Add SQL99 WITH clause to SELECT
* Add SQL99 WITH RECURSIVE to SELECT
* Add pre-parsing phase that converts non-ANSI syntax to supported
syntax
This could allow SQL written for other databases to run without
modification.
* Allow plug-in modules to emulate features from other databases
* SQL*Net listener that makes PostgreSQL appear as an Oracle database
to clients
* Allow queries across databases or servers with transaction
semantics
Right now contrib/dblink can be used to issue such queries except it
does not have locking or transaction semantics. Two-phase commit is
needed to enable transaction semantics.
* Add two-phase commit
This will involve adding a way to respond to commit failure by either
taking the server into offline/readonly mode or notifying the
administrator.
PERFORMANCE
===========
Fsync
=====
* Improve commit_delay handling to reduce fsync()
* Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options
* Allow multiple blocks to be written to WAL with one write()
* Add an option to sync() before fsync()'ing checkpoint files
Cache
=====
* Allow free-behind capability for large sequential scans, perhaps using
posix_fadvise()
Posix_fadvise() can control both sequential/random file caching and
free-behind behavior, but it is unclear how the setting affects other
backends that also have the file open, and the feature is not supported
on all operating systems.
* Consider use of open/fcntl(O_DIRECT) to minimize OS caching
* Cache last known per-tuple offsets to speed long tuple access
While column offsets are already cached, the cache cannot be used if
the tuple has NULLs or TOAST columns because these values change the
typical column offsets. Caching of such offsets could be accomplished
by remembering the previous offsets and reusing them if the row has
the same pattern.
* Speed up COUNT(*)
We could use a fixed row count and a +/- count to follow MVCC
visibility rules, or a single cached value could be used and
invalidated if anyone modifies the table.
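A hedged user-level sketch of maintaining a cached count with triggers
(all names are hypothetical; concurrent writers serialize on the
counter row, and the count does not follow MVCC visibility rules):
    CREATE TABLE tab_rowcount (n bigint NOT NULL);
    INSERT INTO tab_rowcount SELECT COUNT(*) FROM tab;
    CREATE FUNCTION tab_count_trig() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE tab_rowcount SET n = n + 1;
        ELSE
            UPDATE tab_rowcount SET n = n - 1;
        END IF;
        RETURN NULL;
    END;' LANGUAGE plpgsql;
    CREATE TRIGGER tab_count AFTER INSERT OR DELETE ON tab
        FOR EACH ROW EXECUTE PROCEDURE tab_count_trig();
    SELECT n FROM tab_rowcount;  -- instead of SELECT COUNT(*) FROM tab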
* Consider automatic caching of queries at various levels:
o Parsed query tree
o Query execute plan
o Query results
Vacuum
======
* Improve speed with indexes
For large table adjustments during vacuum, it is faster to reindex
rather than update the index.
* Reduce lock time by moving tuples with read lock, then write
lock and truncate table
Moved tuples are invisible to other backends so they don't require a
write lock. However, the read lock promotion to write lock could lead
to deadlock situations.
* Allow free space map to be auto-sized or warn when it is too small
The free space map is in shared memory so resizing is difficult.
* Maintain a map of recently-expired rows
This allows vacuum to reclaim free space without requiring
a sequential scan.
* Auto-vacuum
o Move into the backend code
o Scan the buffer cache to find free space or use background writer
o Use free-space map information to guide refilling
Locking
=======
* Make locking of shared data structures more fine-grained
This requires that more locks be acquired but this would reduce lock
contention, improving concurrency.
* Add code to detect an SMP machine and handle spinlocks accordingly
from distributed.net, http://www1.distributed.net/source,
in client/common/cpucheck.cpp
On SMP machines, it is possible that locks might be released shortly,
while on non-SMP machines, the backend should sleep so the process
holding the lock can complete and release it.
* Improve SMP performance on i386 machines
i386-based SMP machines can generate excessive context switching
caused by lock failure in high concurrency situations. This may be
caused by CPU cache line invalidation inefficiencies.
* Research use of sched_yield() for spinlock acquisition failure
Startup Time
============
* Experiment with multi-threaded backend [thread]
This would prevent the overhead associated with process creation. Most
operating systems have trivial process creation time compared to
database startup overhead, but a few operating systems (Win32,
Solaris) might benefit from threading.
* Add connection pooling
It is unclear if this should be done inside the backend code or done
by something external like pgpool. The passing of file descriptors to
existing backends is one of the difficulties with a backend approach.
Write-Ahead Log
===============
* Eliminate need to write full pages to WAL before page modification [wal]
Currently, to protect against partial disk page writes, we write the
full page images to WAL before they are modified so we can correct any
partial page writes during recovery. These pages can also be
eliminated from point-in-time archive files.
* Reduce WAL traffic so only modified values are written rather than
entire rows (?)
* Turn off after-change writes if fsync is disabled
If fsync is off, there is no purpose in writing full pages to WAL.
* Add WAL index reliability improvement to non-btree indexes
* Allow the pg_xlog directory location to be specified during initdb
with a symlink back to the /data location
* Allow WAL information to recover corrupted pg_controldata
* Find a way to reduce rotational delay when repeatedly writing
last WAL page
Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.
* Allow buffered WAL writes and fsync
Instead of guaranteeing recovery of all committed transactions, this
would provide improved performance by delaying WAL writes and fsync
so an abrupt operating system restart might lose a few seconds of
committed transactions but still be consistent. We could perhaps
remove the 'fsync' parameter (which results in an inconsistent
database) in favor of this capability.
* Eliminate WAL logging for CREATE TABLE AS when not doing WAL archiving
Optimizer / Executor
====================
* Add missing optimizer selectivities for date, r-tree, etc
* Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
index using a sequential scan for highest/lowest values
If only one value is needed, there is no need to sort the entire
table. Instead a sequential scan could get the matching value.
* Precompile SQL functions to avoid overhead
* Create utility to compute accurate random_page_cost value
* Improve ability to display optimizer analysis using OPTIMIZER_DEBUG
* Have EXPLAIN ANALYZE highlight poor optimizer estimates
* Use CHECK constraints to influence optimizer decisions
CHECK constraints contain information about the distribution of values
within the table. This is also useful for implementing subtables where
a table's content is distributed across several subtables.
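A hedged illustration with subtables (all names hypothetical):
    CREATE TABLE log (logdate date, msg text);
    CREATE TABLE log_2004_q1 (
        CHECK (logdate >= '2004-01-01' AND logdate < '2004-04-01')
    ) INHERITS (log);
    -- with this item, the optimizer could prove log_2004_q1 cannot
    -- contain matching rows and skip scanning it:
    SELECT * FROM log WHERE logdate = '2004-05-17';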
* Consider using hash buckets to do DISTINCT, rather than sorting
This would be beneficial when there are few distinct values.
Miscellaneous
=============
* Do async I/O for faster random read-ahead of data
Async I/O allows multiple I/O requests to be sent to the disk with
results coming back asynchronously.
* Use mmap() rather than SYSV shared memory, or use it to write WAL files (?)
This would remove the requirement for SYSV SHM but would introduce
portability issues. Anonymous mmap (or mmap to /dev/zero) is required
to prevent I/O overhead.
* Consider mmap()'ing files into a backend?
Doing I/O to large tables would consume a lot of address space or
require frequent mapping/unmapping. Extending the file also causes
mapping problems that might require mapping only individual pages,
leading to thousands of mappings. Another problem is that there is no
way to _prevent_ I/O to disk from the dirty shared buffers so changes
could hit disk before WAL is written.
* Add a script to ask system configuration questions and tune postgresql.conf
* Use a phantom command counter for nested subtransactions to reduce
per-tuple overhead
* Consider parallel processing a single query
This would involve using multiple threads or processes to do optimization,
sorting, or execution of a single query. The major advantage of such a
feature would be to allow multiple CPUs to work together to process a
single query.
* Research the use of larger page sizes
Source Code
===========
* Add use of 'const' for variables in source tree
* Rename some /contrib modules from pg* to pg_*
* Move some things from /contrib into main tree
* Move some /contrib modules out to their own project sites
* Remove warnings created by -Wcast-align
* Move platform-specific ps status display info from ps_status.c to ports
* Add optional CRC checksum to heap and index pages
* Improve documentation to build only interfaces (Marc)
* Remove or relicense modules that are not under the BSD license, if possible
* Remove memory/file descriptor freeing before ereport(ERROR)
* Acquire lock on a relation before building a relcache entry for it
* Promote debug_query_string into a server-side function current_query()
* Allow the identifier length to be increased via a configure option
* Remove Win32 rename/unlink looping if unnecessary
* Remove kerberos4 from source tree?
* Allow cross-compiling by generating the zic database on the target system
* Improve NLS maintenance of libpgport messages linked onto applications
* Allow ecpg to work with MSVC and BCC
* Win32
o Remove per-backend parameter file and move into shared memory
o Remove configure.in check for link failure when cause is found
o Remove readdir() errno patch when runtime/mingwex/dirent.c rev
1.4 is released
o Remove psql newline patch when we find out why mingw outputs an
extra newline
o Allow psql to use readline once non-US code pages work with
backslashes
o Re-enable timezone output on log_line_prefix '%t' when a
shorter timezone string is available
o Improve dlerror() reporting string
* Wire Protocol Changes
o Allow dynamic character set handling
o Add decoded type, length, precision
o Use compression?
o Update clients to use data types, typmod, schema.table.column names
of result sets using new query protocol
---------------------------------------------------------------------------
Developers who have claimed items are:
--------------------------------------
* Alvaro is Alvaro Herrera <alvherre@dcc.uchile.cl>
* Andrew is Andrew Dunstan <andrew@dunslane.net>
* Bruce is Bruce Momjian <pgman@candle.pha.pa.us> of Software Research Assoc.
* Christopher is Christopher Kings-Lynne <chriskl@familyhealth.com.au> of
Family Health Network
* Claudio is Claudio Natoli <claudio.natoli@memetrics.com>
* D'Arcy is D'Arcy J.M. Cain <darcy@druid.net> of The Cain Gang Ltd.
* Fabien is Fabien Coelho <coelho@cri.ensmp.fr>
* Gavin is Gavin Sherry <swm@linuxworld.com.au> of Alcove Systems Engineering
* Greg is Greg Sabino Mullane <greg@turnstep.com>
* Hiroshi is Hiroshi Inoue <Inoue@tpf.co.jp>
* Jan is Jan Wieck <JanWieck@Yahoo.com> of Afilias, Inc.
* Joe is Joe Conway <mail@joeconway.com>
* Karel is Karel Zak <zakkr@zf.jcu.cz>
* Magnus is Magnus Hagander <mha@sollentuna.net>
* Marc is Marc Fournier <scrappy@hub.org> of PostgreSQL, Inc.
* Matthew is Matthew T. O'Connor <matthew@zeut.net>
* Michael is Michael Meskes <meskes@postgresql.org> of Credativ
* Neil is Neil Conway <neilc@samurai.com>
* Oleg is Oleg Bartunov <oleg@sai.msu.su>
* Peter is Peter Eisentraut <peter_e@gmx.net>
* Philip is Philip Warner <pjw@rhyme.com.au> of Albatross Consulting Pty. Ltd.
* Rod is Rod Taylor <pg@rbt.ca>
* Simon is Simon Riggs <simon@2ndquadrant.com>
* Stephan is Stephan Szabo <sszabo@megazone23.bigpanda.com>
* Tatsuo is Tatsuo Ishii <t-ishii@sra.co.jp> of Software Research Assoc.
* Tom is Tom Lane <tgl@sss.pgh.pa.us> of Red Hat