PostgreSQL TODO List
====================
Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
Last updated: Sun Feb 12 22:42:37 EST 2006

The most recent version of this document can be viewed at
http://www.postgresql.org/docs/faqs.TODO.html.

#A hyphen, "-", marks changes that will appear in the upcoming 8.2 release.#
#A percent sign, "%", marks items that are easier to implement.#

Bracketed items, "[]", have more detail.

This list contains all known PostgreSQL bugs and feature requests. If
you would like to work on an item, please read the Developer's FAQ
first.

Administration
==============

* %Remove behavior of postmaster -o
* -%Allow pooled connections to list all prepared statements

  This would allow an application inheriting a pooled connection to know
  the statements prepared in the current session.

* Allow major upgrades without dump/reload, perhaps using pg_upgrade
  [pg_upgrade]
* Check for unreferenced table files created by transactions that were
  in-progress when the server terminated abruptly
* Allow administrators to safely terminate individual sessions either
  via an SQL function or SIGTERM

  Lock table corruption following SIGTERM of an individual backend
  has been reported in 8.0. A possible cause was fixed in 8.1, but
  it is unknown whether other problems exist. This item mostly
  requires additional testing rather than writing new code.

* %Set proper permissions on non-system schemas during db creation

  Currently all schemas are owned by the super-user because they are
  copied from the template1 database.

* Support table partitioning that allows a single table to be stored
  in subtables that are partitioned based on the primary key or a WHERE
  clause
* Add function to report the time of the most recent server reload
* Allow statistics collector information to be pulled from the collector
  process directly, rather than requiring the collector to write a
  filesystem file twice a second?

* Improve replication solutions

  o Load balancing

    You can use any of the master/slave replication servers to use a
    standby server for data warehousing. To allow read/write queries to
    multiple servers, you need multi-master replication like pgcluster.

  o Allow replication over unreliable or non-persistent links

* Configuration files

  o %Add "include file" functionality in postgresql.conf
  o %Allow commenting of variables in postgresql.conf to restore them
    to defaults

    Currently, if a variable is commented out, it keeps the
    previous uncommented value until the server is restarted.

  o %Allow pg_hba.conf settings to be controlled via SQL

    This would add a function to load the SQL table from
    pg_hba.conf, and one to write its contents to the flat file.
    The table should have a line number that is a float so rows
    can be inserted between existing rows, e.g. row 2.5 goes
    between row 2 and row 3.
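    As a sketch, such an interface might look like this (the function
    names, table name, and columns here are hypothetical, not an
    agreed design):

      SELECT pg_hba_load();    -- hypothetical: populate table pg_hba
                               -- from the flat file
      INSERT INTO pg_hba (line_number, type, database, user_name,
                          address, method)
          VALUES (2.5, 'host', 'all', 'all', '10.0.0.0/8', 'md5');
      SELECT pg_hba_write();   -- hypothetical: write the table back
                               -- to pg_hba.conf
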
  o Allow pg_hba.conf to specify host names along with IP addresses

    Host name lookup could occur when the postmaster reads the
    pg_hba.conf file, or when the backend starts. Another
    solution would be to do a reverse lookup of the connection IP and
    check that host name against the host names in pg_hba.conf.
    We could also then check that the host name maps to the IP
    address.

  o %Allow postgresql.conf file values to be changed via an SQL
    API, perhaps using SET GLOBAL
  o Allow the server to be stopped/restarted via an SQL API
  o -Issue a warning if a change-on-restart-only postgresql.conf value
    is modified and the server config files are reloaded
  o Mark change-on-restart-only values in postgresql.conf

* Tablespaces

  o Allow a database in tablespace t1 with tables created in
    tablespace t2 to be used as a template for a new database created
    with default tablespace t2

    All objects in the default database tablespace must have default
    tablespace specifications. This is because new databases are
    created by copying directories. If you mix default tablespace
    tables and tablespace-specified tables in the same directory,
    creating a new database from such a mixed directory would create a
    new database with tables that had incorrect explicit tablespaces.
    To fix this would require modifying pg_class in the newly copied
    database, which we don't currently do.

  o Allow reporting of which objects are in which tablespaces

    This item is difficult because a tablespace can contain objects
    from multiple databases. There is a server-side function that
    returns the databases which use a specific tablespace, so this
    requires a tool that will call that function and connect to each
    database to find the objects in each database for that tablespace.

  o %Add a GUC variable to control the tablespace for temporary objects
    and sort files

    It could start with a random tablespace from a supplied list and
    cycle through the list.

  o Allow WAL replay of CREATE TABLESPACE to work when the directory
    structure on the recovery computer is different from the original
  o Allow per-tablespace quotas

* Point-In-Time Recovery (PITR)

  o Allow point-in-time recovery to archive partially filled
    write-ahead logs [pitr]

    Currently only full WAL files are archived. This means that the
    most recent transactions aren't available for recovery in case
    of a disk failure. Archiving partially filled files could be
    triggered by a user command or a timer.

  o Automatically force archiving of partially-filled WAL files when
    pg_stop_backup() is called or the server is stopped

    Doing this will allow administrators to know more easily when
    the archive contains all the files needed for point-in-time
    recovery.

  o %Create dump tool for write-ahead logs for use in determining
    transaction id for point-in-time recovery
  o Allow a warm standby system to also allow read-only statements
    [pitr]

    This is useful for checking PITR recovery.

  o Allow the PITR process to be debugged and data examined

Monitoring
==========

* Allow server log information to be output as INSERT statements

  This would allow server log information to be easily loaded into
  a database for analysis.

* %Add ability to monitor the use of temporary sort files
* Allow server logs to be remotely read and removed using SQL commands
* Allow protocol-level BIND parameter values to be logged

Data Types
==========

* Improve the MONEY data type

  Change the MONEY data type to use DECIMAL internally, with special
  locale-aware output formatting.

* Change NUMERIC to enforce the maximum precision
* Add NUMERIC division operator that doesn't round?

  Currently NUMERIC _rounds_ the result to the specified precision.
  This means division can return a result that, multiplied by the
  divisor, is greater than the dividend, e.g. this returns a value > 10:

    SELECT (10::numeric(2,0) / 6::numeric(2,0))::numeric(2,0) * 6;

  The positive modulus result returned by NUMERICs might be considered
  inaccurate, in one sense.

* %Disallow changing default expression of a SERIAL column
* Fix data types where equality comparison isn't intuitive, e.g. box
* -Zero unmasked bits in conversion from INET cast to CIDR
* -Prevent INET cast to CIDR from dropping netmask, SELECT '1.1.1.1'::inet::cidr
* -Allow INET + INT8 to increment the host part of the address or
  throw an error on overflow
* %Add 'tid != tid' operator for use in corruption recovery
* Allow user-defined types to specify a type modifier at table creation
  time

* Dates and Times

  o Allow infinite dates just like infinite timestamps
  o Merge hardwired timezone names with the TZ database; allow either
    kind everywhere a TZ name is currently taken
  o Allow customization of the known set of TZ names (generalize the
    present australian_timezones hack)
  o Allow TIMESTAMP WITH TIME ZONE to store the original timezone
    information, either zone name or offset from UTC [timezone]

    If the TIMESTAMP value is stored with a time zone name, interval
    computations should adjust based on the time zone rules.

  o Fix SELECT '0.01 years'::interval, '0.01 months'::interval
  o Fix SELECT INTERVAL '1' MONTH
  o Add a GUC variable to allow output of interval values in ISO8601
    format
  o Improve timestamptz subtraction to be DST-aware

    Currently, subtracting one date from another that crosses a
    daylight savings time adjustment can return '1 day 1 hour', but
    adding that back to the first date returns a time one hour in
    the future. This is caused by the adjustment of '25 hours' to
    '1 day 1 hour', and '1 day' is the same time the next day, even
    if daylight savings adjustments are involved.
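    The anomaly can be demonstrated across the US spring-forward
    boundary of April 2, 2006 (the timezone setting and dates are
    illustrative):

      SET timezone = 'US/Eastern';
      -- 25 elapsed hours across the spring-forward boundary:
      SELECT '2006-04-02 14:00'::timestamptz
           - '2006-04-01 12:00'::timestamptz;
      -- adding the resulting interval back does not recover the
      -- original endpoint; the two disagree by one hour, because
      -- '+1 day' keeps the same local clock time:
      SELECT '2006-04-01 12:00'::timestamptz + interval '1 day 1 hour';
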
  o Fix interval display to support values exceeding 2^31 hours
  o Add overflow checking to timestamp and interval arithmetic
  o Add ISO INTERVAL handling

    o Add support for day-time syntax, INTERVAL '1 2:03:04' DAY TO
      SECOND
    o Add support for year-month syntax, INTERVAL '50-6' YEAR TO MONTH
    o For syntax that isn't uniquely ISO or PG syntax, like '1:30' or
      '1', treat as ISO if there is a range specification clause,
      and as PG if no clause is present, e.g. interpret
      '1:30' MINUTE TO SECOND as '1 minute 30 seconds', and
      interpret '1:30' as '1 hour, 30 minutes'
    o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS
      INTERVAL MONTH), and this should return '12 months'
    o Round or truncate values to the requested precision, e.g.
      INTERVAL '11 months' AS YEAR should return one or zero
    o Support precision, CREATE TABLE foo (a INTERVAL MONTH(3))

* Arrays

  o -Allow NULLs in arrays
  o Delay resolution of array expression's data type so assignment
    coercion can be performed on empty array expressions

* Binary Data

  o Improve vacuum of large objects, like /contrib/vacuumlo?
  o Add security checking for large objects
  o Auto-delete large objects when referencing row is deleted

    /contrib/lo offers this functionality.

  o Allow read/write into TOAST values like large objects

    This requires the TOAST column to be stored EXTERNAL.

Functions
=========

* Allow INET subnet tests using non-constants to be indexed
* Add transaction_timestamp(), statement_timestamp(), clock_timestamp()
  functionality

  Currently CURRENT_TIMESTAMP returns the start time of the current
  transaction, and gettimeofday() returns the wallclock time. This will
  make time reporting more consistent and will allow reporting of
  the statement start time.

* %Add pg_get_acldef(), pg_get_typedefault(), pg_get_attrdef(),
  pg_get_tabledef(), pg_get_domaindef(), pg_get_functiondef()
* -Allow to_char() to print localized month names
* Allow functions to have a schema search path specified at creation time
* Allow substring/replace() to get/set bit values
* Allow to_char() on interval values to accumulate the highest unit
  requested

  Some special format flag would be required to request such
  accumulation. Such functionality could also be added to EXTRACT.
  Prevent accumulation that crosses the month/day boundary because of
  the uneven number of days in a month.

  o to_char(INTERVAL '1 hour 5 minutes', 'MI') => 65
  o to_char(INTERVAL '43 hours 20 minutes', 'MI') => 2600
  o to_char(INTERVAL '43 hours 20 minutes', 'WK:DD:HR:MI') => 0:1:19:20
  o to_char(INTERVAL '3 years 5 months', 'MM') => 41

* -Add sleep() function, remove from regress.c
* Allow user-defined functions returning a domain value to enforce domain
  constraints
* Add SPI_gettypmod() to return the typmod for a TupleDesc

Multi-Language Support
======================

* Add NCHAR (as distinguished from ordinary varchar)
* Allow locale to be set at database creation

  Currently locale can only be set during initdb. No global tables have
  locale-aware columns. However, the database template used during
  database creation might have locale-aware indexes. The indexes would
  need to be reindexed to match the new locale.

* Allow encoding on a per-column basis

  Right now only one encoding is allowed per database.

* Support multiple simultaneous character sets, per SQL92
* Improve UTF8 combined character handling?
* Add octet_length_server() and octet_length_client()
* Make octet_length_client() the same as octet_length()?
* Fix problems with wrong runtime encoding conversion for NLS message files

Views / Rules
=============

* %Automatically create rules on views so they are updateable, per SQL99

  We can only auto-create rules for simple views. For more complex
  cases users will still have to write rules.

* Add the functionality for WITH CHECK OPTION clause of CREATE VIEW
* Allow NOTIFY in rules involving conditionals
* Allow VIEW/RULE recompilation when the underlying tables change

  Another issue is whether underlying table changes should be reflected
  in the view, e.g. should SELECT * show additional columns if they
  are added after the view is created.

SQL Commands
============

* Change LIMIT/OFFSET and FETCH/MOVE to use int8
* Add CORRESPONDING BY to UNION/INTERSECT/EXCEPT
* Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY
* %Allow SET CONSTRAINTS to be qualified by schema/table name
* %Allow TRUNCATE ... CASCADE/RESTRICT

  This is like DELETE CASCADE, but truncates.

* %Add a separate TRUNCATE permission

  Currently only the owner can TRUNCATE a table because triggers are not
  called, and the table is locked in exclusive mode.

* Allow PREPARE of cursors
* Allow PREPARE to automatically determine parameter types based on the SQL
  statement
* Allow finer control over the caching of prepared query plans

  Currently, queries prepared via the libpq API are planned on first
  execute using the supplied parameters --- allow SQL PREPARE to do the
  same. Also, allow control over replanning prepared queries either
  manually or automatically when statistics for execute parameters
  differ dramatically from those used during planning.

* Allow LISTEN/NOTIFY to store info in memory rather than tables?

  Currently LISTEN/NOTIFY information is stored in pg_listener. Storing
  such information in memory would improve performance.

* Add optional textual message to NOTIFY

  This would allow an informational message to be added to the notify
  message, perhaps indicating the row modified or other custom
  information.

* Add a GUC variable to warn about non-standard SQL usage in queries
* Add SQL-standard MERGE command, typically used to merge two tables
  [merge]

  This is similar to UPDATE, then for unmatched rows, INSERT.
  Whether concurrent access allows modifications which could cause
  row loss is implementation-defined.

* Add REPLACE or UPSERT command that does UPDATE, or on failure, INSERT
  [merge]

  To implement this cleanly requires that the table have a unique index
  so duplicate checking can be easily performed. It is possible to
  do it without a unique index if we require the user to LOCK the table
  before the MERGE.

* Add NOVICE output level for helpful messages like automatic sequence/index
  creation
* -Add COMMENT ON for all cluster global objects (roles, databases
  and tablespaces)
* -Make row-wise comparisons work per SQL spec

  Right now, '(a, b) < (1, 2)' is processed as 'a < 1 and b < 2', but
  the SQL standard requires it to be processed as a column-by-column
  comparison, so the proper comparison is '(a < 1) OR (a = 1 AND b < 2)'.
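  A concrete case where the two interpretations differ is the row
  (1, 1) compared against (1, 2):

    -- current behavior:  1 < 1 AND 1 < 2            => false
    -- SQL-spec behavior: (1 < 1) OR (1 = 1 AND 1 < 2) => true
    SELECT (1 < 1 AND 1 < 2) AS current_result,
           ((1 < 1) OR (1 = 1 AND 1 < 2)) AS spec_result;
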
* Add RESET CONNECTION command to reset all session state

  This would include resetting of all variables (RESET ALL), dropping of
  temporary tables, removing any NOTIFYs, cursors, open transactions,
  prepared queries, currval()s, etc. This could be used for connection
  pooling. We could also change RESET ALL to have this functionality.
  The difficulty of this feature is allowing RESET ALL to not affect
  changes made by the interface driver for its internal use. One idea
  is for this to be a protocol-only feature. Another approach is to
  notify the protocol when a RESET CONNECTION command is used.

* Add GUC to issue notice about statements that use unjoined tables
* Allow EXPLAIN to identify tables that were skipped because of
  constraint_exclusion
* Allow EXPLAIN output to be more easily processed by scripts
* Eventually enable escape_string_warning and standard_conforming_strings
* Simplify dropping roles that have objects in several databases
* Allow COMMENT ON to accept an expression rather than just a string
* Allow the count returned by SELECT, etc. to be represented as an int64
  to allow a higher range of values
* Make CLUSTER preserve recently-dead tuples per MVCC requirements
* Add SQL99 WITH clause to SELECT
* Add SQL99 WITH RECURSIVE to SELECT

* CREATE

  o Allow CREATE TABLE AS to determine column lengths for complex
    expressions like SELECT col1 || col2
  o Use more reliable method for CREATE DATABASE to get a consistent
    copy of db?
  o Add ON COMMIT capability to CREATE TABLE AS ... SELECT

* UPDATE

  o Allow UPDATE to handle complex aggregates [update]?
  o -Allow an alias to be provided for the target table in
    UPDATE/DELETE (Neil)
  o Allow UPDATE tab SET ROW (col, ...) = (...) for updating multiple
    columns

* ALTER

  o %Have ALTER TABLE RENAME rename SERIAL sequence names
  o Add ALTER DOMAIN to modify the underlying data type
  o %Allow ALTER TABLE ... ALTER CONSTRAINT ... RENAME
  o %Allow ALTER TABLE to change constraint deferrability and actions
  o Add missing object types for ALTER ... SET SCHEMA
  o Allow ALTER TABLESPACE to move to different directories
  o Allow databases to be moved to different tablespaces
  o Allow moving system tables to other tablespaces, where possible

    Currently non-global system tables must be in the default database
    tablespace. Global system tables can never be moved.

  o %Disallow dropping of an inherited constraint
  o %Prevent child tables from altering or dropping constraints
    like CHECK that were inherited from the parent table
  o Have ALTER INDEX update the name of a constraint using that index
  o Add ALTER TABLE RENAME CONSTRAINT, update index name also

* CLUSTER

  o Automatically maintain clustering on a table

    This might require some background daemon to maintain clustering
    during periods of low usage. It might also require tables to be only
    partially filled for easier reorganization. Another idea would
    be to create a merged heap/index data file so an index lookup would
    automatically access the heap data too. A third idea would be to
    store heap rows in hashed groups, perhaps using a user-supplied
    hash function.

  o %Add default clustering to system tables

    To do this, determine the ideal cluster index for each system
    table and set the cluster setting during initdb.

* COPY

  o Allow COPY to report error lines and continue

    This requires the use of a savepoint before each COPY line is
    processed, with ROLLBACK on COPY failure.

  o %Have COPY return the number of rows loaded/unloaded?
  o Allow COPY on a newly-created table to skip WAL logging

    On crash recovery, the table involved in the COPY would
    be removed or have its heap and index files truncated. One
    issue is that no other backend should be able to add to
    the table at the same time, which is something that is
    currently allowed.

  o Allow COPY to output from views

    Another idea would be to allow actual SELECT statements in a COPY.

* GRANT/REVOKE

  o Allow column-level privileges
  o %Allow GRANT/REVOKE permissions to be applied to all schema objects
    with one command

    The proposed syntax is:

      GRANT SELECT ON ALL TABLES IN public TO phpuser;
      GRANT SELECT ON NEW TABLES IN public TO phpuser;

  o Allow GRANT/REVOKE permissions to be inherited by objects based on
    schema permissions
  o Allow SERIAL sequences to inherit permissions from the base table?

* CURSOR

  o Allow UPDATE/DELETE WHERE CURRENT OF cursor

    This requires using the row ctid to map cursor rows back to the
    original heap row. This becomes more complicated if WITH HOLD cursors
    are to be supported because WITH HOLD cursors have a copy of the row
    and no FOR UPDATE lock.

  o Prevent DROP TABLE from dropping a row referenced by its own open
    cursor?
  o -Allow pooled connections to list all open WITH HOLD cursors

    Because WITH HOLD cursors exist outside transactions, this allows
    them to be listed so they can be closed.

* INSERT

  o Allow INSERT/UPDATE of the system-generated oid value for a row
  o Allow INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..)
  o Allow INSERT/UPDATE ... RETURNING new.col or old.col

    This is useful for returning the auto-generated key for an INSERT.
    One complication is how to handle rules that run as part of
    the insert.
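    Under the proposed syntax, fetching a generated key might look
    like this (illustrative only; the RETURNING clause is not yet
    implemented, and the table is hypothetical):

      CREATE TABLE orders (id SERIAL PRIMARY KEY, item text);
      INSERT INTO orders (item) VALUES ('widget') RETURNING new.id;
      -- would return the generated key directly, avoiding a separate
      -- SELECT currval('orders_id_seq')
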
* SHOW/SET

  o Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM
    ANALYZE, and CLUSTER
  o Add SET PATH for schemas?

    This is basically the same as SET search_path.
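    For comparison, the existing search_path interface (the schema
    name is illustrative):

      SET search_path TO myschema, public;
      SHOW search_path;
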
* Server-Side Languages

  o Fix PL/pgSQL RENAME to work on variables other than OLD/NEW
  o Allow function parameters to be passed by name,
    get_employee_salary(emp_id => 12345, tax_year => 2001)
  o Add Oracle-style packages
  o Add table function support to pltcl, plpython
  o Add capability to create and call PROCEDURES
  o Allow PL/pgSQL to handle %TYPE arrays, e.g. tab.col%TYPE[]
  o Allow function argument names to be statements from PL/PgSQL
  o Add MOVE to PL/pgSQL
  o Add support for polymorphic arguments and return types to
    languages other than PL/PgSQL
  o Add support for OUT and INOUT parameters to languages other
    than PL/PgSQL
  o Add single-step debugging of PL/PgSQL functions
  o Allow PL/PgSQL to support WITH HOLD cursors

Clients
=======

* -Have initdb set the input DateStyle (MDY or DMY) based on locale
* Have pg_ctl look at PGHOST in case it is a socket directory?
* Allow pg_ctl to work properly with configuration files located outside
  the PGDATA directory

  pg_ctl cannot read the pid file because it isn't located in the
  config directory but in the PGDATA directory. The solution is to
  allow pg_ctl to read and understand postgresql.conf to find the
  data_directory value.

* psql

  o Have psql show current values for a sequence
  o Move psql backslash database information into the backend, use
    mnemonic commands? [psql]

    This would allow non-psql clients to pull the same information out
    of the database as psql.

  o Fix psql's display of schema information (Neil)
  o Allow psql \pset boolean variables to be set to fixed values, rather
    than toggled
  o Consistently display privilege information for all objects in psql
  o -Improve psql's handling of multi-line statements

    Currently, while \e saves a single statement as one entry, interactive
    statements are saved one line at a time. Ideally all statements
    would be saved like \e does.

  o -Allow multi-line column values to align in the proper columns

    If the second output column value is 'a\nb', the 'b' should appear
    in the second display column, rather than the first column as it
    does now.

  o Display IN, INOUT, and OUT parameters in \df+

    This probably requires psql to output newlines in the proper
    column, which is already on the TODO list.

  o Add auto-expanded mode so expanded output is used if the row
    length is wider than the screen width

    Consider using auto-expanded mode for backslash commands like \df+.

  o Prevent tab completion of SET TRANSACTION from querying the
    database and thereby preventing the transaction isolation
    level from being set

    Currently, SET <tab> causes a database lookup to check all
    supported session variables. This query causes problems
    because setting the transaction isolation level must be the
    first statement of a transaction.

* pg_dump

  o %Have pg_dump use multi-statement transactions for INSERT dumps
  o %Allow pg_dump to use multiple -t and -n switches [pg_dump]
  o %Add dumping of comments on index columns and composite type columns
  o %Add full object name to the tag field, e.g. for operators we need
    '=(integer, integer)', instead of just '='
  o Add pg_dumpall custom format dumps?
  o %Add CSV output format
  o Update pg_dump and psql to use the new COPY libpq API (Christopher)
  o Remove unnecessary function pointer abstractions in pg_dump source
    code
  o Allow selection of individual object(s) of all types, not just
    tables
  o In a selective dump, allow dumping of an object and all its
    dependencies
  o Add options like pg_restore -l and -L to pg_dump
  o Stop dumping CASCADE on DROP TYPE commands in clean mode
  o Allow pg_dump --clean to drop roles that own objects or have
    privileges
  o Add -f to pg_dumpall

* ecpg

  o Docs

    Document differences between ecpg and the SQL standard and
    information about the Informix-compatibility module.

  o Solve cardinality > 1 for input descriptors / variables?
  o Add a semantic check level, e.g. check if a table really exists
  o Fix handling of DB attributes that are arrays
  o Use backend PREPARE/EXECUTE facility for ecpg where possible
  o Implement SQLDA
  o Fix nested C comments
  o %sqlwarn[6] should be 'W' if the PRECISION or SCALE value specified
  o Make SET CONNECTION thread-aware, non-standard?
  o Allow multidimensional arrays
  o Add internationalized message strings

* libpq

  o Add a function to support Parse/DescribeStatement capability
  o Add PQescapeIdentifier()
  o Prevent PQfnumber() from lowercasing the unquoted column name

    PQfnumber() should never have been doing lowercasing, but
    historically it has, so we need a way to prevent it.

  o Allow statement results to be automatically batched to the client

    Currently, all statement results are transferred to the libpq
    client before libpq makes the results available to the
    application. This feature would allow the application to make
    use of the first result rows while the rest are transferred, or
    held on the server waiting for them to be requested by libpq.
    One complexity is that a statement like SELECT 1/col could error
    out mid-way through the result set.

Referential Integrity
=====================

* Add MATCH PARTIAL referential integrity
* Add deferred trigger queue file

  Right now all deferred trigger information is stored in backend
  memory. This could exhaust memory for very large trigger queues.
  This item involves dumping large queues into files.

* Change foreign key constraint for array -> element to mean element
  in array?
* Allow DEFERRABLE UNIQUE constraints?
* Allow triggers to be disabled in only the current session

  This is currently possible by starting a multi-statement transaction,
  modifying the system tables, performing the desired SQL, restoring the
  system tables, and committing the transaction. ALTER TABLE ...
  TRIGGER requires a table lock so it is not ideal for this usage.

* With disabled triggers, allow pg_dump to use ALTER TABLE ADD FOREIGN KEY

  If the dump is known to be valid, allow foreign keys to be added
  without revalidating the data.

* Allow statement-level triggers to access modified rows
* Support triggers on columns (Greg Sabino Mullane)
* Enforce referential integrity for system tables
* Allow AFTER triggers on system tables

  System tables are modified in many places in the backend without going
  through the executor, and therefore without causing triggers to fire.
  To complete this item, the functions that modify system tables will
  have to fire triggers.

Dependency Checking
===================

* Flush cached query plans when the dependent objects change,
  when the cardinality of parameters changes dramatically, or
  when new ANALYZE statistics are available

  A more complex solution would be to save multiple plans for different
  cardinalities and use the appropriate plan based on the EXECUTE values.

* Track dependencies in function bodies and recompile/invalidate

  This is particularly important for references to temporary tables
  in PL/PgSQL because PL/PgSQL caches query plans. The only workaround
  in PL/PgSQL is to use EXECUTE. One complexity is that a function
  might itself drop and recreate dependent tables, causing it to
  invalidate its own query plan.

Exotic Features
===============

* Add pre-parsing phase that converts non-ISO syntax to supported
  syntax

  This could allow SQL written for other databases to run without
  modification.

* Allow plug-in modules to emulate features from other databases
* SQL*Net listener that makes PostgreSQL appear as an Oracle database
  to clients
* Allow statements across databases or servers with transaction
  semantics

  This can be done using dblink and two-phase commit.

* Add the features of packages

  o Make private objects accessible only to objects in the same schema
  o Allow current_schema.objname to access current schema objects
  o Add session variables
  o Allow nested schemas

Indexes
|
|
=======
|
|
|
|
* Allow inherited tables to inherit index, UNIQUE constraint, and primary
|
|
key, foreign key
|
|
* UNIQUE INDEX on base column not honored on INSERTs/UPDATEs from
|
|
inherited table: INSERT INTO inherit_table (unique_index_col) VALUES
|
|
(dup) should fail
|
|
|
|
The main difficulty with this item is the problem of creating an index
|
|
that can span more than one table.
|
|
|
|
* Allow SELECT ... FOR UPDATE on inherited tables
|
|
* Add UNIQUE capability to non-btree indexes
|
|
* Prevent index uniqueness checks when UPDATE does not modify the column
|
|
|
|
Uniqueness (index) checks are done when updating a column even if the
|
|
column is not modified by the UPDATE.
|
|
|
|
* Allow the creation of on-disk bitmap indexes which can be quickly
|
|
combined with other bitmap indexes
|
|
|
|
Such indexes could be more compact if there are only a few distinct values.
|
|
Such indexes can also be compressed. Keeping such indexes updated can be
|
|
costly.
|
|
|
|
* Allow use of indexes to search for NULLs
|
|
|
|
One solution is to create a partial index on an IS NULL expression.
|
|
|
|
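The partial-index workaround is already expressible today; a sketch with
hypothetical table and column names:

```sql
-- Index only the rows where col IS NULL; a query with the matching
-- "WHERE col IS NULL" predicate can then use an index scan even though
-- btree indexes cannot otherwise be used to search for NULLs.
CREATE INDEX tab_col_null_idx ON tab (col) WHERE col IS NULL;
-- SELECT * FROM tab WHERE col IS NULL;  -- can use tab_col_null_idx
```
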
* Allow accurate statistics to be collected on indexes with more than
one column or expression indexes, perhaps using per-index statistics
* Add fillfactor to control reserved free space during index creation
* Allow the creation of indexes with mixed ascending/descending specifiers
* Allow constraint_exclusion to work for UNIONs like it does for
inheritance, allow it to work for UPDATE and DELETE statements, and allow
it to be used for all statements with little performance impact
* Allow CREATE INDEX to take an additional parameter for use with
special index types
* Consider compressing indexes by storing key values duplicated in
several rows as a single index entry

This is difficult because it requires datatype-specific knowledge.

* GIST

o Add more GIST index support for geometric data types
o Allow GIST indexes to create certain complex index types, like
digital trees (see Aoki)

* Hash

o Pack hash index buckets onto disk pages more efficiently

Currently only one hash bucket can be stored on a page. Ideally
several hash buckets could be stored on a single page and greater
granularity used for the hash algorithm.

o Consider sorting hash buckets so entries can be found using a
binary search, rather than a linear scan
o In hash indexes, consider storing the hash value with or instead
of the key itself

o Add WAL logging for crash recovery
o Allow multi-column hash indexes

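The sorted-bucket idea amounts to replacing a linear scan with a binary
search within each bucket; a toy Python sketch (not PostgreSQL code, and
the hash values are made up):

```python
import bisect

# Within one hash bucket, keep entries sorted (here by hash code) so a
# probe costs O(log n) comparisons instead of a linear scan.
bucket = sorted([0x2f, 0x11, 0x9c, 0x44, 0x7a])  # hash codes in one bucket

def probe(bucket, hashval):
    """Binary-search the sorted bucket for hashval."""
    i = bisect.bisect_left(bucket, hashval)
    return i < len(bucket) and bucket[i] == hashval

print(probe(bucket, 0x44))  # True: present in the bucket
print(probe(bucket, 0x45))  # False: absent
```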

Fsync
=====

* Improve commit_delay handling to reduce fsync()
* Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options

Ideally this requires a separate test program that can be run
at initdb time or optionally later. Consider O_SYNC when
O_DIRECT exists.

* %Add an option to sync() before fsync()'ing checkpoint files
* Add program to test if fsync has a delay compared to non-fsync

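A toy version of such a test program, sketched in Python rather than the
C such a tool would presumably use: time a burst of small writes with
and without fsync() and compare.

```python
import os
import tempfile
import time

def timed_writes(n, do_fsync):
    """Write n small blocks to a scratch file, optionally fsync'ing
    after each write, and return the elapsed time in seconds."""
    fd, path = tempfile.mkstemp()
    start = time.time()
    try:
        for _ in range(n):
            os.write(fd, b"x" * 512)
            if do_fsync:
                os.fsync(fd)
    finally:
        os.close(fd)
        os.unlink(path)
    return time.time() - start

plain = timed_writes(200, do_fsync=False)
synced = timed_writes(200, do_fsync=True)
print(f"no fsync: {plain:.4f}s, fsync: {synced:.4f}s")
```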

Cache Usage
===========

* Allow free-behind capability for large sequential scans, perhaps using
posix_fadvise()

Posix_fadvise() can control both sequential/random file caching and
free-behind behavior, but it is unclear how the setting affects other
backends that also have the file open, and the feature is not supported
on all operating systems.

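The advice calls involved can be sketched from Python, where
os.posix_fadvise() wraps the same interface; availability is
platform-specific, hence the hasattr guard:

```python
import os
import tempfile

def advise_free_behind(fd, length):
    """Hint sequential access, then hint that an already-read range can
    be dropped from the OS cache (free-behind).  Returns True if the
    platform supports posix_fadvise."""
    if not hasattr(os, "posix_fadvise"):
        return False
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    os.posix_fadvise(fd, 0, length, os.POSIX_FADV_DONTNEED)
    return True

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 8192)
    applied = advise_free_behind(fd, 8192)
finally:
    os.close(fd)
    os.unlink(path)
```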
* Speed up COUNT(*)

We could use a fixed row count and a +/- count to follow MVCC
visibility rules, or a single cached value could be used and
invalidated if anyone modifies the table. Another idea is to
get a count directly from a unique index, but for this to be
faster than a sequential scan it must avoid access to the heap
to obtain tuple visibility information.

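The "fixed row count plus +/- count" idea can be sketched as a toy
Python model (not PostgreSQL code): a committed base count plus
per-transaction deltas, folded in at commit, so each transaction sees a
count consistent with its own uncommitted changes.

```python
class CountedTable:
    """Toy model of a cached COUNT(*) maintained under MVCC rules."""

    def __init__(self, base_count=0):
        self.base_count = base_count  # count visible to all transactions
        self.deltas = {}              # txid -> uncommitted +/- delta

    def insert(self, txid, n=1):
        self.deltas[txid] = self.deltas.get(txid, 0) + n

    def delete(self, txid, n=1):
        self.deltas[txid] = self.deltas.get(txid, 0) - n

    def count(self, txid):
        # A transaction sees the committed base plus its own delta only.
        return self.base_count + self.deltas.get(txid, 0)

    def commit(self, txid):
        self.base_count += self.deltas.pop(txid, 0)

t = CountedTable(base_count=100)
t.insert(txid=1, n=5)
print(t.count(txid=1))   # 105: includes its own uncommitted inserts
print(t.count(txid=2))   # 100: other transactions see the base count
t.commit(txid=1)
print(t.count(txid=2))   # 105: visible to everyone after commit
```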
* Add estimated_count(*) to return an estimate of COUNT(*)

This would use the planner ANALYZE statistics to return an estimated
count.

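The statistic such a function would return is already visible in
pg_class; a sketch (estimated_count() itself does not exist, and the
table name is hypothetical):

```sql
-- reltuples holds the ANALYZE/VACUUM-maintained row estimate the
-- planner uses; a hypothetical estimated_count(*) could return it.
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```
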
* Allow data to be pulled directly from indexes

Currently indexes do not have enough tuple visibility information
to allow data to be pulled from the index without also accessing
the heap. One way to allow this is to set a bit on index tuples
to indicate if a tuple is currently visible to all transactions
when the first valid heap lookup happens. This bit would have to
be cleared when a heap tuple is expired.

Another idea is to maintain a bitmap of heap pages where all rows
are visible to all backends, and allow index lookups to reference
that bitmap to avoid heap lookups, perhaps the same bitmap we might
add someday to determine which heap pages need vacuuming. Frequently
accessed bitmaps would have to be stored in shared memory. One 8k
page of bitmaps could track 512MB of heap pages.

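The 512MB figure follows directly from the page size; checking the
arithmetic:

```python
# One 8 kB page of bitmap bits, with one bit per 8 kB heap page,
# covers 8192 * 8 heap pages of 8192 bytes each.
PAGE_SIZE = 8192                      # bytes per page
bits_per_bitmap_page = PAGE_SIZE * 8  # 65,536 bits, one per heap page
heap_covered = bits_per_bitmap_page * PAGE_SIZE
print(heap_covered // (1024 * 1024))  # 512 (MB)
```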
* Consider automatic caching of statements at various levels:

o Parsed query tree
o Query execute plan
o Query results

* Allow sequential scans to take advantage of other concurrent
sequential scans, also called "Synchronised Scanning"

One possible implementation is to start sequential scans from the lowest
numbered buffer in the shared cache, and when reaching the end wrap
around to the beginning, rather than always starting sequential scans
at the start of the table.

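The wrap-around idea can be sketched in a few lines of Python (a toy
model, not PostgreSQL code): a scan that joins late starts wherever
another scan currently is, so both share recently cached pages.

```python
def scan_order(nblocks, start):
    """Visit every block of an nblocks-long table, beginning at
    `start` and wrapping around at the end."""
    return [(start + i) % nblocks for i in range(nblocks)]

# The first scan begins at block 0; a second scan arriving while the
# first is reading block 6 starts there instead of back at block 0.
print(scan_order(10, 0))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(scan_order(10, 6))  # [6, 7, 8, 9, 0, 1, 2, 3, 4, 5]
```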

Vacuum
======

* Improve speed with indexes

For large table adjustments during VACUUM FULL, it is faster to
reindex rather than update the index.

* Reduce lock time during VACUUM FULL by moving tuples with read lock,
then write lock and truncate table

Moved tuples are invisible to other backends so they don't require a
write lock. However, the read lock promotion to write lock could lead
to deadlock situations.

* Auto-fill the free space map by scanning the buffer cache or by
checking pages written by the background writer
* Create a bitmap of pages that need vacuuming

Instead of sequentially scanning the entire table, have the background
writer or some other process record pages that have expired rows, then
VACUUM can look at just those pages rather than the entire table. In
the event of a system crash, the bitmap would probably be invalidated.
One complexity is that index entries still have to be vacuumed, and
doing this without an index scan (by using the heap values to find the
index entry) might be slow and unreliable, especially for user-defined
index functions.

* -Add system view to show free space map contents

* Auto-vacuum

o Use free-space map information to guide refilling
o %Issue log message to suggest VACUUM FULL if a table is nearly
empty?
o Improve xid wraparound detection by recording per-table rather
than per-database
o Consider logging activity either to the logs or a system view


Locking
=======

* Fix priority ordering of read and write light-weight locks (Neil)


Startup Time Improvements
=========================

* Experiment with multi-threaded backend [thread]

This would prevent the overhead associated with process creation. Most
operating systems have trivial process creation time compared to
database startup overhead, but a few operating systems (Win32,
Solaris) might benefit from threading. Also explore the idea of
a single session using multiple threads to execute a statement faster.

* Add connection pooling

It is unclear if this should be done inside the backend code or done
by something external like pgpool. The passing of file descriptors to
existing backends is one of the difficulties with a backend approach.


Write-Ahead Log
===============

* Eliminate need to write full pages to WAL before page modification [wal]

Currently, to protect against partial disk page writes, we write
full page images to WAL before they are modified so we can correct any
partial page writes during recovery. These pages can also be
eliminated from point-in-time archive files.

o When off, write CRC to WAL and check file system blocks
on recovery

If CRC check fails during recovery, remember the page in case
a later CRC for that page properly matches.

o Write full pages during file system write and not when
the page is modified in the buffer cache

This allows most full page writes to happen in the background
writer. It might cause problems for applying WAL on recovery
into a partially-written page, but later the full page will be
replaced from WAL.

* Allow WAL traffic to be streamed to another server for stand-by
replication
* Reduce WAL traffic so only modified values are written rather than
entire rows?
* Allow the pg_xlog directory location to be specified during initdb
with a symlink back to the /data location
* Allow WAL information to recover corrupted pg_controldata
* Find a way to reduce rotational delay when repeatedly writing
last WAL page

Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.

* Allow buffered WAL writes and fsync

Instead of guaranteeing recovery of all committed transactions, this
would provide improved performance by delaying WAL writes and fsync
so an abrupt operating system restart might lose a few seconds of
committed transactions but still be consistent. We could perhaps
remove the 'fsync' parameter (which results in an inconsistent
database) in favor of this capability.

* Allow WAL logging to be turned off for a table, but the table
might be dropped or truncated during crash recovery [walcontrol]

Allow tables to bypass WAL writes and just fsync() dirty pages on
commit. This should be implemented using ALTER TABLE, e.g. ALTER
TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ]. Tables using
non-default logging should not use referential integrity with
default-logging tables. A table without dirty buffers during a
crash could perhaps avoid the drop/truncate.

* Allow WAL logging to be turned off for a table, but the table would
avoid being truncated/dropped [walcontrol]

To do this, only a single writer can modify the table, and writes
must happen only on new pages so the new pages can be removed during
crash recovery. Readers can continue accessing the table. Such
tables probably cannot have indexes. One complexity is the handling
of indexes on TOAST tables.


Optimizer / Executor
====================

* Improve selectivity functions for geometric operators
* Allow ORDER BY ... LIMIT # to select high/low value without sort or
index using a sequential scan for highest/lowest values

Right now, if no index exists, ORDER BY ... LIMIT # requires we sort
all values to return the high/low value. Instead, the idea is to do a
sequential scan to find the high/low value, thus avoiding the sort.
MIN/MAX already does this, but not for LIMIT > 1.

* Precompile SQL functions to avoid overhead
* Create utility to compute accurate random_page_cost value
* Improve ability to display optimizer analysis using OPTIMIZER_DEBUG
* Have EXPLAIN ANALYZE highlight poor optimizer estimates
* Consider using hash buckets to do DISTINCT, rather than sorting

This would be beneficial when there are few distinct values. This is
already used by GROUP BY.

* Log statements where the optimizer row estimates were dramatically
different from the number of rows actually found?


Miscellaneous Performance
=========================

* Do async I/O for faster random read-ahead of data

Async I/O allows multiple I/O requests to be sent to the disk with
results coming back asynchronously.

* Use mmap() rather than SYSV shared memory or to write WAL files?

This would remove the requirement for SYSV SHM but would introduce
portability issues. Anonymous mmap (or mmap to /dev/zero) is required
to prevent I/O overhead.

* Consider mmap()'ing files into a backend?

Doing I/O to large tables would consume a lot of address space or
require frequent mapping/unmapping. Extending the file also causes
mapping problems that might require mapping only individual pages,
leading to thousands of mappings. Another problem is that there is no
way to _prevent_ I/O to disk from the dirty shared buffers so changes
could hit disk before WAL is written.

* Add a script to ask system configuration questions and tune postgresql.conf
* Merge xmin/xmax/cmin/cmax back into three header fields

Before subtransactions, there used to be only three fields needed to
store these four values. This was possible because only the current
transaction looks at the cmin/cmax values. If the current transaction
created and expired the row, the fields stored were xmin (same as
xmax), cmin, and cmax, and if the transaction was expiring a row from
another transaction, the fields stored were xmin (cmin was not
needed), xmax, and cmax. Such a system worked because a transaction
could only see rows from another completed transaction. However,
subtransactions can see rows from outer transactions, and once the
subtransaction completes, the outer transaction continues, requiring
the storage of all four fields. With subtransactions, an outer
transaction can create a row, a subtransaction expire it, and when the
subtransaction completes, the outer transaction still has to have
proper visibility of the row's cmin, for example, for cursors.

One possible solution is to create a phantom cid which represents a
cmin/cmax pair and is stored in local memory. Another idea is to
store both cmin and cmax only in local memory.

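The phantom-cid idea reduces to interning (cmin, cmax) pairs in
backend-local memory; a toy Python sketch of that data structure (not
PostgreSQL code):

```python
class PhantomCids:
    """Toy model: map (cmin, cmax) pairs to single phantom command ids,
    keeping the pair itself only in local memory."""

    def __init__(self):
        self.pair_to_id = {}   # (cmin, cmax) -> phantom cid
        self.id_to_pair = []   # phantom cid -> (cmin, cmax)

    def get_id(self, cmin, cmax):
        """Return the phantom cid for this pair, allocating one if
        the pair has not been seen before."""
        key = (cmin, cmax)
        if key not in self.pair_to_id:
            self.pair_to_id[key] = len(self.id_to_pair)
            self.id_to_pair.append(key)
        return self.pair_to_id[key]

    def lookup(self, phantom_id):
        """Recover the original (cmin, cmax) pair for visibility checks."""
        return self.id_to_pair[phantom_id]

p = PhantomCids()
pid = p.get_id(cmin=3, cmax=7)
print(p.lookup(pid))          # (3, 7)
print(p.get_id(3, 7) == pid)  # True: the same pair maps to the same id
```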
* Research storing disk pages with no alignment/padding


Source Code
===========

* Add use of 'const' for variables in source tree
* Rename some /contrib modules from pg* to pg_*
* Move some things from /contrib into main tree
* Move some /contrib modules out to their own project sites
* %Remove warnings created by -Wcast-align
* Move platform-specific ps status display info from ps_status.c to ports
* Add optional CRC checksum to heap and index pages
* Improve documentation to build only interfaces (Marc)
* Remove or relicense modules that are not under the BSD license, if possible
* %Remove memory/file descriptor freeing before ereport(ERROR)
* Acquire lock on a relation before building a relcache entry for it
* %Promote debug_query_string into a server-side function current_query()
* %Allow the identifier length to be increased via a configure option
* Allow cross-compiling by generating the zic database on the target system
* Improve NLS maintenance of libpgport messages linked onto applications
* Allow ecpg to work with MSVC and BCC
* Add xpath_array() to /contrib/xml2 to return results as an array
* Allow building in directories containing spaces

This is probably not possible because 'gmake' and other compiler tools
do not fully support quoting of paths with spaces.

* -Allow installing to directories containing spaces

This is possible if proper quoting is added to the makefiles for the
install targets. Because PostgreSQL supports relocatable installs, it
is already possible to install into a directory that doesn't contain
spaces and then copy the install to a directory with spaces.

* Fix sgmltools so PDFs can be generated with bookmarks
* %Clean up compiler warnings (especially with gcc version 4)
* Use UTF8 encoding for NLS messages so all server encodings can
read them properly
* Update Bonjour to work with newer cross-platform SDK
* -Remove BeOS and QNX-specific code

* Win32

o Remove configure.in check for link failure when cause is found
o Remove readdir() errno patch when runtime/mingwex/dirent.c rev
1.4 is released
o Remove psql newline patch when we find out why mingw outputs an
extra newline
o Allow psql to use readline once non-US code pages work with
backslashes
o Re-enable timezone output on log_line_prefix '%t' when a
shorter timezone string is available
o Fix problem with shared memory on the Win32 Terminal Server
o Improve signal handling,
http://archives.postgresql.org/pgsql-patches/2005-06/msg00027.php
o Add long file support for binary pg_dump output

While Win32 supports 64-bit files, the MinGW API does not,
meaning we have to build an fseeko replacement on top of the
Win32 API, and we have to make sure MinGW handles it. Another
option is to wait for the MinGW project to fix it, or use the
code from the LibGW32C project as a guide.

* Wire Protocol Changes

o Allow dynamic character set handling
o Add decoded type, length, precision
o Use compression?
o Update clients to use data types, typmod, schema.table.column names
of result sets using new statement protocol


---------------------------------------------------------------------------


Developers who have claimed items are:
--------------------------------------
* Alvaro is Alvaro Herrera <alvherre@dcc.uchile.cl>
* Andrew is Andrew Dunstan <andrew@dunslane.net>
* Bruce is Bruce Momjian <pgman@candle.pha.pa.us> of Software Research Assoc.
* Christopher is Christopher Kings-Lynne <chriskl@familyhealth.com.au> of
Family Health Network
* D'Arcy is D'Arcy J.M. Cain <darcy@druid.net> of The Cain Gang Ltd.
* Fabien is Fabien Coelho <coelho@cri.ensmp.fr>
* Gavin is Gavin Sherry <swm@linuxworld.com.au> of Alcove Systems Engineering
* Greg is Greg Sabino Mullane <greg@turnstep.com>
* Jan is Jan Wieck <JanWieck@Yahoo.com> of Afilias, Inc.
* Joe is Joe Conway <mail@joeconway.com>
* Karel is Karel Zak <zakkr@zf.jcu.cz>
* Magnus is Magnus Hagander <mha@sollentuna.net>
* Marc is Marc Fournier <scrappy@hub.org> of PostgreSQL, Inc.
* Matthew is Matthew T. O'Connor <matthew@zeut.net>
* Michael is Michael Meskes <meskes@postgresql.org> of Credativ
* Neil is Neil Conway <neilc@samurai.com>
* Oleg is Oleg Bartunov <oleg@sai.msu.su>
* Peter is Peter Eisentraut <peter_e@gmx.net>
* Philip is Philip Warner <pjw@rhyme.com.au> of Albatross Consulting Pty. Ltd.
* Rod is Rod Taylor <pg@rbt.ca>
* Simon is Simon Riggs <simon@2ndquadrant.com>
* Stephan is Stephan Szabo <sszabo@megazone23.bigpanda.com>
* Tatsuo is Tatsuo Ishii <t-ishii@sra.co.jp> of Software Research Assoc.
* Teodor is Teodor Sigaev <teodor@sigaev.ru>
* Tom is Tom Lane <tgl@sss.pgh.pa.us> of Red Hat