< * Experiment with multi-threaded backend for better resource utilization
<
< This would allow a single query to make use of multiple CPUs or
< multiple I/O channels simultaneously. One idea is to create a
< background reader that can pre-fetch sequential and index scan
< pages needed by other backends. This could be expanded to allow
< concurrent reads from multiple devices in a partitioned table.
<
> * Experiment with multi-threaded backend for better resource utilization
>
> This would allow a single query to make use of multiple CPUs or
> multiple I/O channels simultaneously. One idea is to create a
> background reader that can pre-fetch sequential and index scan
> pages needed by other backends. This could be expanded to allow
> concurrent reads from multiple devices in a partitioned table.
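
A minimal sketch of the pre-fetch half of this idea, assuming a hypothetical
prefetch_upcoming_blocks() helper that a background reader could call ahead of
the backends doing the actual reads; it only issues an advisory hint and is
not the backend's actual implementation:

    /* Hypothetical helper, not PostgreSQL source: hint the kernel that a
     * scan will soon need the next few pages of a relation file, so the
     * read can overlap with query execution in another process. */
    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdint.h>

    #define BLCKSZ 8192                 /* PostgreSQL's default page size */

    static void
    prefetch_upcoming_blocks(int fd, uint32_t next_block, uint32_t nblocks)
    {
    #ifdef POSIX_FADV_WILLNEED
        /* Purely advisory; errors are ignored, and the normal read path
         * still works if the hint is unsupported. */
        (void) posix_fadvise(fd,
                             (off_t) next_block * BLCKSZ,
                             (off_t) nblocks * BLCKSZ,
                             POSIX_FADV_WILLNEED);
    #endif
    }
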
* Consider having the background writer update the transaction status
hint bits before writing out the page

Implementing this requires the background writer to have access to system
catalogs and the transaction status log.
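
As an illustration of what "updating hint bits" means here, the following
self-contained model (the field names, flag names, and lookup functions are
simplified stand-ins, not the server's actual data structures) shows the kind
of check the background writer would perform for each tuple on a page before
writing it out:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    /* Simplified stand-ins for the tuple-header hint flags. */
    #define XMIN_COMMITTED  0x0001      /* inserting transaction known committed */
    #define XMIN_INVALID    0x0002      /* inserting transaction known aborted */

    typedef struct TupleHeader
    {
        TransactionId xmin;             /* inserting transaction */
        uint16_t      infomask;         /* hint bits live here */
    } TupleHeader;

    /* Stand-ins for lookups into shared transaction state; needing these
     * answers is why the background writer would require access to the
     * transaction status log (and catalogs) in the first place. */
    extern bool transaction_is_in_progress(TransactionId xid);
    extern bool transaction_did_commit(TransactionId xid);

    static void
    set_xmin_hint(TupleHeader *tup)
    {
        if (tup->infomask & (XMIN_COMMITTED | XMIN_INVALID))
            return;                     /* already hinted */
        if (transaction_is_in_progress(tup->xmin))
            return;                     /* outcome not known yet */
        if (transaction_did_commit(tup->xmin))
            tup->infomask |= XMIN_COMMITTED;
        else
            tup->infomask |= XMIN_INVALID;
    }

Resolving the status once before the write means later readers of the page can
often skip the transaction-status lookup entirely.
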
<
< * Allow free-behind capability for large sequential scans to avoid
< kernel cache spoiling
<
< Posix_fadvise() can control both sequential/random file caching and
< free-behind behavior, but it is unclear how the setting affects other
< backends that also have the file open, and the feature is not supported
< on all operating systems.
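
For reference, the two advisory calls involved would look roughly like the
sketch below; the helper names are made up for illustration, and the caveats
above (effects on other backends with the same file open, uneven platform
support) are exactly why this stayed experimental:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    #define BLCKSZ 8192                 /* PostgreSQL's default page size */

    /* Before a large sequential scan: declare the access pattern so the
     * kernel can read ahead aggressively and recycle pages sooner. */
    static void
    advise_sequential(int fd)
    {
    #ifdef POSIX_FADV_SEQUENTIAL
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    #endif
    }

    /* After the scan has moved well past a block range: ask the kernel to
     * drop those pages from its cache ("free-behind") so they do not push
     * out hotter data. */
    static void
    discard_scanned_range(int fd, off_t first_block, off_t nblocks)
    {
    #ifdef POSIX_FADV_DONTNEED
        (void) posix_fadvise(fd, first_block * BLCKSZ, nblocks * BLCKSZ,
                             POSIX_FADV_DONTNEED);
    #endif
    }
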
useful and confuses people who think it is the same as -U. (Eventually
we might want to re-introduce it as an alias for -U, but that should
not happen until the switch has actually been gone for a few releases.)
Likewise in pg_dump and pg_restore. Per gripe from Robert Treat and
subsequent discussion.
with the logged event. CSV logs are now first-class citizens alongside
plain-text logs in that they carry much of the same information.
Per complaint from depesz on bug #3799.
hazards. Instead teach these programs to prompt for a password when
necessary, just like all our other programs.
I did not bother to invent -W switches for them, since the return on
investment seems so low.
PQconnectionNeedsPassword function that reports whether a password prompt
is needed, and improve PQconnectionUsedPassword so that it checks whether
the password used by the connection was actually supplied as a connection
argument, instead of coming from the environment or a password file.
Per bug report from Mark Cave-Ayland and subsequent discussion.
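
A minimal libpq sketch of the prompt-on-demand pattern described here, using
PQconnectionNeedsPassword(); the function name comes from the message above,
while the surrounding helper, the plain-text prompt, and the lack of quoting
on the retried password are simplifications for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Connect; only if the server demanded a password that was not supplied
     * by any other means (connection string, environment, password file),
     * prompt the user once and retry. */
    static PGconn *
    connect_with_optional_prompt(const char *conninfo)
    {
        PGconn *conn = PQconnectdb(conninfo);

        if (PQstatus(conn) == CONNECTION_BAD && PQconnectionNeedsPassword(conn))
        {
            char password[128];
            char retry[1024];

            printf("Password: ");       /* real code would disable echo */
            fflush(stdout);
            if (fgets(password, sizeof(password), stdin) == NULL)
                return conn;
            password[strcspn(password, "\n")] = '\0';

            PQfinish(conn);
            /* Simplification: assumes the password needs no quoting. */
            snprintf(retry, sizeof(retry), "%s password=%s", conninfo, password);
            conn = PQconnectdb(retry);
        }
        return conn;
    }
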
< o -Allow commenting of variables in postgresql.conf to restore them
< to defaults
< o -Add a GUC variable to control the tablespace for temporary objects
< and sort files
< Monitoring
< ==========
<
< * -Allow server log information to be output as CSV format
< * -Add ability to monitor the use of temporary sort files
< * -Allow user-defined types to accept 'typmod' parameters
<
< http://archives.postgresql.org/pgsql-hackers/2005-08/msg01142.php
< http://archives.postgresql.org/pgsql-hackers/2005-09/msg00012.php
< http://archives.postgresql.org/pgsql-hackers/2006-08/msg00149.php
<
< * -Add Globally/Universally Unique Identifier (GUID/UUID)
<
< http://archives.postgresql.org/pgsql-patches/2006-09/msg00209.php
< http://archives.postgresql.org/pgsql-general/2007-01/msg00853.php
<
< * -Support a data type with specific enumerated values (ENUM)
< o -Add support for arrays of complex types
< o -Make 64-bit version of the MONEY data type
< * -Add ISO day of week format 'ID' to to_char() where Monday = 1
< * -Add a field 'isoyear' to extract(), based on the ISO week
< * -Add RESET SESSION command to reset all session state
< o -Make CLUSTER preserve recently-dead tuples per MVCC requirements
< o -Add more logical syntax CLUSTER table USING index;
< support current syntax for backward compatibility
< o -Allow UPDATE/DELETE WHERE CURRENT OF cursor
< o -Add support for MOVE cursors
< o -Allow PL/PythonU to return boolean rather than 1/0
< o -Allow psql \pset boolean variables to be set to fixed values, rather
< than toggled
< o -Add -f to pg_dumpall
< Dependency Checking
< ===================
<
< * -Flush cached query plans when the dependent objects change or
< when new ANALYZE statistics are available
< * -Track dependencies in function bodies and recompile/invalidate
< * -Invalidate prepared queries, like INSERT, when the table definition
< is altered
<
< * -Allow use of indexes to search for NULLs
< * -Allow the creation of indexes with mixed ascending/descending
< specifiers
< * -Reduce checkpoint performance degradation by forcing data to disk
< more evenly
< * -Allow sequential scans to take advantage of other concurrent
< sequential scans, also called "Synchronised Scanning"
< * -Consider shrinking expired tuples to just their headers
< * -Allow heap reuse of UPDATEd rows if no indexed columns are changed,
< and old and new versions are on the same heap page
< * -Reduce XID consumption of read-only queries
< o -Turn on by default
< o -Allow multiple vacuums so large tables do not starve small
< tables
< * -Allow the pg_xlog directory location to be specified during initdb
< with a symlink back to the /data location
< * -Allow buffered WAL writes and fsync
< * -Allow ORDER BY ... LIMIT # to select high/low value without sort or
< index using a sequential scan for highest/lowest values
< * -Merge xmin/xmax/cmin/cmax back into three header fields
< o -Support a smaller header for short variable-length fields
< * -Move NAMEDATALEN from postgres_ext.h to pg_config_manual.h
< * -Fix problem with excessive logging during SSL disconnection
<
< http://archives.postgresql.org/pgsql-bugs/2006-12/msg00122.php
< http://archives.postgresql.org/pgsql-bugs/2007-05/msg00065.php
<
< o -Add long file support for binary pg_dump output
to ensure that the resulting webpages have predictable URLs, instead of
ever-changing numeric IDs. The new contrib docs were the biggest
offender, but some old stuff had the problem too. Also, rename a couple
of new contrib sgml files for consistency's sake.
useful consequence of the former liberal implicit casting to text;
namely that you can feed non-string values to quote_literal() and get
unsurprising results. Per discussion.
to a UNION, CASE, or related construct are of the same domain type. The
main part of this routine smashes domains to their base types, which seems
necessary because the logic involves TypeCategory() and IsPreferredType(),
neither of which works usefully on domains. However, we can add a first
pass that just detects whether all the inputs are exactly the same type,
and if so accept that without question (so long as it's not UNKNOWN).
Per recent gripe from Dean Rasheed.
In passing, remove some tests for InvalidOid, which have clearly been dead
code for quite some time now, because getBaseType() would fail on that input.
Also, clarify the manual's not-very-precise description of the existing
algorithm's behavior.
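
The "first pass" added here amounts to a check like the following sketch
(the types and the helper name are illustrative stand-ins, not the actual
backend code): if every input already has the same known type, that type wins
outright, so domains survive, and only otherwise does the category and
preferred-type logic, with its smashing of domains to base types, get involved.

    typedef unsigned int Oid;

    #define InvalidOid  ((Oid) 0)
    #define UNKNOWNOID  ((Oid) 705)    /* stand-in for the "unknown" type */

    /* Return the shared type if all inputs have exactly the same known,
     * non-unknown type; otherwise InvalidOid, meaning the regular
     * resolution algorithm must run. */
    static Oid
    common_exact_type(const Oid *input_types, int ninputs)
    {
        if (ninputs <= 0)
            return InvalidOid;
        Oid first = input_types[0];
        if (first == InvalidOid || first == UNKNOWNOID)
            return InvalidOid;
        for (int i = 1; i < ninputs; i++)
            if (input_types[i] != first)
                return InvalidOid;
        return first;
    }
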
< * Prevent long-lived temporary tables from causing frozen-Xid advancement
> * Prevent long-lived temporary tables from causing frozen-xid advancement
>
> The problem is that autovacuum cannot vacuum them to set frozen xids;
> only the session that created them can do that.
>
Allow tag and entity names that follow XML rules. Provide for hexadecimal
as well as decimal numeric entities. Adjust code names to coincide with
new descriptions.
< o Prevent COMMENT ON dbname from issuing a warning when loading
< into a database with a different name, perhaps using COMMENT ON
< CURRENT DATABASE
> o Change pg_dump so that a comment on the dumped database is
> applied to the loaded database, even if the database has a
> different name. This will require new backend syntax, perhaps
> COMMENT ON CURRENT DATABASE.
< o Allow COMMENT ON dbname to work when loading into a database
< with a different name, perhaps using COMMENT ON CURRENT
< DATABASE
> o Prevent COMMENT ON dbname from issuing a warning when loading
> into a database with a different name, perhaps using COMMENT ON
> CURRENT DATABASE
of this seems a bit marginal; if it's useful enough to be shown in the manual
then we probably ought to support doing it without double evaluation of the
ts_rank function. Per my proposal earlier today.
gives the old behavior; selecting false allows the dictionary to be used
as a filter ahead of other dictionaries, because it will pass on rather
than accept words that aren't in its stopword list.
Jan Urbanski
remove transactions
use create or replace function
make formatting consistent
set search path on first line
Add documentation on modifying *.sql to set the search path, and
mention that major upgrades should still run the installation scripts.
Some of these issues were spotted by Tom today.
Throw an error for actual stop words, rather than a warning. This fixes
problems with cache reloading causing warning messages.
Re-enable stop words in regression tests; was disabled by Tom.
Document "?" as API change.
to validate the realm of the connecting user. By default
it's empty, meaning no verification, which is the way
Kerberos authentication has traditionally worked in
PostgreSQL.
per recommendation from Alvaro. This doesn't force initdb since the
numeric token type in the catalogs doesn't change; but note that
the expected regression test output changed.
the sequence. Also, make setval() with is_called = false not affect the
currval state, either. Per report from Kris Jurka that an implicit
ALTER SEQUENCE OWNED BY unexpectedly caused currval() to become valid.
Since this isn't 100% backwards compatible, it will go into HEAD only;
I'll put a more limited patch into 8.2.
in corner cases such as re-fetching a just-deleted row. We may be able to
relax this someday, but let's find out how many people really care before
we invest a lot of work in it. Per report from Heikki and subsequent
discussion.
While in the neighborhood, make the combination of INSENSITIVE and FOR UPDATE
throw an error, since they are semantically incompatible. (Up to now we've
accepted but just ignored the INSENSITIVE option of DECLARE CURSOR.)
if there are zero rows to aggregate over, and the API seems both conceptually
and notationally ugly anyway. We should look for something that improves
on the tsquery-and-text-SELECT version (which is also pretty ugly but at
least it works...), but it seems that will take query infrastructure that
doesn't exist today. (Hm, I wonder if there's anything in or near SQL2003
window functions that would help?) Per discussion.
categories, as per discussion. asciiword (formerly lword) is still
ASCII-letters-only, and numword (formerly word) is still the most general
mixed-alpha-and-digits case. But word (formerly nlword) is now
any-group-of-letters-with-at-least-one-non-ASCII, rather than all-non-ASCII as
before. This is no worse than before for parsing mixed Russian/English text,
which seems to have been the design center for the original coding; and it
should simplify matters for parsing most European languages. In particular
it will not be necessary for any language to accept strings containing digits
as being regular "words". The hyphenated-word categories are adjusted
similarly.
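
Stated as code, the revised classification described above comes down to
something like this sketch (it treats any byte with the high bit set as part
of a non-ASCII letter, which glosses over real multibyte handling, and assumes
the token already contains only letters and digits; it is not the parser's
actual code):

    #include <ctype.h>
    #include <stdbool.h>

    typedef enum { TOK_ASCIIWORD, TOK_WORD, TOK_NUMWORD } WordCategory;

    static WordCategory
    classify_word(const unsigned char *token, int len)
    {
        bool has_digit = false;
        bool has_non_ascii = false;

        for (int i = 0; i < len; i++)
        {
            if (isdigit(token[i]))
                has_digit = true;
            else if (token[i] >= 0x80)
                has_non_ascii = true;
        }

        if (has_digit)
            return TOK_NUMWORD;        /* most general: letters mixed with digits */
        if (has_non_ascii)
            return TOK_WORD;           /* at least one non-ASCII letter */
        return TOK_ASCIIWORD;          /* ASCII letters only */
    }
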
active dictionary and its output lexemes as separate columns, instead
of smashing them into one text column, and lowercase the column names.
Also, define the output rowtype using OUT parameters instead of a
composite type, to be consistent with the other built-in functions.
Notably, standardize on using "token" for the strings output by a parser,
while "lexeme" is reserved for the normalized strings produced by a
dictionary.