(table or index) before trying to open its relcache entry. This fixes
race conditions in which someone else commits a change to the relation's
catalog entries while we are in the process of doing a relcache load. Problems
of that ilk have been reported sporadically for years, but it was not
really practical to fix until recently --- for instance, the recent
addition of WAL-log support for in-place updates helped.
Along the way, remove pg_am.amconcurrent: all AMs are now expected to support
concurrent update.
- predefined variable "tps"
The value of the variable tps is taken from the scaling factor
specified by the -s option.
- -D option
Variable values can be defined with the -D option.
- \set command now allows arithmetic calculations.
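As a rough illustration of how these pgbench pieces fit together, here is a
minimal custom-script sketch, assuming it is run with something like
"pgbench -s 10 -D maxrows=50 -f script.sql mydb" against the standard
accounts table (the variable name maxrows, the query, and the file name are
illustrative only):

    \set rows 100 * :tps
    SELECT abalance FROM accounts WHERE aid <= :rows LIMIT :maxrows;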
Update the calling convention for all external-facing functions. By
external-facing, I mean all functions that are directly referenced in
cube.sql. Prior to my update, all functions used the older V0 calling
convention. They now use V1.
New Functions:
cube(float[]), which makes a zero volume cube from a float array
cube(float[], float[]), which allows the user to create a cube from
two float arrays: one for the upper-right and one for the lower-left
coordinate.
cube_subset(cube, int4[]), to allow you to reorder or choose a subset of
dimensions from a cube, using index values specified in the array.
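A quick, illustrative sketch of calling the new functions (the results in the
comments follow contrib/cube's text output format):

    SELECT cube(ARRAY[1,2,3]);                          -- zero-volume cube: (1, 2, 3)
    SELECT cube(ARRAY[1,2,3], ARRAY[4,5,6]);            -- (1, 2, 3),(4, 5, 6)
    SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);
                                                        -- (5, 3, 1, 1),(8, 7, 6, 6)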
Joshua Reich
A few cleanups and a couple of new things:
- add SHA2 algorithm to older OpenSSL
- add BIGNUM math to have public-key cryptography work on non-OpenSSL
builds.
- gen_random_bytes() function
The status of the SHA2 algorithms and public-key encryption can now be
changed to 'always available.'
That makes pgcrypto functionally complete, and unless there are new
editions of the AES, SHA2 or OpenPGP standards, no major changes
are planned.
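For reference, a minimal sketch of the affected SQL-level entry points (the
input strings are illustrative; encode() is used only to show the digest as
hex):

    SELECT gen_random_bytes(16);                            -- 16 random bytes as bytea
    SELECT encode(digest('some message', 'sha256'), 'hex'); -- SHA2 digest, with or without OpenSSL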
- Replace the sorted array of entries in maintenance_work_mem with a binary
  tree; this should improve index build performance.
- Calculate allocated memory more precisely; eliminate leaks
  with a user-defined extractValue()
- Improve wording in tsearch2
This is an extension of pgstattuple to query information from indexes.
It supports btree, hash and gist; gin is not supported. It scans only
index pages and does not read the corresponding heap tuples. Therefore,
'dead_tuple' means the number of tuples with the LP_DELETE flag set.
Also, I added an experimental feature for btree indexes: it checks the
fragmentation factor of an index. If a leaf page's right link points to the
next adjacent page in the file, the page is assumed to be continuous (not
fragmented). This will help us decide when to REINDEX.
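A minimal usage sketch of the new function (the index name is just an
example):

    SELECT * FROM pgstatindex('pg_cast_oid_index');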
ITAGAKI Takahiro
> Upstream confirmed my reply in the last mail in [1]: the complete
> escaping logic in DBMirror.pl is seriously screwed.
>
> [1] http://archives.postgresql.org/pgsql-bugs/2006-06/msg00065.php
I finally found some time to debug this, and I think I found a better
patch than the one you proposed. Mine is still hackish and is still a
workaround rather than a proper quoting solution, but at least it repairs
the parsing without introducing the \' quoting again.
I consider this a band-aid patch to fix the recent security update.
PostgreSQL gurus, would you consider applying this until a better
solution is found for DBMirror.pl?
Olivier, can you please confirm that the patch works for you, too?
Backpatched to 8.0.X.
Martin Pitt
* new split algorithm (as proposed in http://archives.postgresql.org/pgsql-hackers/2006-06/msg00254.php)
* possibly call pickSplit() for the second and following columns
* add spl_(l|r)datum_exists to GIST_SPLITVEC -
  pickSplit should check these values and use the already-defined
  spl_(l|r)datum for splitting. pickSplit should set
  spl_(l|r)datum_exists to 'false' (if they were 'true') to
  signal to the caller that it used spl_(l|r)datum.
* support for the old pickSplit() interface: not a very optimal,
  but correct, split
* remove the 'bytes' field from GISTENTRY: in any case the size of
  a value is determined by its type.
* split GIST_SPLITVEC into two structures: one for use in picksplit
  and a second for internal use.
* some code refactoring
* add subsplit support to rtree opclasses
TODO: add subsplit support to contrib modules
tuples with less header overhead than a regular HeapTuple, per my
recent proposal. Teach TupleTableSlot code how to deal with these.
As proof of concept, change tuplestore.c to store MinimalTuples instead
of HeapTuples. Future patches will expand the concept to other places
where it is useful.
initially be 0. This is needed as a previous ABORT might have wiped out
an automatically opened transaction without maintaining the cursor count.
- Fix regression test expected file for the correct ERROR message, which
we now get given the above bug fix.
used by OpenOffice. Dictionaries are placed at
http://lingucomponent.openoffice.org/spell_dic.html
The dictionary code automatically recognizes the format of the files.
Warning: MySpell's format has a limitation in compound-word
support: it is impossible to mark an affix as a
compound-only affix. So for Norwegian, German, etc. it is
recommended to use the original ispell format.
For that reason I don't want to remove the my2ispell
scripts; they have a workaround at least for Norwegian.
This required some changes in the lexize algorithm, but the interface
with dictionaries stays compatible with old dictionaries.
Funded by Georgia Public Library Service and LibLime, Inc.
versions of OpenSSL. If your OpenSSL does not contain SHA2, then there
should be no conflict. But of course, if someone upgrades OpenSSL,
the server starts crashing.
Backpatched to 8.1.X.
Marko Kreen
any use in the past many years, we'd have made some effort to include
them in all executor node types; but in fact they were only in
nodeAppend.c and nodeIndexscan.c, up until I copied nodeIndexscan.c's
occurrence into the new bitmap node types. Remove some other unused
macros in execdebug.h, too. Some day the whole header probably ought to
go away in favor of better-designed facilities.
pg_freespacemap_relations --- while one could theoretically get that
number by counting rows in pg_freespacemap_pages, it's surely the hard
way to do it. Avoid expensive and inconvenient conversion to and from
text format. Minor code and docs cleanup.
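For reference, the module's data is read through its two views, e.g. (LIMIT
only to keep the output short):

    SELECT * FROM pg_freespacemap_relations LIMIT 10;
    SELECT * FROM pg_freespacemap_pages LIMIT 10;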
tracks index pages, not free space on pages):
1/ Index free bytes set to NULL
2/ Comment added to the README briefly mentioning the index business
3/ Columns reordered more logically
4/ 'Blockid' column removed
5/ Free bytes column renamed to just 'bytes' instead of 'blockfreebytes'
Mark Kirkwood
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
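With this in place, a parse-analysis error such as an unknown column now
comes back with a position that gets rendered roughly like this (query and
column name are made up):

    SELECT nosuchcolumn FROM pg_class;
    ERROR:  column "nosuchcolumn" does not exist
    LINE 1: SELECT nosuchcolumn FROM pg_class;
                   ^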
Most of the changes add the mandatory USING clause to DROP OPERATOR
CLASS statements. DROP TYPE is now DROP TYPE CASCADE; without
CASCADE a DROP TYPE fails due to the circular dependency on the
type's I/O functions. The DROP FUNCTION statements for the I/O
functions have been removed, as DROP TYPE CASCADE removes them
automatically. Patch from Michael Fuhr.
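The two statement forms involved look like this (the operator class and type
names are the ones from contrib/cube, used purely as an illustration):

    DROP OPERATOR CLASS gist_cube_ops USING gist;
    DROP TYPE cube CASCADE;   -- also drops the type's I/O functions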
similar constants if they were not previously defined. All these
constants must be defined by limits.h according to C89, so we can
safely assume they are present.
(respectively) to rename yylex and related symbols. Some were doing
it this way already, while others used not-too-reliable sed hacks in
the Makefiles. It's all nice and consistent now.
1) rank_cd now uses weights of lexemes
2) rank_cd and rank can use any combination of normalization methods:
   no normalization
   normalization by log(length of document)
   normalization by length of document
   normalization by number of unique words in document
   normalization by log(number of unique words in document)
   normalization by number of covers (only rank_cd)
Improve cover search.
TODO: update documentation
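A hedged sketch of what a call can look like with both changes (the weight
array is the usual {D,C,B,A} weights, the last argument selects normalization
methods; the flag values and the text used here are illustrative only):

    SELECT rank_cd('{0.1, 0.2, 0.4, 1.0}',
                   to_tsvector('the quick brown fox jumps over the lazy dog'),
                   to_tsquery('fox & dog'),
                   1 | 2);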
are unnecessarily allocated on the heap rather than the stack. If the
StringInfo doesn't outlive the stack frame in which it is created,
there is no need to allocate it on the heap via makeStringInfo() --
stack allocation is faster. While it's not a big deal unless the
code is in a critical path, I don't see a reason not to save a few
cycles -- using stack allocation is not less readable.
I also cleaned up a bit of code along the way: moved variable
declarations into a more tightly-enclosing scope where possible,
fixed some pointless copying of strings in dblink, etc.
more compliant with the error message style guide. In particular,
errdetail should begin with a capital letter and end with a period,
whereas errmsg should not. I also fixed a few related issues in
passing, such as fixing the repeated misspelling of "lexeme" in
contrib/tsearch2 (per Tom's suggestion).
pgcrypto crypt()/md5 and hmac() leak memory when compiled against
OpenSSL as openssl.c digest ->reset will do two DigestInit calls
against a context. This happened to work with OpenSSL 0.9.6
but not with 0.9.7+.
The reason for the messy code was that I tried to avoid creating a
wrapper structure to transport algorithm info and tried to use the
OpenSSL context for it. The fix is to create a wrapper structure.
It also uses the newer digest API to avoid memory allocations
on reset with newer OpenSSLs.
Thanks to Daniel Blaisdell for reporting it.
Sorry, but the fix can't be applied to previous versions: it would require
refilling tsvectors...
2 Small optimization of load time for huge dictionaries
3 Use palloc instead of malloc when loading dictionary files
single-byte encodings, so we should have snowball stemmers for every encoding.
I hope this finalizes the multibyte support work in tsearch2, but testing is needed...
sizebitvec of tsearch2, as well as identical code in several other
contrib modules. This provided about a 20X speedup in building a
large tsearch2 index ... didn't try to measure its effects for other
operations. Thanks to Stephan Vollmer for providing a test case.
the data defining the semantics of a lock method (ie, conflict resolution
table and ancillary data, which is all constant) and the hash tables
storing the current state. The only thing we give up by this is the
ability to use separate hashtables for different lock methods, but there
is no need for that anyway. Put some extra fields into the LockMethod
definition structs to clean up some other uglinesses, like hard-wired
tests for DEFAULT_LOCKMETHOD and USER_LOCKMETHOD. This commit doesn't
do anything about the performance issues we were discussing, but it clears
away some of the underbrush that's in the way of fixing that.
support for the dbf2pg contrib module.
The submitter created a patch which replaces the silent ignoring of -F
(when iconv support is disabled) with a meaningful warning.
Martin Pitt
comment lines were output as too long, and update typedefs for the /lib
directory. Also fix a case where identifiers were used as variable names
in the backend, but as typedefs in ecpg (favor the backend for
indenting).
Backpatch to 8.1.X.
- supports multibyte encodings
- more strict rules for lexemes
- flex isn't used
Add:
- tsquery plainto_tsquery(text)
  The function makes a tsquery from plain text.
- &&, ||, !! operations on tsquery for combining
  tsqueries from parts: 'foo & bar' || 'asd' => 'foo & bar | asd'
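A rough sketch of the additions in use (results depend on the active tsearch2
configuration; the strings are illustrative):

    SELECT plainto_tsquery('the fat cats sat on a mat');
    SELECT to_tsquery('foo & bar') || to_tsquery('asd');   -- 'foo & bar | asd', as above
    SELECT !! to_tsquery('foo');                            -- negated query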
functionality, but I still need to make another pass looking at places
that incidentally use arrays (such as ACL manipulation) to make sure they
are null-safe. Contrib needs work too.
I have not changed the behaviors that are still under discussion about
array comparison and what to do with lower bounds.
1 Comparison operations for tsquery
2 Btree index on tsquery
3 numnode(tsquery) - returns the 'length' of a tsquery
4 tsquery @ tsquery, tsquery ~ tsquery - contains, contained-in for tsquery.
  Note: they don't guarantee an exact result, only MAY BE, so they are
  useful only for speeding up the rewrite functions
5 GiST index support for @, ~
6 rewrite():
select rewrite(orig, what, to);
select rewrite(ARRAY[orig, what, to]) from tsquery_table;
select rewrite(orig, 'select what, to from tsquery_table;');
7 Significantly improve the cover algorithm
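A hedged sketch of a few of these in action (the queries are illustrative;
per the note in item 4, @ gives only a may-be answer):

    SELECT numnode(to_tsquery('foo & bar'));               -- 3: two lexemes plus one operator
    SELECT to_tsquery('foo & bar') @ to_tsquery('foo');    -- does the left query contain the right?
    SELECT rewrite(to_tsquery('foo & bar'),
                   to_tsquery('bar'),
                   to_tsquery('baz'));                      -- replace bar with baz in the query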