diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 326e457030..9ed65a30b2 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -1,4 +1,4 @@ - + Data Types @@ -237,13 +237,13 @@ tsquery - full text search query + text search query tsvector - full text search document + text search document @@ -3232,6 +3232,211 @@ SELECT * FROM test; + + Text Search Types + + + full text search + data types + + + + text search + data types + + + + PostgreSQL provides two data types that + are designed to support full text search, which is the activity of + searching through a collection of natural-language documents + to locate those that best match a query. + The tsvector type represents a document in a form suited + for text search, while the tsquery type similarly represents + a query. + provides a detailed explanation of this + facility, and summarizes the + related functions and operators. + + + + <type>tsvector</type> + + + tsvector (data type) + + + + A tsvector value is a sorted list of distinct + lexemes, which are words that have been + normalized to make different variants of the same word look + alike (see for details). Sorting and + duplicate-elimination are done automatically during input, as shown in + this example: + + +SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector; + tsvector +---------------------------------------------------- + 'a' 'on' 'and' 'ate' 'cat' 'fat' 'mat' 'rat' 'sat' + + + (As the example shows, the sorting is first by length and then + alphabetically, but that detail is seldom important.) To represent + lexemes containing whitespace, surround them with quotes: + + +SELECT $$the lexeme ' ' contains spaces$$::tsvector; + tsvector +------------------------------------------- + 'the' ' ' 'lexeme' 'spaces' 'contains' + + + (We use dollar-quoted string literals in this example and the next one, + to avoid confusing matters by having to double quote marks within the + literals.) Embedded quotes can be handled by doubling them: + + +SELECT $$the lexeme 'Joe''s' contains a quote$$::tsvector; + tsvector +------------------------------------------------ + 'a' 'the' 'Joe''s' 'quote' 'lexeme' 'contains' + + + Optionally, integer position(s) + can be attached to any or all of the lexemes: + + +SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::tsvector; + tsvector +------------------------------------------------------------------------------- + 'a':1,6,10 'on':5 'and':8 'ate':9 'cat':3 'fat':2,11 'mat':7 'rat':12 'sat':4 + + + A position normally indicates the source word's location in the + document. Positional information can be used for + proximity ranking. Position values can + range from 1 to 16383; larger numbers are silently clamped to 16383. + Duplicate position entries are discarded. + + + + Lexemes that have positions can further be labeled with a + weight, which can be A, + B, C, or D. + D is the default and hence is not shown on output: + + +SELECT 'a:1A fat:2B,4C cat:5D'::tsvector; + tsvector +---------------------------- + 'a':1A 'cat':5 'fat':2B,4C + + + Weights are typically used to reflect document structure, for example + by marking title words differently from body words. Text search + ranking functions can assign different priorities to the different + weight markers. + + + + It is important to understand that the + tsvector type itself does not perform any normalization; + it assumes that the words it is given are normalized appropriately + for the application. 
For example, + + +select 'The Fat Rats'::tsvector; + tsvector +-------------------- + 'Fat' 'The' 'Rats' + + + For most English-text-searching applications the above words would + be considered non-normalized, but tsvector doesn't care. + Raw document text should usually be passed through + to_tsvector to normalize the words appropriately + for searching: + + +SELECT to_tsvector('english', 'The Fat Rats'); + to_tsvector +----------------- + 'fat':2 'rat':3 + + + Again, see for more detail. + + + + + + <type>tsquery</type> + + + tsquery (data type) + + + + A tsquery value stores lexemes that are to be + searched for, and combines them using the boolean operators + & (AND), | (OR), and + ! (NOT). Parentheses can be used to enforce grouping + of the operators: + + + SELECT 'fat & rat'::tsquery; + tsquery +--------------- + 'fat' & 'rat' + +SELECT 'fat & (rat | cat)'::tsquery; + tsquery +--------------------------- + 'fat' & ( 'rat' | 'cat' ) + +SELECT 'fat & rat & ! cat'::tsquery; + tsquery +------------------------ + 'fat' & 'rat' & !'cat' + + + In the absence of parentheses, ! (NOT) binds most tightly, + and & (AND) binds more tightly than + | (OR). + + + + Optionally, lexemes in a tsquery can be labeled with + one or more weight letters, which restricts them to match only + tsvector lexemes with one of those weights: + + +SELECT 'fat:ab & cat'::tsquery; + tsquery +------------------ + 'fat':AB & 'cat' + + + + + Quoting rules for lexemes are the same as described above for + lexemes in tsvector; and, as with tsvector, + any required normalization of words must be done before putting + them into the tsquery type. The to_tsquery + function is convenient for performing such normalization: + + +SELECT to_tsquery('Fat:ab & Cats'); + to_tsquery +------------------ + 'fat':AB & 'cat' + + + + + + + <acronym>UUID</acronym> Type @@ -3240,18 +3445,16 @@ SELECT * FROM test; - The data type uuid stores Universally Unique - Identifiers (UUID) as per RFC 4122, ISO/IEC 9834-8:2005, and - related standards. (Some systems refer to this data type as - globally unique - identifier/GUIDGUID - instead.) Such an identifier is a 128-bit quantity that is - generated by a suitable algorithm so that it is very unlikely to - be generated by anyone else in the known universe using the same - algorithm. Therefore, for distributed systems, these identifiers - provide a better uniqueness guarantee than that which can be - achieved using sequence generators, which are only unique within a - single database. + The data type uuid stores Universally Unique Identifiers + (UUID) as defined by RFC 4122, ISO/IEC 9834-8:2005, and related standards. + (Some systems refer to this data type as globally unique identifier, or + GUID,GUID instead.) Such an + identifier is a 128-bit quantity that is generated by an algorithm chosen + to make it very unlikely that the same identifier will be generated by + anyone else in the known universe using the same algorithm. Therefore, + for distributed systems, these identifiers provide a better uniqueness + guarantee than that which can be achieved using sequence generators, which + are only unique within a single database. @@ -3263,7 +3466,8 @@ SELECT * FROM test; a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11 - PostgreSQL also accepts the following alternative forms for input: + PostgreSQL also accepts the following + alternative forms for input: use of upper-case digits, the standard format surrounded by braces, and omitting the hyphens. 
Examples are: @@ -3275,140 +3479,17 @@ a0eebc999c0b4ef8bb6d6bb9bd380a11 - To generate UUIDs, the contrib module uuid-ossp - provides functions that implement the standard algorithms. + PostgreSQL provides storage and comparison + functions for UUIDs, but the core database does not include any + function for generating UUIDs, because no single algorithm is well + suited for every application. The contrib module + contrib/uuid-ossp provides functions that implement + several standard algorithms. Alternatively, UUIDs could be generated by client applications or other libraries invoked through a server-side function. - - Full Text Search - - - - - tsvector - - - - tsvector - tsvector is a data type - that represents a document and is optimized for full text searching. - In the simplest case, tsvector is a sorted list of - lexemes, so even without indexes full text searches perform better - than standard ~ and LIKE - operations: - - -SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector; - tsvector ----------------------------------------------------- - 'a' 'on' 'and' 'ate' 'cat' 'fat' 'mat' 'rat' 'sat' - - - Notice, that space is also a lexeme: - - -SELECT 'space '' '' is a lexeme'::tsvector; - tsvector ----------------------------------- - 'a' 'is' ' ' 'space' 'lexeme' - - - Each lexeme, optionally, can have positional information which is used for - proximity ranking: - - -SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::tsvector; - tsvector -------------------------------------------------------------------------------- - 'a':1,6,10 'on':5 'and':8 'ate':9 'cat':3 'fat':2,11 'mat':7 'rat':12 'sat':4 - - - Each lexeme position also can be labeled as A, - B, C, D, - where D is the default. These labels can be used to group - lexemes into different importance or - rankings, for example to reflect document structure. - Actual values can be assigned at search time and used during the calculation - of the document rank. This is very useful for controlling search results. - - - - The concatenation operator, e.g. tsvector || tsvector, - can "construct" a document from several parts. The order is important if - tsvector contains positional information. Of course, - it is also possible to build a document using different tables: - - -SELECT 'fat:1 cat:2'::tsvector || 'fat:1 rat:2'::tsvector; - ?column? ---------------------------- - 'cat':2 'fat':1,3 'rat':4 - -SELECT 'fat:1 rat:2'::tsvector || 'fat:1 cat:2'::tsvector; - ?column? ---------------------------- - 'cat':4 'fat':1,3 'rat':2 - - - - - - - - - - tsquery - - - - tsquery - tsquery is a data type - for textual queries which supports the boolean operators - & (AND), | (OR), and - parentheses. A tsquery consists of lexemes (optionally - labeled by letters) with boolean operators in between: - - -SELECT 'fat & cat'::tsquery; - tsquery ---------------- - 'fat' & 'cat' -SELECT 'fat:ab & cat'::tsquery; - tsquery ------------------- - 'fat':AB & 'cat' - - - Labels can be used to restrict the search region, which allows the - development of different search engines using the same full text index. - - - - tsqueries can be concatenated using && (AND) - and || (OR) operators: - - -SELECT 'a & b'::tsquery && 'c | d'::tsquery; - ?column? ---------------------------- - 'a' & 'b' & ( 'c' | 'd' ) - -SELECT 'a & b'::tsquery || 'c|d'::tsquery; - ?column? 
---------------------------- - 'a' & 'b' | ( 'c' | 'd' ) - - - - - - - - - <acronym>XML</> Type diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 8d4f4179ac..afdda69720 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -1,4 +1,4 @@ - + Functions and Operators @@ -7595,915 +7595,319 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Full Text Search Functions and Operators + Text Search Functions and Operators + + + full text search + functions and operators + + + + text search + functions and operators + - This section outlines all the functions and operators that are available - for full text searching. + , + and + + summarize the functions and operators that are provided + for full text searching. See for a detailed + explanation of PostgreSQL's text search + facility. - - Full text search vectors and queries both use lexemes, but for different - purposes. A tsvector represents the lexemes (tokens) parsed - out of a document, with an optional position. A tsquery - specifies a boolean condition using lexemes. - + + Text Search Operators + + + + Operator + Description + Example + Result + + + + + @@ + tsvector matches tsquery ? + to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') + t + + + @@@ + same as @@, but see + to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat') + t + + + || + concatenate tsvectors + 'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector + 'a':1 'b':2,5 'c':3 'd':4 + + + && + AND tsquerys together + 'fat | rat'::tsquery && 'cat'::tsquery + ( 'fat' | 'rat' ) & 'cat' + + + || + OR tsquerys together + 'fat | rat'::tsquery || 'cat'::tsquery + ( 'fat' | 'rat' ) | 'cat' + + + !! + negate a tsquery + !! 'cat'::tsquery + !'cat' + + + @> + tsquery contains another ? + 'cat'::tsquery @> 'cat & rat'::tsquery + f + + + <@ + tsquery is contained in ? + 'cat'::tsquery <@ 'cat & rat'::tsquery + t + + + +
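 These operators can be freely combined. As a brief sketch (assuming the standard english configuration is installed), one can concatenate two document fragments and test the result against a compound query:
 
SELECT (to_tsvector('english', 'The Fat') || to_tsvector('english', 'Rats'))
       @@ (to_tsquery('english', 'fat') && to_tsquery('english', 'rat'));
 ?column?
----------
 t
 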
- - All of the following functions that accept a configuration argument can - use a textual configuration name to select a configuration. If the option - is omitted the configuration specified by - default_text_search_config is used. For more information on - configuration, see . - + + + The tsquery containment operators consider only the lexemes + listed in the two queries, ignoring the combining operators. + + - - Search + + In addition to the operators shown in the table, the ordinary B-tree + comparison operators (=, <, etc) are defined + for types tsvector and tsquery. These are not very + useful for text searching but allow, for example, unique indexes to be + built on columns of these types. + - The operator @@ is used to perform full text - searches: - - - - - - - - TSVECTOR @@ TSQUERY - - - - - - TSVECTOR @@ TSQUERY - TSQUERY @@ TSVECTOR - - - - - - Returns true if TSQUERY is contained - in TSVECTOR, and false if not: - - -SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector @@ 'cat & rat'::tsquery; - ?column? ----------- - t - -SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector @@ 'fat & cow'::tsquery; - ?column? ----------- - f - - - - - - - - - - TEXT @@ TSQUERY - - - - - text @@ tsquery - - - - - - Returns true if TSQUERY is contained - in TEXT, and false if not: - - -SELECT 'a fat cat sat on a mat and ate a fat rat'::text @@ 'cat & rat'::tsquery; - ?column? ----------- - t - -SELECT 'a fat cat sat on a mat and ate a fat rat'::text @@ 'cat & cow'::tsquery; - ?column? ----------- - f - - - - - - - - - TEXT @@ TEXT - - - - - - text @@ text - - - - - - Returns true if the right - argument (the query) is contained in the left argument, and - false otherwise: - - -SELECT 'a fat cat sat on a mat and ate a fat rat' @@ 'cat rat'; - ?column? ----------- - t - -SELECT 'a fat cat sat on a mat and ate a fat rat' @@ 'cat cow'; - ?column? ----------- - f - - - - - - - + + Text Search Functions + + + + Function + Return Type + Description + Example + Result + + + + + to_tsvector( config regconfig , document text) + tsvector + reduce document text to tsvector + to_tsvector('english', 'The Fat Rats') + 'fat':2 'rat':3 + + + length(tsvector) + integer + number of lexemes in tsvector + length('fat:2,4 cat:3 rat:5A'::tsvector) + 3 + + + setweight(tsvector, "char") + tsvector + assign weight to each element of tsvector + setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A') + 'cat':3A 'fat':2A,4A 'rat':5A + + + strip(tsvector) + tsvector + remove positions and weights from tsvector + strip('fat:2,4 cat:3 rat:5A'::tsvector) + 'cat' 'fat' 'rat' + + + to_tsquery( config regconfig , query text) + tsquery + normalize words and convert to tsquery + to_tsquery('english', 'The & Fat & Rats') + 'fat' & 'rat' + + + plainto_tsquery( config regconfig , query text) + tsquery + produce tsquery ignoring punctuation + plainto_tsquery('english', 'The Fat Rats') + 'fat' & 'rat' + + + numnode(tsquery) + integer + number of lexemes plus operators in tsquery + numnode('(fat & rat) | cat'::tsquery) + 5 + + + querytree(query tsquery) + text + get indexable part of a tsquery + querytree('foo & ! 
bar'::tsquery) + 'foo' + + + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) + float4 + rank document for query + ts_rank(textsearch, query) + 0.818 + + + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) + float4 + rank document for query using cover density + ts_rank_cd('{0.1, 0.2, 0.4, 1.0}', textsearch, query) + 2.01317 + + + ts_headline( config regconfig, document text, query tsquery , options text ) + text + display a query match + ts_headline('x y z', 'z'::tsquery) + x y <b>z</b> + + + ts_rewrite(query tsquery, target tsquery, substitute tsquery) + tsquery + replace target with substitute within query + ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery) + 'b' & ( 'foo' | 'bar' ) + + + ts_rewrite(query tsquery, select text) + tsquery + replace using targets and substitutes from a SELECT command + SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases') + 'b' & ( 'foo' | 'bar' ) + + + get_current_ts_config() + regconfig + get default text search configuration + get_current_ts_config() + english + + + tsvector_update_trigger() + trigger + trigger function for automatic tsvector column update + CREATE TRIGGER ... tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body) + + + + tsvector_update_trigger_column() + trigger + trigger function for automatic tsvector column update + CREATE TRIGGER ... tsvector_update_trigger_column(tsvcol, configcol, title, body) + + + + + +
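 Several of these functions are commonly chained. For example (a sketch using only functions from the table above), title words can be labeled with weight A, and, when ranking is not needed, positions and weights can then be removed to save space:
 
SELECT setweight(to_tsvector('english', 'The Fat Rats'), 'A');
     setweight
-------------------
 'fat':2A 'rat':3A
 
SELECT strip(to_tsvector('english', 'The Fat Rats'));
    strip
-------------
 'fat' 'rat'
 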
+ - For index support of full text operators consult . + All the text search functions that accept an optional regconfig + argument will use the configuration specified by + + when that argument is omitted. - -
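 For example, assuming the installation's default configuration is pg_catalog.english (a sketch; the default varies by installation), the following comparison holds because both calls use the same configuration:
 
SELECT to_tsvector('The Fat Rats') = to_tsvector('english', 'The Fat Rats');
 ?column?
----------
 t
 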
- - - tsvector - - - - - - - to_tsvector - - - - - to_tsvector(config_name, document TEXT) returns TSVECTOR - - - - - - Parses a document into tokens, reduces the tokens to lexemes, and returns a - tsvector which lists the lexemes together with their positions in the document - in lexicographic order. - - - - - - - - - strip - - - - - strip(vector TSVECTOR) returns TSVECTOR - - - - - - Returns a vector which lists the same lexemes as the given vector, but - which lacks any information about where in the document each lexeme - appeared. While the returned vector is useless for relevance ranking it - will usually be much smaller. - - - - - - - - - setweight - - - - - setweight(vector TSVECTOR, letter) returns TSVECTOR - - - - - - This function returns a copy of the input vector in which every location - has been labeled with either the letter A, - B, or C, or the default label - D (which is the default for new vectors - and as such is usually not displayed). These labels are retained - when vectors are concatenated, allowing words from different parts of a - document to be weighted differently by ranking functions. - - - - - - - - tsvector concatenation - - - - - vector1 || vector2 - tsvector_concat(vector1 TSVECTOR, vector2 TSVECTOR) returns TSVECTOR - - - - - - Returns a vector which combines the lexemes and positional information of - the two vectors given as arguments. Positional weight labels (described - in the previous paragraph) are retained during the concatenation. This - has at least two uses. First, if some sections of your document need to be - parsed with different configurations than others, you can parse them - separately and then concatenate the resulting vectors. Second, you can - weigh words from one section of your document differently than the others - by parsing the sections into separate vectors and assigning each vector - a different position label with the setweight() - function. You can then concatenate them into a single vector and provide - a weights argument to the ts_rank() function that assigns - different weights to positions with different labels. - - - - - - - - length(tsvector) - - - - - length(vector TSVECTOR) returns INT4 - - - - - - Returns the number of lexemes stored in the vector. - - - - - - - - text::tsvector - - - - - text::TSVECTOR returns TSVECTOR - - - - - - Directly casting text to a tsvector allows you - to directly inject lexemes into a vector with whatever positions and - positional weights you choose to specify. The text should be formatted to - match the way a vector is displayed by SELECT. - - - - - - - - - trigger - for updating a derived tsvector column - - - - - tsvector_update_trigger(tsvector_column_name, config_name, text_column_name , ... ) - tsvector_update_trigger_column(tsvector_column_name, config_column_name, text_column_name , ... ) - - - - - - Two built-in trigger functions are available to automatically update a - tsvector column from one or more textual columns. An example - of their use is: - - -CREATE TABLE tblMessages ( - strMessage text, - tsv tsvector -); - -CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE -ON tblMessages FOR EACH ROW EXECUTE PROCEDURE -tsvector_update_trigger(tsv, 'pg_catalog.english', strMessage); - - - Having created this trigger, any change in strMessage - will be automatically reflected into tsv. - - - - Both triggers require you to specify the text search configuration to - be used to perform the conversion. 
For - tsvector_update_trigger, the configuration name is simply - given as the second trigger argument. It must be schema-qualified as - shown above, so that the trigger behavior will not change with changes - in search_path. For - tsvector_update_trigger_column, the second trigger argument - is the name of another table column, which must be of type - regconfig. This allows a per-row selection of configuration - to be made. - - - - - - - - ts_stat - - - - - ts_stat(sqlquery text , weights text ) returns SETOF statinfo - - - - - - Here statinfo is a type, defined as: - - -CREATE TYPE statinfo AS (word text, ndoc integer, nentry integer); - - - and sqlquery is a text value containing a SQL query - which returns a single tsvector column. ts_stat - executes the query and returns statistics about the resulting - tsvector data, i.e., the number of documents, ndoc, - and the total number of words in the collection, nentry. It is - useful for checking your configuration and to find stop word candidates. For - example, to find the ten most frequent words: - - -SELECT * FROM ts_stat('SELECT vector from apod') -ORDER BY ndoc DESC, nentry DESC, word -LIMIT 10; - - - Optionally, one can specify weights to obtain - statistics about words with a specific weight: - - -SELECT * FROM ts_stat('SELECT vector FROM apod','a') -ORDER BY ndoc DESC, nentry DESC, word -LIMIT 10; - - - - - - - - - - Btree operations for tsvector - - - - - TSVECTOR < TSVECTOR - TSVECTOR <= TSVECTOR - TSVECTOR = TSVECTOR - TSVECTOR >= TSVECTOR - TSVECTOR > TSVECTOR - - - - - - All btree operations are defined for the tsvector type. - tsvectors are compared with each other using - lexicographical ordering. - - - - - - - - - - - tsquery - - - - - - - - to_tsquery - - - - - to_tsquery(config_name, querytext text) returns TSQUERY - - - - - - Accepts querytext, which should consist of single tokens - separated by the boolean operators & (and), | - (or) and ! (not), which can be grouped using parentheses. - In other words, to_tsquery expects already parsed text. - Each token is reduced to a lexeme using the specified or current configuration. - A weight class can be assigned to each lexeme entry to restrict the search region - (see setweight for an explanation). For example: - - -'fat:a & rats' - - - The to_tsquery function can also accept a text - string. In this case querytext should - be quoted. This may be useful, for example, to use with a thesaurus - dictionary. In the example below, a thesaurus contains rule supernovae - stars : sn: - - -SELECT to_tsquery('''supernovae stars'' & !crab'); - to_tsquery ---------------- - 'sn' & !'crab' - - - Without quotes to_tsquery will generate a syntax error. - - - - - - - - - - - plainto_tsquery - - - - - plainto_tsquery(config_name, querytext text) returns TSQUERY - - - - - - Transforms unformatted text querytext to tsquery. - It is the same as to_tsquery but accepts text - without quotes and will call the parser to break it into tokens. - plainto_tsquery assumes the & boolean - operator between words and does not recognize weight classes. - - - - - - - - - - querytree - - - - - querytree(query TSQUERY) returns TEXT - - - - - - This returns the query used for searching an index. It can be used to test - for an empty query. 
The SELECT below returns NULL, - which corresponds to an empty query since GIN indexes do not support queries with negation - - (a full index scan is inefficient): - - -SELECT querytree(to_tsquery('!defined')); - querytree ------------ - - - - - - - - - - text::tsquery casting - - - - - text::TSQUERY returns TSQUERY - - - - - - Directly casting text to a tsquery - allows you to directly inject lexemes into a query using whatever positions - and positional weight flags you choose to specify. The text should be - formatted to match the way a vector is displayed by - SELECT. - - - - - - - - - numnode - - - - - numnode(query TSQUERY) returns INTEGER - - - - - - This returns the number of nodes in a query tree. This function can be - used to determine if query is meaningful - (returns > 0), or contains only stop words (returns 0): - - -SELECT numnode(plainto_tsquery('the any')); -NOTICE: query contains only stopword(s) or does not contain lexeme(s), ignored - numnode ---------- - 0 - -SELECT numnode(plainto_tsquery('the table')); - numnode ---------- - 1 - -SELECT numnode(plainto_tsquery('long table')); - numnode ---------- - 3 - - - - - - - - - TSQUERY && TSQUERY - - - - - TSQUERY && TSQUERY returns TSQUERY - - - - - - Returns AND-ed TSQUERY - - - - - - - - TSQUERY || TSQUERY - - - - - TSQUERY || TSQUERY returns TSQUERY - - - - - - Returns OR-ed TSQUERY - - - - - - - - !! TSQUERY - - - - - !! TSQUERY returns TSQUERY - - - - - - negation of TSQUERY - - - - - - - - Btree operations for tsquery - - - - - TSQUERY < TSQUERY - TSQUERY <= TSQUERY - TSQUERY = TSQUERY - TSQUERY >= TSQUERY - TSQUERY > TSQUERY - - - - - - All btree operations are defined for the tsquery type. - tsqueries are compared to each other using lexicographical - ordering. - - - - - - - - Query Rewriting - - - Query rewriting is a set of functions and operators for the - tsquery data type. It allows control at search - query time without reindexing (the opposite of the - thesaurus). For example, you can expand the search using synonyms - (new york, big apple, nyc, - gotham) or narrow the search to direct the user to some hot - topic. - - - - The ts_rewrite() function changes the original query by - replacing part of the query with some other string of type tsquery, - as defined by the rewrite rule. Arguments to ts_rewrite() - can be names of columns of type tsquery. - - - -CREATE TABLE aliases (t TSQUERY PRIMARY KEY, s TSQUERY); -INSERT INTO aliases VALUES('a', 'c'); - - - - - - - - ts_rewrite - - - - - ts_rewrite (query TSQUERY, target TSQUERY, sample TSQUERY) returns TSQUERY - - - - - - -SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery); - ts_rewrite ------------- - 'b' & 'c' - - - - - - - - - - ts_rewrite(ARRAY[query TSQUERY, target TSQUERY, sample TSQUERY]) returns TSQUERY - - - - - - -SELECT ts_rewrite(ARRAY['a & b'::tsquery, t,s]) FROM aliases; - ts_rewrite ------------- - 'b' & 'c' - - - - - - - - - - ts_rewrite (query TSQUERY,'SELECT target ,sample FROM test'::text) returns TSQUERY - - - - - - -SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases'); - ts_rewrite ------------- - 'b' & 'c' - - - - - - - - - What if there are several instances of rewriting? For example, query - 'a & b' can be rewritten as - 'b & c' and 'cc'. 
- - -SELECT * FROM aliases; - t | s ------------+------ - 'a' | 'c' - 'x' | 'z' - 'a' & 'b' | 'cc' - - - This ambiguity can be resolved by specifying a sort order: - - -SELECT ts_rewrite('a & b', 'SELECT t, s FROM aliases ORDER BY t DESC'); - ts_rewrite - --------- - 'cc' - -SELECT ts_rewrite('a & b', 'SELECT t, s FROM aliases ORDER BY t ASC'); - ts_rewrite --------------- - 'b' & 'c' - - - - - Let's consider a real-life astronomical example. We'll expand query - supernovae using table-driven rewriting rules: - - -CREATE TABLE aliases (t tsquery primary key, s tsquery); -INSERT INTO aliases VALUES(to_tsquery('supernovae'), to_tsquery('supernovae|sn')); - -SELECT ts_rewrite(to_tsquery('supernovae'), 'SELECT * FROM aliases') && to_tsquery('crab'); - ?column? -------------------------------- -( 'supernova' | 'sn' ) & 'crab' - - - Notice, that we can change the rewriting rule online: - - -UPDATE aliases SET s=to_tsquery('supernovae|sn & !nebulae') WHERE t=to_tsquery('supernovae'); -SELECT ts_rewrite(to_tsquery('supernovae'), 'SELECT * FROM aliases') && to_tsquery('crab'); - ?column? ------------------------------------------------ - 'supernova' | 'sn' & !'nebula' ) & 'crab' - - - - - - Operators For tsquery - - - Rewriting can be slow for many rewriting rules since it checks every rule - for a possible hit. To filter out obvious non-candidate rules there are containment - operators for the tsquery type. In the example below, we select only those - rules which might contain the original query: - - -SELECT ts_rewrite(ARRAY['a & b'::tsquery, t,s]) -FROM aliases -WHERE 'a & b' @> t; - ts_rewrite ------------- - 'b' & 'c' - - - - - - Two operators are defined for tsquery: - - - - - - - - TSQUERY @> TSQUERY - - - - - TSQUERY @> TSQUERY - - - - - - Returns true if the right argument might be contained in left argument. - - - - - - - - tsquery <@ tsquery - - - - - TSQUERY <@ TSQUERY - - - - - - Returns true if the left argument might be contained in right argument. - - - - - - - - - - - Index For tsquery - - - To speed up operators <@ and @> for - tsquery one can use a GiST index with - a tsquery_ops opclass: - - -CREATE INDEX t_idx ON aliases USING gist (t tsquery_ops); - - - - - - + + + + The functions in + + are listed separately because they are not usually used in everyday text + searching operations. They are helpful for development and debugging + of new text search configurations. + + + + Text Search Debugging Functions + + + + Function + Return Type + Description + Example + Result + + + + + ts_debug( config regconfig, document text) + setof ts_debug + test a configuration + ts_debug('english', 'The Brightest supernovaes') + (lword,"Latin word",The,{english_stem},"english_stem: {}") ... + + + ts_lexize(dict regdictionary, token text) + text[] + test a dictionary + ts_lexize('english_stem', 'stars') + {star} + + + ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) + setof record + test a parser + ts_parse('default', 'foo - bar') + (1,foo) ... + + + ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) + setof record + test a parser + ts_parse(3722, 'foo - bar') + (1,foo) ... + + + ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) + setof record + get token types defined by parser + ts_token_type('default') + (1,lword,"Latin word") ... 
+ + + ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) + setof record + get token types defined by parser + ts_token_type(3722) + (1,lword,"Latin word") ... + + + ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) + setof record + get statistics of a tsvector column + ts_stat('SELECT vector from apod') + (foo,10,15) ... + + + +
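 For example, ts_stat can be used to find the most frequent words in a document collection, which helps in checking a configuration and spotting stop-word candidates. This sketch assumes a hypothetical table apod with a tsvector column named vector:
 
SELECT * FROM ts_stat('SELECT vector FROM apod')
ORDER BY ndoc DESC, nentry DESC, word
LIMIT 10;
 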
@@ -11653,12 +11057,12 @@ SELECT has_table_privilege('myschema.mytable', 'select'); shows functions that determine whether a certain object is visible in the - current schema search path. A table is said to be visible if its + current schema search path. + For example, a table is said to be visible if its containing schema is in the search path and no table of the same name appears earlier in the search path. This is equivalent to the statement that the table can be referenced by name without explicit - schema qualification. For example, to list the names of all - visible tables: + schema qualification. To list the names of all visible tables: SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); @@ -11702,6 +11106,30 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); boolean is table visible in search path
+ + pg_ts_config_is_visible(config_oid) + + boolean + is text search configuration visible in search path + + + pg_ts_dict_is_visible(dict_oid) + + boolean + is text search dictionary visible in search path + + + pg_ts_parser_is_visible(parser_oid) + + boolean + is text search parser visible in search path + + + pg_ts_template_is_visible(template_oid) + + boolean + is text search template visible in search path + pg_type_is_visible(type_oid) @@ -11727,19 +11155,25 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); pg_table_is_visible + + pg_ts_config_is_visible + + + pg_ts_dict_is_visible + + + pg_ts_parser_is_visible + + + pg_ts_template_is_visible + pg_type_is_visible - pg_conversion_is_visible, - pg_function_is_visible, - pg_operator_is_visible, - pg_opclass_is_visible, - pg_table_is_visible, and - pg_type_is_visible perform the visibility check for - conversions, functions, operators, operator classes, tables, and - types. Note that pg_table_is_visible can also be used + Each function performs the visibility check for one type of database + object. Note that pg_table_is_visible can also be used with views, indexes and sequences; pg_type_is_visible can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index 9dfeefea4d..77e14672e9 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -1,4 +1,4 @@ - + Full Text Search @@ -16,18 +16,16 @@ Full Text Searching (or just text search) provides - the capability to identify documents that satisfy a - query, and optionally to sort them by relevance to - the query. The most common type of search + the capability to identify natural-language documents that + satisfy a query, and optionally to sort them by + relevance to the query. The most common type of search is to find all documents containing given query terms and return them in order of their similarity to the query. Notions of query and similarity are very flexible and depend on the specific application. The simplest search considers query as a set of words and similarity as the frequency of query - words in the document. Full text indexing can be done inside the - database or outside. Doing indexing inside the database allows easy access - to document metadata to assist in indexing and display. + words in the document. @@ -41,14 +39,14 @@ - There is no linguistic support, even for English. Regular expressions are - not sufficient because they cannot easily handle derived words, - e.g., satisfies and satisfy. You might + There is no linguistic support, even for English. Regular expressions + are not sufficient because they cannot easily handle derived words, e.g., + satisfies and satisfy. You might miss documents that contain satisfies, although you probably would like to find them when searching for satisfy. It is possible to use OR - to search for any of them, but this is tedious and - error-prone (some words can have several thousand derivatives). + to search for multiple derived forms, but this is tedious and error-prone + (some words can have several thousand derivatives). @@ -61,8 +59,8 @@ - They tend to be slow because they process all documents for every search and - there is no index support. + They tend to be slow because there is no index support, so they must + process all documents for every search. 
@@ -166,17 +164,17 @@ functions and operators available for these data types (), the most important of which is the match operator @@, which we introduce in - . Full text searches can be accelerated + . Full text searches can be accelerated using indexes (). - What Is a <firstterm>Document</firstterm>? + What Is a Document? - text search - document + document + text search @@ -208,7 +206,7 @@ WHERE mid = did AND mid = 12; - Actually, in the previous example queries, COALESCE + Actually, in these example queries, coalesce should be used to prevent a single NULL attribute from causing a NULL result for the whole document. @@ -221,12 +219,25 @@ WHERE mid = did AND mid = 12; retrieve the document from the file system. However, retrieving files from outside the database requires superuser permissions or special function support, so this is usually less convenient than keeping all - the data inside PostgreSQL. + the data inside PostgreSQL. Also, keeping + everything inside the database allows easy access + to document metadata to assist in indexing and display. + + + + For text search purposes, each document must be reduced to the + preprocessed tsvector format. Searching and ranking + are performed entirely on the tsvector representation + of a document — the original text need only be retrieved + when the document has been selected for display to a user. + We therefore often speak of the tsvector as being the + document, but of course it is only a compact representation of + the full document. - - Performing Searches + + Basic Text Matching Full text searching in PostgreSQL is based on @@ -251,8 +262,8 @@ SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::t As the above example suggests, a tsquery is not just raw text, any more than a tsvector is. A tsquery - contains search terms, which must be already-normalized lexemes, and may - contain AND, OR, and NOT operators. + contains search terms, which must be already-normalized lexemes, and + may combine multiple terms using AND, OR, and NOT operators. (For details see .) There are functions to_tsquery and plainto_tsquery that are helpful in converting user-written text into a proper @@ -277,9 +288,9 @@ SELECT 'fat cats ate fat rats'::tsvector @@ to_tsquery('fat & rat'); f - since here no normalization of the word rats will occur: - the elements of a tsvector are lexemes, which are assumed - already normalized. + since here no normalization of the word rats will occur. + The elements of a tsvector are lexemes, which are assumed + already normalized, so rats does not match rat. @@ -305,14 +316,9 @@ text @@ text - + Configurations - - text search - configurations - - The above are all simple text search examples. As mentioned before, full text search functionality includes the ability to do many more things: @@ -334,7 +340,13 @@ text @@ text throughout the cluster but the same configuration within any one database, use ALTER DATABASE ... SET. Otherwise, you can set default_text_search_config in each session. - Many functions also take an optional configuration name. + + + + Each text search function that depends on a configuration has an optional + regconfig argument, so that the configuration to use can be + specified explicitly. default_text_search_config + is used only when this argument is omitted. @@ -369,7 +381,7 @@ text @@ text - Text search configurations specify a parser and a set + Text search configurations select a parser and a set of dictionaries to use to normalize the tokens produced by the parser. 
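 For example, the session default can be inspected and changed as follows (a minimal sketch, assuming the standard english configuration is installed; in psql, \dF lists the configurations actually available):
 
SET default_text_search_config = 'pg_catalog.english';
 
SHOW default_text_search_config;
 default_text_search_config
-----------------------------
 pg_catalog.english
 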
@@ -395,9 +407,9 @@ text @@ text Tables and Indexes - The previous section described how to perform full text searches using - constant strings. This section shows how to search table data, optionally - using indexes. + The examples in the previous section illustrated full text matching using + simple constant strings. This section shows how to search table data, + optionally using indexes. @@ -411,9 +423,15 @@ text @@ text SELECT title FROM pgweb -WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend') +WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend'); + This will also find related words such as friends + and friendly, since all these are reduced to the same + normalized lexeme. + + + The query above specifies that the english configuration is to be used to parse and normalize the strings. Alternatively we could omit the configuration parameters: @@ -421,11 +439,15 @@ WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend') SELECT title FROM pgweb -WHERE to_tsvector(body) @@ to_tsquery('friend') +WHERE to_tsvector(body) @@ to_tsquery('friend'); This query will use the configuration set by . A more complex query is to + linkend="guc-default-text-search-config">. + + + + A more complex example is to select the ten most recent documents that contain create and table in the title or body: @@ -433,12 +455,10 @@ WHERE to_tsvector(body) @@ to_tsquery('friend') SELECT title FROM pgweb WHERE to_tsvector(title || body) @@ to_tsquery('create & table') -ORDER BY dlm DESC LIMIT 10; +ORDER BY last_mod_date DESC LIMIT 10; - dlm is the last-modified date so we - used ORDER BY dlm LIMIT 10 to get the ten most recent - matches. For clarity we omitted the COALESCE function + For clarity we omitted the coalesce function which would be needed to search rows that contain NULL in one of the two fields. @@ -446,7 +466,7 @@ ORDER BY dlm DESC LIMIT 10; Although these queries will work without an index, most applications will find this approach too slow, except perhaps for occasional ad-hoc - queries. Practical use of text searching usually requires creating + searches. Practical use of text searching usually requires creating an index. @@ -486,7 +506,7 @@ CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector('english', body)); - It is possible to set up more complex expression indexes where the + It is possible to set up more complex expression indexes wherein the configuration name is specified by another column, e.g.: @@ -495,7 +515,9 @@ CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector(config_name, body)); where config_name is a column in the pgweb table. This allows mixed configurations in the same index while - recording which configuration was used for each index entry. Again, + recording which configuration was used for each index entry. This + would be useful, for example, if the document collection contained + documents in different languages. Again, queries that are to use the index must be phrased to match, e.g. WHERE to_tsvector(config_name, body) @@ 'a & b'. @@ -510,16 +532,15 @@ CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector('english', title || body)) Another approach is to create a separate tsvector column - to hold the output of to_tsvector(). This example is a + to hold the output of to_tsvector. This example is a concatenation of title and body, - with ranking information. 
We assign different labels to them to encode - information about the origin of each word: + using coalesce to ensure that one field will still be + indexed when the other is NULL: ALTER TABLE pgweb ADD COLUMN textsearch_index tsvector; UPDATE pgweb SET textsearch_index = - setweight(to_tsvector('english', coalesce(title,'')), 'A') || - setweight(to_tsvector('english', coalesce(body,'')),'D'); + to_tsvector('english', coalesce(title,'') || coalesce(body,'')); Then we create a GIN index to speed up the search: @@ -531,10 +552,10 @@ CREATE INDEX textsearch_idx ON pgweb USING gin(textsearch_index); Now we are ready to perform a fast full text search: -SELECT ts_rank_cd(textsearch_index, q) AS rank, title -FROM pgweb, to_tsquery('create & table') q -WHERE q @@ textsearch_index -ORDER BY rank DESC LIMIT 10; +SELECT title +FROM pgweb +WHERE to_tsquery('create & table') @@ textsearch_index +ORDER BY last_mod_date DESC LIMIT 10; @@ -543,23 +564,21 @@ ORDER BY rank DESC LIMIT 10; representation, it is necessary to create a trigger to keep the tsvector column current anytime title or body changes. - A predefined trigger function tsvector_update_trigger - is available for this, or you can write your own. - Keep in mind that, just as with expression indexes, it is important to - specify the configuration name when creating tsvector values - inside triggers, so that the column's contents are not affected by changes - to default_text_search_config. + explains how to do that. - The main advantage of this approach over an expression index is that - it is not necessary to explicitly specify the text search configuration - in queries in order to make use of the index. As in the example above, - the query can depend on default_text_search_config. - Another advantage is that searches will be faster, since - it will not be necessary to redo the to_tsvector calls - to verify index matches. (This is more important when using a GiST - index than a GIN index; see .) + One advantage of the separate-column approach over an expression index + is that it is not necessary to explicitly specify the text search + configuration in queries in order to make use of the index. As shown + in the example above, the query can depend on + default_text_search_config. Another advantage is that + searches will be faster, since it will not be necessary to redo the + to_tsvector calls to verify index matches. (This is more + important when using a GiST index than a GIN index; see .) The expression-index approach is + simpler to set up, however, and it requires less disk space since the + tsvector representation is not stored explicitly. @@ -567,31 +586,42 @@ ORDER BY rank DESC LIMIT 10; - Additional Controls + Controlling Text Search To implement full text searching there must be a function to create a tsvector from a document and a tsquery from a - user query. Also, we need to return results in some order, i.e., we need + user query. Also, we need to return results in a useful order, so we need a function that compares documents with respect to their relevance to - the tsquery. + the query. It's also important to be able to display the results nicely. PostgreSQL provides support for all of these functions. - - Parsing - - - text search - parse - + + Parsing Documents PostgreSQL provides the - function to_tsvector, which converts a document to - the tsvector data type. More details are available in , but for now consider a simple example: + function to_tsvector for converting a document to + the tsvector data type. 
+ + + + to_tsvector + + + + to_tsvector( config regconfig, document text) returns tsvector + + + + to_tsvector parses a textual document into tokens, + reduces the tokens to lexemes, and returns a tsvector which + lists the lexemes together with their positions in the document. + The document is processed according to the specified or default + text search configuration. + Here is a simple example: SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); @@ -611,19 +641,18 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); The to_tsvector function internally calls a parser - which breaks the document text into tokens and assigns a type to - each token. The default parser recognizes 23 token types. - For each token, a list of + which breaks the document text into tokens and assigns a type to + each token. For each token, a list of dictionaries () is consulted, where the list can vary depending on the token type. The first dictionary that recognizes the token emits one or more normalized lexemes to represent the token. For example, rats became rat because one of the dictionaries recognized that the word rats is a plural - form of rat. Some words are recognized as stop - words (), which causes them to - be ignored since they occur too frequently to be useful in searching. - In our example these are + form of rat. Some words are recognized as + stop words (), which + causes them to be ignored since they occur too frequently to be useful in + searching. In our example these are a, on, and it. If no dictionary in the list recognizes the token then it is also ignored. In this example that happened to the punctuation sign - @@ -631,7 +660,7 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); (Space symbols), meaning space tokens will never be indexed. The choices of parser, dictionaries and which types of tokens to index are determined by the selected text search configuration (). It is possible to have + linkend="textsearch-configuration">). It is possible to have many different configurations in the same database, and predefined configurations are available for various languages. 
In our example we used the default configuration english for the @@ -639,56 +668,13 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); - As another example, below is the output from the ts_debug - function (), which shows all details - of the text search parsing machinery: - - -SELECT * FROM ts_debug('english','a fat cat sat on a mat - it ate a fat rats'); - Alias | Description | Token | Dictionaries | Lexized token --------+---------------+-------+--------------+---------------- - lword | Latin word | a | {english} | english: {} - blank | Space symbols | | | - lword | Latin word | fat | {english} | english: {fat} - blank | Space symbols | | | - lword | Latin word | cat | {english} | english: {cat} - blank | Space symbols | | | - lword | Latin word | sat | {english} | english: {sat} - blank | Space symbols | | | - lword | Latin word | on | {english} | english: {} - blank | Space symbols | | | - lword | Latin word | a | {english} | english: {} - blank | Space symbols | | | - lword | Latin word | mat | {english} | english: {mat} - blank | Space symbols | | | - blank | Space symbols | - | | - lword | Latin word | it | {english} | english: {} - blank | Space symbols | | | - lword | Latin word | ate | {english} | english: {ate} - blank | Space symbols | | | - lword | Latin word | a | {english} | english: {} - blank | Space symbols | | | - lword | Latin word | fat | {english} | english: {fat} - blank | Space symbols | | | - lword | Latin word | rats | {english} | english: {rat} - (24 rows) - - - A more extensive example of ts_debug output - appears in . - - - - The function setweight() can be used to label the + The function setweight can be used to label the entries of a tsvector with a given weight, where a weight is one of the letters A, B, C, or D. This is typically used to mark entries coming from - different parts of a document. Later, this information can be - used for ranking of search results in addition to positional information - (distance between query terms). If no ranking is required, positional - information can be removed from tsvector using the - strip() function to save space. + different parts of a document, such as title versus body. Later, this + information can be used for ranking of search results. @@ -706,108 +692,122 @@ UPDATE tt SET ti = setweight(to_tsvector(coalesce(body,'')), 'D'); - Here we have used setweight() to label the source + Here we have used setweight to label the source of each lexeme in the finished tsvector, and then merged the labeled tsvector values using the tsvector - concatenation operator ||. + concatenation operator ||. ( gives details about these + operations.) + + + + + + Parsing Queries + + + PostgreSQL provides the + functions to_tsquery and + plainto_tsquery for converting a query to + the tsquery data type. to_tsquery + offers access to more features than plainto_tsquery, + but is less forgiving about its input. + + + + to_tsquery + + + + to_tsquery( config regconfig, querytext text) returns tsquery + + + + to_tsquery creates a tsquery value from + querytext, which must consist of single tokens + separated by the boolean operators & (AND), + | (OR) and ! (NOT). These operators + can be grouped using parentheses. In other words, the input to + to_tsquery must already follow the general rules for + tsquery input, as described in . 
The difference is that while basic + tsquery input takes the tokens at face value, + to_tsquery normalizes each token to a lexeme using + the specified or default configuration, and discards any tokens that are + stop words according to the configuration. For example: + + +SELECT to_tsquery('english', 'The & Fat & Rats'); + to_tsquery +--------------- + 'fat' & 'rat' + + + As in basic tsquery input, weight(s) can be attached to each + lexeme to restrict it to match only tsvector lexemes of those + weight(s). For example: + + +SELECT to_tsquery('english', 'Fat | Rats:AB'); + to_tsquery +------------------ + 'fat' | 'rat':AB + + + to_tsquery can also accept single-quoted + phrases. This is primarily useful when the configuration includes a + thesaurus dictionary that may trigger on such phrases. + In the example below, a thesaurus contains the rule supernovae + stars : sn: + + +SELECT to_tsquery('''supernovae stars'' & !crab'); + to_tsquery +--------------- + 'sn' & !'crab' + + + Without quotes, to_tsquery will generate a syntax + error for tokens that are not separated by an AND or OR operator. + + + + plainto_tsquery + + + + plainto_tsquery( config regconfig, querytext text) returns tsquery + + + + plainto_tsquery transforms unformatted text + querytext to tsquery. + The text is parsed and normalized much as for to_tsvector, + then the & (AND) boolean operator is inserted + between surviving words. - The following functions allow manual parsing control. They would - not normally be used during actual text searches, but they are very - useful for debugging purposes: - - - - - - - ts_parse - - - - - ts_parse(parser, document text, OUT tokid integer, OUT token text) returns SETOF RECORD - - - - - - Parses the given document and returns a - series of records, one for each token produced by parsing. Each record - includes a tokid showing the assigned token type - and a token which is the text of the token. + Example: -SELECT * FROM ts_parse('default','123 - a number'); - tokid | token --------+-------- - 22 | 123 - 12 | - 12 | - - 1 | a - 12 | - 1 | number + SELECT plainto_tsquery('english', 'The Fat Rats'); + plainto_tsquery +----------------- + 'fat' & 'rat' - - - - - - ts_token_type - - - - - ts_token_type(parser, OUT tokid integer, OUT alias text, OUT description text) returns SETOF RECORD - - - - - - Returns a table which describes each type of token the - parser can recognize. 
For each token - type the table gives the integer tokid that the - parser uses to label a - token of that type, the alias that - names the token type in configuration commands, - and a short description: + Note that plainto_tsquery cannot + recognize either boolean operators or weight labels in its input: -SELECT * FROM ts_token_type('default'); - tokid | alias | description --------+--------------+----------------------------------- - 1 | lword | Latin word - 2 | nlword | Non-latin word - 3 | word | Word - 4 | email | Email - 5 | url | URL - 6 | host | Host - 7 | sfloat | Scientific notation - 8 | version | VERSION - 9 | part_hword | Part of hyphenated word - 10 | nlpart_hword | Non-latin part of hyphenated word - 11 | lpart_hword | Latin part of hyphenated word - 12 | blank | Space symbols - 13 | tag | HTML Tag - 14 | protocol | Protocol head - 15 | hword | Hyphenated word - 16 | lhword | Latin hyphenated word - 17 | nlhword | Non-latin hyphenated word - 18 | uri | URI - 19 | file | File or path name - 20 | float | Decimal notation - 21 | int | Signed integer - 22 | uint | Unsigned integer - 23 | entity | HTML Entity +SELECT plainto_tsquery('english', 'The Fat & Rats:C'); + plainto_tsquery +--------------------- + 'fat' & 'rat' & 'c' - - - - - + Here, all the input punctuation was discarded as being space symbols. @@ -817,22 +817,17 @@ SELECT * FROM ts_token_type('default'); Ranking attempts to measure how relevant documents are to a particular - query, typically by checking the number of times each search term appears - in the document and whether the search terms occur near each other. - PostgreSQL provides two predefined ranking - functions, which take into account lexical, - proximity, and structural information. However, the concept of - relevancy is vague and very application-specific. Different applications - might require additional information for ranking, e.g. document - modification time. - - - - The lexical part of ranking reflects how often the query terms appear in - the document, how close the document query terms are, and in what part of - the document they occur. Note that ranking functions that use positional - information will only work on unstripped tsvectors because stripped - tsvectors lack positional information. + query, so that when there are many matches the most relevant ones can be + shown first. PostgreSQL provides two + predefined ranking functions, which take into account lexical, proximity, + and structural information; that is, they consider how often the query + terms appear in the document, how close together the terms are in the + document, and how important is the part of the document where they occur. + However, the concept of relevancy is vague and very application-specific. + Different applications might require additional information for ranking, + e.g. document modification time. The built-in ranking functions are only + examples. You can write your own ranking functions and/or combine their + results with additional factors to fit your specific needs. @@ -848,31 +843,13 @@ SELECT * FROM ts_token_type('default'); - ts_rank( weights float4[], vector tsvector, query tsquery , normalization int4 ) returns float4 + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 - The optional weights - argument offers the ability to weigh word instances more or less - heavily depending on how you have classified them. 
The weights specify - how heavily to weigh each category of word: - - -{D-weight, C-weight, B-weight, A-weight} - - - If no weights are provided, - then these defaults are used: - - -{0.1, 0.2, 0.4, 1.0} - - - Often weights are used to mark words from special areas of the document, - like the title or an initial abstract, and make them more or less important - than words in the document body. + Standard ranking function. @@ -885,16 +862,23 @@ SELECT * FROM ts_token_type('default'); - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization int4 ) returns float4 + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 - This function computes the cover density ranking for - the given document vector and query, as described in Clarke, Cormack, and - Tudhope's "Relevance Ranking for One to Three Term Queries" in the - journal "Information Processing and Management", 1999. + This function computes the cover density + ranking for the given document vector and query, as described in + Clarke, Cormack, and Tudhope's "Relevance Ranking for One to Three + Term Queries" in the journal "Information Processing and Management", + 1999. + + + + This function requires positional information in its input. + Therefore it will not work on stripped tsvector + values — it will always return zero. @@ -903,15 +887,38 @@ SELECT * FROM ts_token_type('default'); + + For both these functions, + the optional weights + argument offers the ability to weigh word instances more or less + heavily depending on how they are labeled. The weight arrays specify + how heavily to weigh each category of word, in the order: + + +{D-weight, C-weight, B-weight, A-weight} + + + If no weights are provided, + then these defaults are used: + + +{0.1, 0.2, 0.4, 1.0} + + + Typically weights are used to mark words from special areas of the + document, like the title or an initial abstract, so that they can be + treated as more or less important than words in the document body. + + Since a longer document has a greater chance of containing a query term - it is reasonable to take into account document size, i.e. a hundred-word + it is reasonable to take into account document size, e.g. a hundred-word document with five instances of a search word is probably more relevant than a thousand-word document with five instances. Both ranking functions take an integer normalization option that - specifies whether a document's length should impact its rank. The integer - option controls several behaviors, so it is a bit mask: you can specify - one or more behaviors using + specifies whether and how a document's length should impact its rank. + The integer option controls several behaviors, so it is a bit mask: + you can specify one or more behaviors using | (for example, 2|4). @@ -927,12 +934,11 @@ SELECT * FROM ts_token_type('default'); - 2 divides the rank by the length itself + 2 divides the rank by the document length - 4 divides the rank by the mean harmonic distance between extents @@ -943,7 +949,8 @@ SELECT * FROM ts_token_type('default'); - 16 divides the rank by 1 + logarithm of the number of unique words in document + 16 divides the rank by 1 + the logarithm of the number + of unique words in document @@ -953,21 +960,21 @@ SELECT * FROM ts_token_type('default'); It is important to note that the ranking functions do not use any global information so it is impossible to produce a fair normalization to 1% or - 100%, as sometimes required. 
However, a simple technique like + 100%, as sometimes desired. However, a simple technique like rank/(rank+1) can be applied. Of course, this is just - a cosmetic change, i.e., the ordering of the search results will not change. + a cosmetic change, i.e., the ordering of the search results will not + change. - Several examples are shown below; note that the second example uses - normalized ranking: + Here is an example that selects only the ten highest-ranked matches: -SELECT title, ts_rank_cd('{0.1, 0.2, 0.4, 1.0}',textsearch, query) AS rnk +SELECT title, ts_rank_cd(textsearch, query) AS rank FROM apod, to_tsquery('neutrino|(dark & matter)') query WHERE query @@ textsearch -ORDER BY rnk DESC LIMIT 10; - title | rnk +ORDER BY rank DESC LIMIT 10; + title | rank -----------------------------------------------+---------- Neutrinos in the Sun | 3.1 The Sudbury Neutrino Detector | 2.4 @@ -979,13 +986,16 @@ ORDER BY rnk DESC LIMIT 10; Hot Gas and Dark Matter | 1.6123 Ice Fishing for Cosmic Neutrinos | 1.6 Weak Lensing Distorts the Universe | 0.818218 + -SELECT title, ts_rank_cd('{0.1, 0.2, 0.4, 1.0}',textsearch, query)/ -(ts_rank_cd('{0.1, 0.2, 0.4, 1.0}',textsearch, query) + 1) AS rnk + This is the same example using normalized ranking: + + +SELECT title, ts_rank_cd(textsearch, query)/(ts_rank_cd(textsearch, query) + 1) AS rank FROM apod, to_tsquery('neutrino|(dark & matter)') query WHERE query @@ textsearch -ORDER BY rnk DESC LIMIT 10; - title | rnk +ORDER BY rank DESC LIMIT 10; + title | rank -----------------------------------------------+------------------- Neutrinos in the Sun | 0.756097569485493 The Sudbury Neutrino Detector | 0.705882361190954 @@ -998,31 +1008,13 @@ ORDER BY rnk DESC LIMIT 10; Ice Fishing for Cosmic Neutrinos | 0.615384618911517 Weak Lensing Distorts the Universe | 0.450010798361481 - - - - The first argument in ts_rank_cd ('{0.1, 0.2, - 0.4, 1.0}') is an optional parameter which specifies the - weights for labels D, C, - B, and A used in function - setweight. These default values show that lexemes - labeled as A are ten times more important than ones - that are labeled with D. Ranking can be expensive since it requires consulting the - tsvector of all documents, which can be I/O bound and - therefore slow. Unfortunately, it is almost impossible to avoid since full - text searching in a database should work without indexes. Moreover an index can be lossy (a GiST - index, for example) so it must check documents to avoid false hits. - - - - Note that the ranking functions above are only examples. You can write - your own ranking functions and/or combine additional factors to fit your - specific needs. + tsvector of each matching document, which can be I/O bound and + therefore slow. Unfortunately, it is almost impossible to avoid since + practical queries often result in large numbers of matches. @@ -1030,45 +1022,34 @@ ORDER BY rnk DESC LIMIT 10; Highlighting Results - - headline - - To present search results it is ideal to show a part of each document and how it is related to the query. Usually, search engines show fragments of the document with marked search terms. PostgreSQL - provides a function headline that + provides a function ts_headline that implements this functionality. 
- + + ts_headline + - - - - - ts_headline( config_name text, document text, query tsquery , options text ) returns text - - - - - - The ts_headline function accepts a document along - with a query, and returns one or more ellipsis-separated excerpts from - the document in which terms from the query are highlighted. The - configuration to be used to parse the document can be specified by its - config_name; if none is specified, the - default_text_search_config configuration is used. - - - - - - + + ts_headline( config regconfig, document text, query tsquery , options text ) returns text + - If an options string is specified it should + ts_headline accepts a document along + with a query, and returns one or more ellipsis-separated excerpts from + the document in which terms from the query are highlighted. The + configuration to be used to parse the document can be specified by + config; if config + is omitted, the + default_text_search_config configuration is used. + + + + If an options string is specified it must consist of a comma-separated list of one or more option=value pairs. The available options are: @@ -1089,8 +1070,8 @@ ORDER BY rnk DESC LIMIT 10; - ShortWord: the minimum length of a word that begins - or ends a headline. The default + ShortWord: words of this length or less will be + dropped at the start and end of a headline. The default value of three eliminates the English articles. @@ -1113,31 +1094,41 @@ StartSel=<b>, StopSel=</b>, MaxWords=35, MinWords=15, ShortWord=3, H For example: -SELECT ts_headline('a b c', 'c'::tsquery); - headline --------------- - a b <b>c</b> +SELECT ts_headline('ts_headline accepts a document along +with a query, and returns one or more ellipsis-separated excerpts from +the document in which terms from the query are highlighted.', + to_tsquery('ellipsis & term')); + ts_headline +-------------------------------------------------------------------- + <b>ellipsis</b>-separated excerpts from + the document in which <b>terms</b> from the query are highlighted. -SELECT ts_headline('a b c', 'c'::tsquery, 'StartSel=<,StopSel=>'); - ts_headline -------------- - a b <c> +SELECT ts_headline('ts_headline accepts a document along +with a query, and returns one or more ellipsis-separated excerpts from +the document in which terms from the query are highlighted.', + to_tsquery('ellipsis & term'), + 'StartSel = <, StopSel = >'); + ts_headline +--------------------------------------------------------------- + <ellipsis>-separated excerpts from + the document in which <terms> from the query are highlighted. - headline uses the original document, not - tsvector, so it can be slow and should be used with care. - A typical mistake is to call headline for + ts_headline uses the original document, not a + tsvector summary, so it can be slow and should be used with + care. A typical mistake is to call ts_headline for every matching document when only ten documents are to be shown. SQL subselects can help; here is an example: -SELECT id,ts_headline(body,q), rank -FROM (SELECT id,body,q, ts_rank_cd (ti,q) AS rank FROM apod, to_tsquery('stars') q -WHERE ti @@ q -ORDER BY rank DESC LIMIT 10) AS foo; +SELECT id, ts_headline(body,q), rank +FROM (SELECT id, body, q, ts_rank_cd(ti,q) AS rank + FROM apod, to_tsquery('stars') q + WHERE ti @@ q + ORDER BY rank DESC LIMIT 10) AS foo; @@ -1145,6 +1136,799 @@ ORDER BY rank DESC LIMIT 10) AS foo; + + Additional Features + + + This section describes additional functions and operators that are + useful in connection with text search. 
+ + + + Manipulating Documents + + + showed how raw textual + documents can be converted into tsvector values. + PostgreSQL also provides functions and + operators that can be used to manipulate documents that are already + in tsvector form. + + + + + + + + tsvector concatenation + + + + + tsvector || tsvector + + + + + + The tsvector concatenation operator + returns a vector which combines the lexemes and positional information + of the two vectors given as arguments. Positions and weight labels + are retained during the concatenation. + Positions appearing in the right-hand vector are offset by the largest + position mentioned in the left-hand vector, so that the result is + nearly equivalent to the result of performing to_tsvector + on the concatenation of the two original document strings. (The + equivalence is not exact, because any stop-words removed from the + end of the left-hand argument will not affect the result, whereas + they would have affected the positions of the lexemes in the + right-hand argument if textual concatenation were used.) + + + + One advantage of using concatenation in the vector form, rather than + concatenating text before applying to_tsvector, is that + you can use different configurations to parse different sections + of the document. Also, because the setweight function + marks all lexemes of the given vector the same way, it is necessary + to parse the text and do setweight before concatenating + if you want to label different parts of the document with different + weights. + + + + + + + + setweight + + + + + setweight(vector tsvector, weight "char") returns tsvector + + + + + + This function returns a copy of the input vector in which every + position has been labeled with the given weight, either + A, B, C, or + D. (D is the default for new + vectors and as such is not displayed on output.) These labels are + retained when vectors are concatenated, allowing words from different + parts of a document to be weighted differently by ranking functions. + + + + Note that weight labels apply to positions, not + lexemes. If the input vector has been stripped of + positions then setweight does nothing. + + + + + + + length(tsvector) + + + + + length(vector tsvector) returns integer + + + + + + Returns the number of lexemes stored in the vector. + + + + + + + + strip + + + + + strip(vector tsvector) returns tsvector + + + + + + Returns a vector which lists the same lexemes as the given vector, but + which lacks any position or weight information. While the returned + vector is much less useful than an unstripped vector for relevance + ranking, it will usually be much smaller. + + + + + + + + + + + Manipulating Queries + + + showed how raw textual + queries can be converted into tsquery values. + PostgreSQL also provides functions and + operators that can be used to manipulate queries that are already + in tsquery form. + + + + + + + + + tsquery && tsquery + + + + + + Returns the AND-combination of the two given queries. + + + + + + + + + + tsquery || tsquery + + + + + + Returns the OR-combination of the two given queries. + + + + + + + + + + !! tsquery + + + + + + Returns the negation (NOT) of the given query. + + + + + + + + + numnode + + + + + numnode(query tsquery) returns integer + + + + + + Returns the number of nodes (lexemes plus operators) in a + tsquery. This function is useful + to determine if the query is meaningful + (returns > 0), or contains only stop words (returns 0). 
+      Examples:
+
+
+SELECT numnode(plainto_tsquery('the any'));
+NOTICE:  query contains only stopword(s) or doesn't contain lexeme(s), ignored
+ numnode
+---------
+       0
+
+SELECT numnode('foo & bar'::tsquery);
+ numnode
+---------
+       3
+
+
+
+     querytree
+
+     querytree(query tsquery) returns text
+
+     Returns the portion of a tsquery that can be used for
+     searching an index.  This function is useful for detecting
+     unindexable queries, for example those containing only stop words
+     or only negated terms.  For example:
+
+
+SELECT querytree(to_tsquery('!defined'));
+ querytree
+-----------
+
+
+
+
+   Query Rewriting
+
+   ts_rewrite
+
+    The ts_rewrite family of functions searches a
+    given tsquery for occurrences of a target
+    subquery, and replaces each occurrence with another
+    substitute subquery.  In essence this operation is a
+    tsquery-specific version of substring replacement.
+    A target and substitute combination can be
+    thought of as a query rewrite rule.  A collection
+    of such rewrite rules can be a powerful search aid.
+    For example, you can expand the search using synonyms
+    (e.g., new york, big apple, nyc,
+    gotham) or narrow the search to direct the user to some hot
+    topic.  There is some overlap in functionality between this feature
+    and thesaurus dictionaries ().
+    However, you can modify a set of rewrite rules on-the-fly without
+    reindexing, whereas updating a thesaurus requires reindexing to be
+    effective.
+
+
+     ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery
+
+     This form of ts_rewrite simply applies a single
+     rewrite rule: target
+     is replaced by substitute
+     wherever it appears in query.  For example:
+
+
+SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery);
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+
+     ts_rewrite(ARRAY[query tsquery, target tsquery, substitute tsquery]) returns tsquery
+
+     This form of ts_rewrite can be used as an
+     aggregate function.  The first array element supplies the query to
+     start from; for each input row, occurrences of the
+     target given in the second element are replaced
+     by the substitute given in the third element,
+     with the rewritten query carried forward to the next row.
+     For example:
+
+
+CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery);
+INSERT INTO aliases VALUES('a', 'c');
+
+SELECT ts_rewrite(ARRAY['a & b'::tsquery, t,s]) FROM aliases;
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+
+     ts_rewrite (query tsquery, select text) returns tsquery
+
+     This form of ts_rewrite accepts a starting
+     query and a SQL select command, which
+     is given as a text string.  The select must yield two
+     columns of tsquery type.  For each row of the
+     select result, occurrences of the first column value
+     (the target) are replaced by the second column value (the substitute)
+     within the current query value.  For example:
+
+
+CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery);
+INSERT INTO aliases VALUES('a', 'c');
+
+SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases');
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+    Note that when multiple rewrite rules are applied in this way,
+    the order of application can be important; so in practice you will
+    want the source query to ORDER BY some ordering key.
+
+
+   Let's consider a real-life astronomical example.
We'll expand query + supernovae using table-driven rewriting rules: + + +CREATE TABLE aliases (t tsquery primary key, s tsquery); +INSERT INTO aliases VALUES(to_tsquery('supernovae'), to_tsquery('supernovae|sn')); + +SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT * FROM aliases'); + ts_rewrite +--------------------------------- + 'crab' & ( 'supernova' | 'sn' ) + + + We can change the rewriting rules just by updating the table: + + +UPDATE aliases SET s = to_tsquery('supernovae|sn & !nebulae') WHERE t = to_tsquery('supernovae'); + +SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT * FROM aliases'); + ts_rewrite +--------------------------------------------- + 'crab' & ( 'supernova' | 'sn' & !'nebula' ) + + + + + Rewriting can be slow when there are many rewriting rules, since it + checks every rule for a possible hit. To filter out obvious non-candidate + rules we can use the containment operators for the tsquery + type. In the example below, we select only those rules which might match + the original query: + + +SELECT ts_rewrite('a & b'::tsquery, + 'SELECT t,s FROM aliases WHERE ''a & b''::tsquery @> t'); + ts_rewrite +------------ + 'b' & 'c' + + + + + + + + + Triggers for Automatic Updates + + + trigger + for updating a derived tsvector column + + + + When using a separate column to store the tsvector representation + of your documents, it is necessary to create a trigger to update the + tsvector column when the document content columns change. + Two built-in trigger functions are available for this, or you can write + your own. + + + + tsvector_update_trigger(tsvector_column_name, config_name, text_column_name , ... ) + tsvector_update_trigger_column(tsvector_column_name, config_column_name, text_column_name , ... ) + + + + These trigger functions automatically compute a tsvector + column from one or more textual columns, under the control of + parameters specified in the CREATE TRIGGER command. + An example of their use is: + + +CREATE TABLE messages ( + title text, + body text, + tsv tsvector +); + +CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE +ON messages FOR EACH ROW EXECUTE PROCEDURE +tsvector_update_trigger(tsv, 'pg_catalog.english', title, body); + +INSERT INTO messages VALUES('title here', 'the body text is here'); + +SELECT * FROM messages; + title | body | tsv +------------+-----------------------+---------------------------- + title here | the body text is here | 'bodi':4 'text':5 'titl':1 + +SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title & body'); + title | body +------------+----------------------- + title here | the body text is here + + + Having created this trigger, any change in title or + body will automatically be reflected into + tsv, without the application having to worry about it. + + + + The first trigger argument must be the name of the tsvector + column to be updated. The second argument specifies the text search + configuration to be used to perform the conversion. For + tsvector_update_trigger, the configuration name is simply + given as the second trigger argument. It must be schema-qualified as + shown above, so that the trigger behavior will not change with changes + in search_path. For + tsvector_update_trigger_column, the second trigger argument + is the name of another table column, which must be of type + regconfig. This allows a per-row selection of configuration + to be made. The remaining argument(s) are the names of textual columns + (of type text, varchar, or char). 
These + will be included in the document in the order given. NULL values will + be skipped (but the other columns will still be indexed). + + + + A limitation of the built-in triggers is that they treat all the + input columns alike. To process columns differently — for + example, to weight title differently from body — it is necessary + to write a custom trigger. Here is an example using + PL/pgSQL as the trigger language: + + +CREATE FUNCTION messages_trigger() RETURNS trigger AS $$ +begin + new.tsv := + setweight(to_tsvector('pg_catalog.english', coalesce(new.title,'')), 'A') || + setweight(to_tsvector('pg_catalog.english', coalesce(new.body,'')), 'D'); + return new; +end +$$ LANGUAGE plpgsql; + +CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE +ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger(); + + + + + Keep in mind that it is important to specify the configuration name + explicitly when creating tsvector values inside triggers, + so that the column's contents will not be affected by changes to + default_text_search_config. Failure to do this is likely to + lead to problems such as search results changing after a dump and reload. + + + + + + Gathering Document Statistics + + + ts_stat + + + + The function ts_stat is useful for checking your + configuration and for finding stop-word candidates. + + + + ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) returns setof record + + + + sqlquery is a text value containing a SQL + query which must return a single tsvector column. + ts_stat executes the query and returns statistics about + each distinct lexeme (word) contained in the tsvector + data. The columns returned are + + + + + word text — the value of a lexeme + + + + + ndoc integer — number of documents + (tsvectors) the word occurred in + + + + + nentry integer — total number of + occurrences of the word + + + + + If weights is supplied, only occurrences + having one of those weights are counted. + + + + For example, to find the ten most frequent words in a document collection: + + +SELECT * FROM ts_stat('SELECT vector FROM apod') +ORDER BY nentry DESC, ndoc DESC, word +LIMIT 10; + + + The same, but counting only word occurrences with weight A + or B: + + +SELECT * FROM ts_stat('SELECT vector FROM apod', 'ab') +ORDER BY nentry DESC, ndoc DESC, word +LIMIT 10; + + + + + + + + + Parsers + + + Text search parsers are responsible for splitting raw document text + into tokens and identifying each token's type, where + the set of possible types is defined by the parser itself. + Note that a parser does not modify the text at all — it simply + identifies plausible word boundaries. Because of this limited scope, + there is less need for application-specific custom parsers than there is + for custom dictionaries. At present PostgreSQL + provides just one built-in parser, which has been found to be useful for a + wide range of applications. + + + + The built-in parser is named pg_catalog.default. 
+ It recognizes 23 token types: + + + + Default Parser's Token Types + + + + Alias + Description + Example + + + + + lword + Latin word (only ASCII letters) + foo + + + nlword + Non-latin word (only non-ASCII letters) + + + + word + Word (other cases) + beta1 + + + lhword + Latin hyphenated word + foo-bar + + + nlhword + Non-latin hyphenated word + + + + hword + Hyphenated word + foo-beta1 + + + lpart_hword + Latin part of hyphenated word + foo or bar in the context + foo-bar + + + nlpart_hword + Non-latin part of hyphenated word + + + + part_hword + Part of hyphenated word + beta1 in the context + foo-beta1 + + + email + Email address + foo@bar.com + + + protocol + Protocol head + http:// + + + url + URL + foo.com/stuff/index.html + + + host + Host + foo.com + + + uri + URI + /stuff/index.html, in the context of a URL + + + file + File or path name + /usr/local/foo.txt, if not within a URL + + + sfloat + Scientific notation + -1.234e56 + + + float + Decimal notation + -1.234 + + + int + Signed integer + -1234 + + + uint + Unsigned integer + 1234 + + + version + Version number + 8.3.0 + + + tag + HTML Tag + <A HREF="dictionaries.html"> + + + entity + HTML Entity + &amp; + + + blank + Space symbols + (any whitespace or punctuation not otherwise recognized) + + + +
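+
+    As a quick check of the entries above, sample tokens from the table
+    can be fed to the ts_debug function (described later in
+    this chapter); the following is only an illustration, assuming the
+    default parser and showing just the token-describing columns:
+
+
+SELECT "Alias", "Token" FROM ts_debug('foo@bar.com 8.3.0');
+  Alias  |    Token
+---------+-------------
+ email   | foo@bar.com
+ blank   |
+ version | 8.3.0
+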
+ + + It is possible for the parser to produce overlapping tokens from the same + piece of text. As an example, a hyphenated word will be reported both + as the entire word and as each component: + + +SELECT "Alias", "Description", "Token" FROM ts_debug('foo-bar-beta1'); + Alias | Description | Token +-------------+-------------------------------+--------------- + hword | Hyphenated word | foo-bar-beta1 + lpart_hword | Latin part of hyphenated word | foo + blank | Space symbols | - + lpart_hword | Latin part of hyphenated word | bar + blank | Space symbols | - + part_hword | Part of hyphenated word | beta1 + + + This behavior is desirable since it allows searches to work for both + the whole compound word and for components. Here is another + instructive example: + + +SELECT "Alias", "Description", "Token" FROM ts_debug('http://foo.com/stuff/index.html'); + Alias | Description | Token +----------+---------------+-------------------------- + protocol | Protocol head | http:// + url | URL | foo.com/stuff/index.html + host | Host | foo.com + uri | URI | /stuff/index.html + + + +
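+
+    When working with the parser interface directly, the numeric token
+    types reported by ts_parse can be matched up with their
+    names by joining against ts_token_type (both functions
+    are described under Parser Testing later in this chapter).  Here is
+    a sketch reusing the URL example above; note that without an explicit
+    sort key no particular row order is guaranteed:
+
+
+SELECT tt.alias, p.token
+FROM ts_parse('default', 'http://foo.com/stuff/index.html') AS p
+     JOIN ts_token_type('default') AS tt USING (tokid);
+  alias   |          token
+----------+--------------------------
+ protocol | http://
+ url      | foo.com/stuff/index.html
+ host     | foo.com
+ uri      | /stuff/index.html
+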
+ Dictionaries @@ -1239,8 +2023,9 @@ ORDER BY rank DESC LIMIT 10) AS foo; PostgreSQL provides predefined dictionaries for many languages. There are also several predefined templates that can be - used to create new dictionaries with custom parameters. If no existing - dictionary template is suitable, it is possible to create new ones; see the + used to create new dictionaries with custom parameters. Each predefined + dictionary template is described below. If no existing + template is suitable, it is possible to create new ones; see the contrib/ area of the PostgreSQL distribution for examples. @@ -1259,8 +2044,8 @@ ORDER BY rank DESC LIMIT 10) AS foo; general dictionaries, finishing with a very general dictionary, like a Snowball stemmer or simple, which recognizes everything. For example, for an astronomy-specific search - (astro_en configuration) one could bind - lword (latin word) with a synonym dictionary of astronomical + (astro_en configuration) one could bind token type + lword (Latin word) to a synonym dictionary of astronomical terms, a general English dictionary and a Snowball English stemmer: @@ -1292,15 +2077,15 @@ SELECT to_tsvector('english','in the list of stop words'); calculated for documents with and without stop words are quite different: -SELECT ts_rank_cd ('{1,1,1,1}', to_tsvector('english','in the list of stop words'), to_tsquery('list & stop')); +SELECT ts_rank_cd (to_tsvector('english','in the list of stop words'), to_tsquery('list & stop')); ts_rank_cd ------------ - 0.5 + 0.05 -SELECT ts_rank_cd ('{1,1,1,1}', to_tsvector('english','list stop words'), to_tsquery('list & stop')); +SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list & stop')); ts_rank_cd ------------ - 1 + 0.1 @@ -1471,36 +2256,37 @@ more sample word(s) : more indexed word(s) A thesaurus dictionary uses a subdictionary (which - is defined in the dictionary's configuration) to normalize the input text - before checking for phrase matches. It is only possible to select one + is specified in the dictionary's configuration) to normalize the input + text before checking for phrase matches. It is only possible to select one subdictionary. An error is reported if the subdictionary fails to - recognize a word. In that case, you should remove the use of the word or teach - the subdictionary about it. Use an asterisk (*) at the - beginning of an indexed word to skip the subdictionary. It is still required - that sample words are known. + recognize a word. In that case, you should remove the use of the word or + teach the subdictionary about it. You can place an asterisk + (*) at the beginning of an indexed word to skip applying + the subdictionary to it, but all sample words must be known + to the subdictionary. - The thesaurus dictionary looks for the longest match. + The thesaurus dictionary chooses the longest match if there are multiple + phrases matching the input, and ties are broken by using the last + definition. - Stop words recognized by the subdictionary are replaced by a stop word - placeholder to record their position. To break possible ties the thesaurus - uses the last definition. To illustrate this, consider a thesaurus (with - a simple subdictionary) with pattern - swsw, where s designates any stop word and - w, any known word: + Stop words recognized by the subdictionary are replaced by a stop + word placeholder to record their position. 
To illustrate this, + consider these phrases: a one the two : swsw the one a two : swsw2 - Words a and the are stop words defined in the - configuration of a subdictionary. The thesaurus considers the - one the two and that one then two as equal - and will use definition swsw2. + Assuming that a and the are stop words according + to the subdictionary, these two phrases are identical to the thesaurus: + they both look like stopword one + stopword two. Input matching this pattern + will be replaced by swsw2, according to the tie-breaking rule. @@ -1510,7 +2296,7 @@ the one a two : swsw2 accumulation. The thesaurus dictionary must be configured carefully. For example, if the thesaurus dictionary is assigned to handle only the lword token, then a thesaurus dictionary - definition like ' one 7' will not work since token type + definition like one 7 will not work since token type uint is not assigned to the thesaurus dictionary. @@ -1565,7 +2351,7 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_simple ( Now it is possible to bind the thesaurus dictionary thesaurus_simple - to the desired token types, for example: + to the desired token types in a configuration, for example: ALTER TEXT SEARCH CONFIGURATION russian @@ -1587,7 +2373,7 @@ supernovae stars : sn crab nebulae : crab - Below we create a dictionary and bind some token types with + Below we create a dictionary and bind some token types to an astronomical thesaurus and english stemmer: @@ -1632,7 +2418,7 @@ SELECT to_tsquery('''supernova star'''); Notice that supernova star matches supernovae stars in thesaurus_astro because we specified the english_stem stemmer in the thesaurus definition. - The stemmer removed the e. + The stemmer removed the e and s. @@ -1722,9 +2508,9 @@ compoundwords controlled z Here are some examples for the Norwegian language: -SELECT ts_lexize('norwegian_ispell','overbuljongterningpakkmesterassistent'); +SELECT ts_lexize('norwegian_ispell', 'overbuljongterningpakkmesterassistent'); {over,buljong,terning,pakk,mester,assistent} -SELECT ts_lexize('norwegian_ispell','sjokoladefabrikk'); +SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk'); {sjokoladefabrikk,sjokolade,fabrikk} @@ -1778,99 +2564,21 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( - - Dictionary Testing + - - The ts_lexize function facilitates dictionary testing: - - - - - - - ts_lexize - - - - - ts_lexize(dict_name text, token text) returns text[] - - - - - - Returns an array of lexemes if the input - token is known to the dictionary - dict_name, or an empty array if the token - is known to the dictionary but it is a stop word, or - NULL if it is an unknown word. - - - -SELECT ts_lexize('english_stem', 'stars'); - ts_lexize ------------ - {star} - -SELECT ts_lexize('english_stem', 'a'); - ts_lexize ------------ - {} - - - - - - - - - - The ts_lexize function expects a - token, not text. Below is an example: - - -SELECT ts_lexize('thesaurus_astro','supernovae stars') is null; - ?column? ----------- - t - - - The thesaurus dictionary thesaurus_astro does know - supernovae stars, but ts_lexize fails since it - does not parse the input text and considers it as a single token. Use - plainto_tsquery and to_tsvector to test thesaurus - dictionaries: - - -SELECT plainto_tsquery('supernovae stars'); - plainto_tsquery ------------------ - 'sn' - - - - - - Also, the ts_debug function () is helpful for testing dictionaries. 
- - - - - - Configuration Example + + Configuration Example A text search configuration specifies all options necessary to transform a document into a tsvector: the parser to use to break text into tokens, and the dictionaries to use to transform each token into a lexeme. Every call of - to_tsvector() or to_tsquery() + to_tsvector or to_tsquery needs a text search configuration to perform its processing. The configuration parameter - specifies the name of the current default configuration, which is the + specifies the name of the default configuration, which is the one used by text search functions if an explicit configuration parameter is omitted. It can be set in postgresql.conf, or set for an @@ -1887,13 +2595,11 @@ SELECT plainto_tsquery('supernovae stars'); As an example, we will create a configuration - pg which starts as a duplicate of the - english configuration. To be safe, we do this in a transaction: + pg, starting from a duplicate of the built-in + english configuration. -BEGIN; - -CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = english ); +CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english ); @@ -1908,7 +2614,7 @@ pgsql pg postgresql pg - We define the dictionary like this: + We define the synonym dictionary like this: CREATE TEXT SEARCH DICTIONARY pg_dict ( @@ -1918,7 +2624,7 @@ CREATE TEXT SEARCH DICTIONARY pg_dict ( Next we register the ispell dictionary - english_ispell: + english_ispell, which has its own configuration files: CREATE TEXT SEARCH DICTIONARY english_ispell ( @@ -1929,7 +2635,8 @@ CREATE TEXT SEARCH DICTIONARY english_ispell ( ); - Now modify the mappings for Latin words for configuration pg: + Now we can set up the mappings for Latin words for configuration + pg: ALTER TEXT SEARCH CONFIGURATION pg @@ -1937,7 +2644,8 @@ ALTER TEXT SEARCH CONFIGURATION pg WITH pg_dict, english_ispell, english_stem; - We do not index or search some token types: + We choose not to index or search some token types that the built-in + configuration does handle: ALTER TEXT SEARCH CONFIGURATION pg @@ -1946,11 +2654,9 @@ ALTER TEXT SEARCH CONFIGURATION pg - Now, we can test our configuration: + Now we can test our configuration: -COMMIT; - SELECT * FROM ts_debug('public.pg', ' PostgreSQL, the highly scalable, SQL compliant, open source object-relational database management system, is now undergoing beta testing of the next @@ -1977,11 +2683,331 @@ SHOW default_text_search_config; default_text_search_config ---------------------------- public.pg + + + + + + + Testing and Debugging Text Search + + + The behavior of a custom text search configuration can easily become + complicated enough to be confusing or undesirable. The functions described + in this section are useful for testing text search objects. You can + test a complete configuration, or test parsers and dictionaries separately. + + + + Configuration Testing + + + The function ts_debug allows easy testing of a + text search configuration. + + + + ts_debug + + + + ts_debug( config regconfig, document text) returns setof ts_debug + + + + ts_debug displays information about every token of + document as produced by the + parser and processed by the configured dictionaries. It uses the + configuration specified by config, + or default_text_search_config if that argument is + omitted. 
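+
+    For instance, when default_text_search_config is set to
+    english, these two calls are equivalent (their output
+    is shown in the example just below):
+
+
+SET default_text_search_config = 'english';
+
+SELECT * FROM ts_debug('a fat cat sat on a mat - it ate a fat rats');
+SELECT * FROM ts_debug('english', 'a fat cat sat on a mat - it ate a fat rats');
+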
+ + + + ts_debug's result row type is defined as: + + +CREATE TYPE ts_debug AS ( + "Alias" text, + "Description" text, + "Token" text, + "Dictionaries" regdictionary[], + "Lexized token" text +); + + + One row is produced for each token identified by the parser. + The first three columns describe the token, and the fourth lists + the dictionaries selected by the configuration for that token's type. + The last column shows the result of dictionary processing: which + dictionary (if any) recognized the token, and what it produced. + + + + Here is a simple example: + + +SELECT * FROM ts_debug('english','a fat cat sat on a mat - it ate a fat rats'); + Alias | Description | Token | Dictionaries | Lexized token +-------+---------------+-------+--------------+---------------- + lword | Latin word | a | {english} | english: {} + blank | Space symbols | | | + lword | Latin word | fat | {english} | english: {fat} + blank | Space symbols | | | + lword | Latin word | cat | {english} | english: {cat} + blank | Space symbols | | | + lword | Latin word | sat | {english} | english: {sat} + blank | Space symbols | | | + lword | Latin word | on | {english} | english: {} + blank | Space symbols | | | + lword | Latin word | a | {english} | english: {} + blank | Space symbols | | | + lword | Latin word | mat | {english} | english: {mat} + blank | Space symbols | | | + blank | Space symbols | - | | + lword | Latin word | it | {english} | english: {} + blank | Space symbols | | | + lword | Latin word | ate | {english} | english: {ate} + blank | Space symbols | | | + lword | Latin word | a | {english} | english: {} + blank | Space symbols | | | + lword | Latin word | fat | {english} | english: {fat} + blank | Space symbols | | | + lword | Latin word | rats | {english} | english: {rat} + (24 rows) + + + + + For a more extensive demonstration, we + first create a public.english configuration and + ispell dictionary for the English language: + + + +CREATE TEXT SEARCH CONFIGURATION public.english ( COPY = pg_catalog.english ); + +CREATE TEXT SEARCH DICTIONARY english_ispell ( + TEMPLATE = ispell, + DictFile = english, + AffFile = english, + StopWords = english +); + +ALTER TEXT SEARCH CONFIGURATION public.english + ALTER MAPPING FOR lword WITH english_ispell, english_stem; + + + +SELECT * FROM ts_debug('public.english','The Brightest supernovaes'); + Alias | Description | Token | Dictionaries | Lexized token +-------+---------------+-------------+-------------------------------------------------+------------------------------------- + lword | Latin word | The | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {} + blank | Space symbols | | | + lword | Latin word | Brightest | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {bright} + blank | Space symbols | | | + lword | Latin word | supernovaes | {public.english_ispell,pg_catalog.english_stem} | pg_catalog.english_stem: {supernova} +(5 rows) + + + + In this example, the word Brightest was recognized by the + parser as a Latin word (alias lword). + For this token type the dictionary list is + public.english_ispell and + pg_catalog.english_stem. The word was recognized by + public.english_ispell, which reduced it to the noun + bright. 
The word supernovaes is + unknown to the public.english_ispell dictionary so it + was passed to the next dictionary, and, fortunately, was recognized (in + fact, public.english_stem is a Snowball dictionary which + recognizes everything; that is why it was placed at the end of the + dictionary list). + + + + The word The was recognized by the + public.english_ispell dictionary as a stop word () and will not be indexed. + The spaces are discarded too, since the configuration provides no + dictionaries at all for them. + + + + You can reduce the volume of output by explicitly specifying which columns + you want to see: + + +SELECT "Alias", "Token", "Lexized token" +FROM ts_debug('public.english','The Brightest supernovaes'); + Alias | Token | Lexized token +-------+-------------+-------------------------------------- + lword | The | public.english_ispell: {} + blank | | + lword | Brightest | public.english_ispell: {bright} + blank | | + lword | supernovaes | pg_catalog.english_stem: {supernova} +(5 rows) + + + + + + + Parser Testing + + + The following functions allow direct testing of a text search parser. + + + + ts_parse + + + + ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) returns setof record + ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) returns setof record + + + + ts_parse parses the given document + and returns a series of records, one for each token produced by + parsing. Each record includes a tokid showing the + assigned token type and a token which is the text of the + token. For example: + + +SELECT * FROM ts_parse('default', '123 - a number'); + tokid | token +-------+-------- + 22 | 123 + 12 | + 12 | - + 1 | a + 12 | + 1 | number + + + + + ts_token_type + + + + ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) returns setof record + ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) returns setof record + + + + ts_token_type returns a table which describes each type of + token the specified parser can recognize. For each token type, the table + gives the integer tokid that the parser uses to label a + token of that type, the alias that names the token type + in configuration commands, and a short description. For + example: + + +SELECT * FROM ts_token_type('default'); + tokid | alias | description +-------+--------------+----------------------------------- + 1 | lword | Latin word + 2 | nlword | Non-latin word + 3 | word | Word + 4 | email | Email + 5 | url | URL + 6 | host | Host + 7 | sfloat | Scientific notation + 8 | version | VERSION + 9 | part_hword | Part of hyphenated word + 10 | nlpart_hword | Non-latin part of hyphenated word + 11 | lpart_hword | Latin part of hyphenated word + 12 | blank | Space symbols + 13 | tag | HTML Tag + 14 | protocol | Protocol head + 15 | hword | Hyphenated word + 16 | lhword | Latin hyphenated word + 17 | nlhword | Non-latin hyphenated word + 18 | uri | URI + 19 | file | File or path name + 20 | float | Decimal notation + 21 | int | Signed integer + 22 | uint | Unsigned integer + 23 | entity | HTML Entity + + Dictionary Testing + + + The ts_lexize function facilitates dictionary testing. + + + + ts_lexize + + + + ts_lexize(dict regdictionary, token text) returns text[] + + + + ts_lexize returns an array of lexemes if the input + token is known to the dictionary, + or an empty array if the token + is known to the dictionary but it is a stop word, or + NULL if it is an unknown word. 
+ + + + Examples: + + +SELECT ts_lexize('english_stem', 'stars'); + ts_lexize +----------- + {star} + +SELECT ts_lexize('english_stem', 'a'); + ts_lexize +----------- + {} + + + + + + The ts_lexize function expects a single + token, not text. Here is a case + where this can be confusing: + + +SELECT ts_lexize('thesaurus_astro','supernovae stars') is null; + ?column? +---------- + t + + + The thesaurus dictionary thesaurus_astro does know the + phrase supernovae stars, but ts_lexize + fails since it does not parse the input text but treats it as a single + token. Use plainto_tsquery or to_tsvector to + test thesaurus dictionaries, for example: + + +SELECT plainto_tsquery('supernovae stars'); + plainto_tsquery +----------------- + 'sn' + + + + + + @@ -1989,10 +3015,9 @@ SHOW default_text_search_config; text search - index + indexes - There are two kinds of indexes that can be used to speed up full text searches. @@ -2004,11 +3029,6 @@ SHOW default_text_search_config; - - text search - GiST - - index GiST @@ -2032,11 +3052,6 @@ SHOW default_text_search_config; - - text search - GIN - - index GIN @@ -2060,6 +3075,11 @@ SHOW default_text_search_config; + + There are substantial performance differences between the two index types, + so it is important to understand which to use. + + A GiST index is lossy, meaning it is necessary to check the actual table row to eliminate false matches. @@ -2076,43 +3096,21 @@ EXPLAIN SELECT * FROM apod WHERE textsearch @@ to_tsquery('supernovae'); Filter: (textsearch @@ '''supernova'''::tsquery) - GiST index lossiness happens because each document is represented by a - fixed-length signature. The signature is generated by hashing (crc32) each - word into a random bit in an n-bit string and all words combine to produce - an n-bit document signature. Because of hashing there is a chance that - some words hash to the same position and could result in a false hit. - Signatures calculated for each document in a collection are stored in an - RD-tree (Russian Doll tree), invented by Hellerstein, - which is an adaptation of R-tree for sets. In our case - the transitive containment relation is realized by - superimposed coding (Knuth, 1973) of signatures, i.e., a parent is the - result of 'OR'-ing the bit-strings of all children. This is a second - factor of lossiness. It is clear that parents tend to be full of - 1s (degenerates) and become quite useless because of the - limited selectivity. Searching is performed as a bit comparison of a - signature representing the query and an RD-tree entry. - If all 1s of both signatures are in the same position we - say that this branch probably matches the query, but if there is even one - discrepancy we can definitely reject this branch. + GiST indexes are lossy because each document is represented in the + index by a fixed-length signature. The signature is generated by hashing + each word into a random bit in an n-bit string, with all these bits OR-ed + together to produce an n-bit document signature. When two words hash to + the same bit position there will be a false match, and if all words in + the query have matches (real or false) then the table row must be + retrieved to see if the match is correct. - Lossiness causes serious performance degradation since random access of - heap records is slow and limits the usefulness of GiST - indexes. The likelihood of false hits depends on several factors, like - the number of unique words, so using dictionaries to reduce this number - is recommended. 
- - - - Actually, this is not the whole story. GiST indexes have an optimization - for storing small tsvectors (under TOAST_INDEX_TARGET - bytes, 512 bytes by default). On leaf pages small tsvectors are stored unchanged, - while longer ones are represented by their signatures, which introduces - some lossiness. Unfortunately, the existing index API does not allow for - a return value to say whether it found an exact value (tsvector) or whether - the result needs to be checked. This is why the GiST index is - currently marked as lossy. We hope to improve this in the future. + Lossiness causes performance degradation since random access to table + records is slow; this limits the usefulness of GiST indexes. The + likelihood of false matches depends on several factors, in particular the + number of unique words, so using dictionaries to reduce this number is + recommended. @@ -2121,31 +3119,58 @@ EXPLAIN SELECT * FROM apod WHERE textsearch @@ to_tsquery('supernovae'); - There is one side-effect of the non-lossiness of a GIN index when using - query labels/weights, like 'supernovae:a'. A GIN index - has all the information necessary to determine a match, so the heap is - not accessed. However, label information is not stored in the index, - so if the query involves label weights it must access - the heap. Therefore, a special full text search operator @@@ - was created that forces the use of the heap to get information about - labels. GiST indexes are lossy so it always reads the heap and there is - no need for a special operator. In the example below, - fulltext_idx is a GIN index: + Actually, GIN indexes store only the words (lexemes) of tsvector + values, and not their weight labels. Thus, while a GIN index can be + considered non-lossy for a query that does not specify weights, it is + lossy for one that does. Thus a table row recheck is needed when using + a query that involves weights. Unfortunately, in the current design of + PostgreSQL, whether a recheck is needed is a static + property of a particular operator, and not something that can be enabled + or disabled on-the-fly depending on the values given to the operator. + To deal with this situation without imposing the overhead of rechecks + on queries that do not need them, the following approach has been + adopted: + - -EXPLAIN SELECT * FROM apod WHERE textsearch @@@ to_tsquery('supernovae:a'); - QUERY PLAN ------------------------------------------------------------------------- - Index Scan using textsearch_idx on apod (cost=0.00..12.30 rows=2 width=1469) - Index Cond: (textsearch @@@ '''supernova'':A'::tsquery) - Filter: (textsearch @@@ '''supernova'':A'::tsquery) - + + + + The standard text match operator @@ is marked as non-lossy + for GIN indexes. + + + + + An additional match operator @@@ is provided, and marked + as lossy for GIN indexes. This operator behaves exactly like + @@ otherwise. + + + + + + When a GIN index search is initiated with the @@ operator, + the index support code will throw an error if the query specifies any + weights. This protects against giving wrong answers due to failure + to recheck the weights. + + + + + + In short, you must use @@@ rather than @@ to + perform GIN index searches on queries that involve weight restrictions. + For queries that do not have weight restrictions, either operator will + work, but @@ will be faster. + This awkwardness will probably be addressed in a future release of + PostgreSQL. 
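+
+    For example, given the apod table used in the earlier
+    ranking examples, a weight-restricted search must be written with
+    @@@ (a sketch; it assumes a GIN index exists on the
+    textsearch column):
+
+
+-- assumes a GIN index on apod.textsearch; if that index is used,
+-- the same query written with @@ draws an error because of the
+-- :a weight restriction
+SELECT title
+FROM apod
+WHERE textsearch @@@ to_tsquery('supernovae:a');
+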
- In choosing which index type to use, GiST or GIN, consider these differences: + In choosing which index type to use, GiST or GIN, consider these + performance differences: + @@ -2159,7 +3184,7 @@ EXPLAIN SELECT * FROM apod WHERE textsearch @@@ to_tsquery('supernovae:a'); - GIN is about ten times slower to update than GiST + GIN indexes are about ten times slower to update than GiST @@ -2171,19 +3196,19 @@ EXPLAIN SELECT * FROM apod WHERE textsearch @@@ to_tsquery('supernovae:a'); - In summary, GIN indexes are best for static data because - the indexes are faster for lookups. For dynamic data, GiST indexes are + As a rule of thumb, GIN indexes are best for static data + because lookups are faster. For dynamic data, GiST indexes are faster to update. Specifically, GiST indexes are very good for dynamic data and fast if the number of unique words (lexemes) is - under 100,000, while GIN handles 100,000+ lexemes better - but is slower to update. + under 100,000, while GIN indexes will handle 100,000+ + lexemes better but are slower to update. Partitioning of big collections and the proper use of GiST and GIN indexes allows the implementation of very fast searches with online update. Partitioning can be done at the database level using table inheritance - and constraint_exclusion, or distributing documents over + and constraint_exclusion, or by distributing documents over servers and collecting search results using the contrib/dblink extension module. The latter is possible because ranking functions use only local information. @@ -2409,138 +3434,49 @@ Parser: "pg_catalog.default" The length of each lexeme must be less than 2K bytes - The length of a tsvector (lexemes + positions) must be less than 1 megabyte + The length of a tsvector (lexemes + positions) must be + less than 1 megabyte - The number of lexemes must be less than 264 + + The number of lexemes must be less than + 264 - Positional information must be greater than 0 and less than 16,383 + Position values in tsvector must be greater than 0 and + no more than 16,383 No more than 256 positions per lexeme - The number of nodes (lexemes + operations) in a tsquery must be less than 32,768 + The number of nodes (lexemes + operators) in a tsquery + must be less than 32,768 For comparison, the PostgreSQL 8.1 documentation - contained 10,441 unique words, a total of 335,420 words, and the most frequent - word postgresql was mentioned 6,127 times in 655 documents. + contained 10,441 unique words, a total of 335,420 words, and the most + frequent word postgresql was mentioned 6,127 times in 655 + documents. - Another example — the PostgreSQL mailing list - archives contained 910,989 unique words with 57,491,343 lexemes in 461,020 - messages. + Another example — the PostgreSQL mailing + list archives contained 910,989 unique words with 57,491,343 lexemes in + 461,020 messages. - - Debugging + + Migration from Pre-8.3 Text Search - The function ts_debug allows easy testing of a - text search configuration. - - - - ts_debug( config_name, document text) returns SETOF ts_debug - - - - ts_debug displays information about every token of - document as produced by the - parser and processed by the configured dictionaries using the configuration - specified by config_name. 
- - - - ts_debug's result type is defined as: - - -CREATE TYPE ts_debug AS ( - "Alias" text, - "Description" text, - "Token" text, - "Dictionaries" regdictionary[], - "Lexized token" text -); - - - - - For a demonstration of how function ts_debug works we - first create a public.english configuration and - ispell dictionary for the English language: - - - -CREATE TEXT SEARCH CONFIGURATION public.english ( COPY = pg_catalog.english ); - -CREATE TEXT SEARCH DICTIONARY english_ispell ( - TEMPLATE = ispell, - DictFile = english, - AffFile = english, - StopWords = english -); - -ALTER TEXT SEARCH CONFIGURATION public.english - ALTER MAPPING FOR lword WITH english_ispell, english_stem; - - - -SELECT * FROM ts_debug('public.english','The Brightest supernovaes'); - Alias | Description | Token | Dictionaries | Lexized token --------+---------------+-------------+-------------------------------------------------+------------------------------------- - lword | Latin word | The | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {} - blank | Space symbols | | | - lword | Latin word | Brightest | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {bright} - blank | Space symbols | | | - lword | Latin word | supernovaes | {public.english_ispell,pg_catalog.english_stem} | pg_catalog.english_stem: {supernova} -(5 rows) - - - - In this example, the word Brightest was recognized by the - parser as a Latin word (alias lword). - For this token type the dictionary list is - public.english_ispell and - pg_catalog.english_stem. The word was recognized by - public.english_ispell, which reduced it to the noun - bright. The word supernovaes is unknown - to the public.english_ispell dictionary so it was passed to - the next dictionary, and, fortunately, was recognized (in fact, - public.english_stem is a Snowball dictionary which - recognizes everything; that is why it was placed at the end of the - dictionary list). - - - - The word The was recognized by public.english_ispell - dictionary as a stop word () and will not be indexed. - - - - You can always explicitly specify which columns you want to see: - - -SELECT "Alias", "Token", "Lexized token" -FROM ts_debug('public.english','The Brightest supernovaes'); - Alias | Token | Lexized token --------+-------------+-------------------------------------- - lword | The | public.english_ispell: {} - blank | | - lword | Brightest | public.english_ispell: {bright} - blank | | - lword | supernovaes | pg_catalog.english_stem: {supernova} -(5 rows) - + This needs to be written ...