node now does its own grouping of the input rows, and has no need for a
preceding GROUP node in the plan pipeline. This allows elimination of
the misnamed tuplePerGroup option for GROUP, and actually saves more code
in nodeGroup.c than it costs in nodeAgg.c, as well as being presumably
faster. Restructure the API of query_planner so that we do not commit to
using a sorted or unsorted plan in query_planner; instead grouping_planner
makes the decision. (Right now it isn't any smarter than query_planner
was, but that will change as soon as it has the option to select a hash-
based aggregation step.) Despite all the hackery, no initdb needed since
only in-memory node types changed.
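As an illustration, a grouped aggregate like the following (hypothetical
table emp) is now grouped inside the Agg node itself, with no separate
GROUP step ahead of it in the plan:
  SELECT dept, count(*), sum(salary) FROM emp GROUP BY dept;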
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
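For illustration, the labels make_const now assigns initially:
  SELECT 42;           -- small integer literal: int4
  SELECT 10000000000;  -- too large for int4: int8
  SELECT 3.14;         -- non-integer literal: numeric (no longer float8)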
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data (same as 7.2).
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
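A quick sketch of the behavior described above, assuming a hypothetical
table t with a char(3) column c:
  SELECT 'abcdef'::char(3);             -- explicit cast: silently truncated to 'abc'
  INSERT INTO t (c) VALUES ('abcdef');  -- no explicit cast: wrong-length data raises an error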
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
type for runtime constraint checks, instead of misusing the parse-time
Constraint node for the purpose. Fix some damage introduced into type
coercion logic; in particular ensure that a coerced expression tree will
read out the correct result type when inspected (patch had broken some
RelabelType cases). Enforce domain NOT NULL constraints against columns
that are omitted from an INSERT.
column additions, deletions, and renames that would let a child table
get out of sync with its parent. Patch by Alvaro Herrera, with some
kibitzing by Tom Lane.
array header, and to compute sizing and alignment of array elements
the same way normal tuple access operations do --- viz, using the
tupmacs.h macros att_addlength and att_align. This makes the world
safe for arrays of cstrings or intervals, and should make it much
easier to write array-type-polymorphic functions; as examples see
the cleanups of array_out and contrib/array_iterator. By Joe Conway
and Tom Lane.
> l.mode, l.isgranted from pg_lock_info() as l(relation oid, database oid,
> backendpid int4, mode text, isgranted bool);
> ERROR: badly formatted planstring "COLUMNDEF "...
>
Reported by Neil Conway -- I never implemented readfuncs.c support for
ColumnDef or TypeName, which is needed so that views can be created on
functions returning type RECORD. Here's a patch.
Joe Conway
types for Table Functions, as previously proposed on HACKERS. Here is a
brief explanation:
1. Creates a new pg_type typtype: 'p' for pseudo type (currently either
'b' for base or 'c' for catalog, i.e. a class).
2. Creates new builtin type of typtype='p' named RECORD. This is the
first of potentially several pseudo types.
3. Modify FROM clause grammar to accept:
SELECT * FROM my_func() AS m(colname1 type1, colname2 type2, ...)
where m is the table alias, colname1, etc are the column names, and
type1, etc are the column types.
4. When typtype == 'p' and the function return type is RECORD, a list
of column defs is required, and when typtype != 'p', it is
disallowed.
5. A check was added to ensure that the tupdesc provided via the parser
and the actual return tupdesc match in number and type of
attributes.
When creating a function you can do:
CREATE FUNCTION foo(text) RETURNS setof RECORD ...
When using it you can do:
SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)
or
SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)
or
SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)
Included in the patches are adjustments to the regression test sql and
expected files, and documentation.
p.s.
This potentially solves (or at least improves) the issue of builtin
Table Functions. They can be bootstrapped as returning RECORD, and
we can wrap system views around them with properly specified column
defs. For example:
CREATE VIEW pg_settings AS
SELECT s.name, s.setting
FROM show_all_settings() AS s(name text, setting text);
Then we can also add the UPDATE RULE that I previously posted to
pg_settings, and have pg_settings act like a virtual table, allowing
settings to be queried and set.
Joe Conway
Implements BETWEEN (SYMMETRIC / ASYMMETRIC) as a node. The left or right
expression is executed once, a Const is made out of the resulting Datum,
and the >= and <= comparisons are then executed against that Const.
Of course, the parser does a fair amount of preparatory work for this to
happen.
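Roughly, the semantics being implemented:
  SELECT 2 BETWEEN 3 AND 1;            -- false: ASYMMETRIC (the default) takes the bounds as given
  SELECT 2 BETWEEN SYMMETRIC 3 AND 1;  -- true: SYMMETRIC also tries the bounds swapped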
Rod Taylor
Reused the Expr node to hold the DISTINCT comparison, since it strongly
resembles the existing OP info. Define a DISTINCT_EXPR opType that resembles
the existing OPER_EXPR, but with the NULL handling required by SQL99.
We have explicit support for single-element DISTINCT comparisons
all the way through to the executor. But, multi-element DISTINCTs
are handled by expanding into a comparison tree in gram.y as is done for
other row comparisons. Per discussions, it might be desirable to move
this into one or more purpose-built nodes to be handled in the backend.
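The SQL99 spelling is IS DISTINCT FROM; the NULL handling is the point of
the new node:
  SELECT 1 IS DISTINCT FROM NULL;     -- true (plain '=' would yield NULL here)
  SELECT NULL IS DISTINCT FROM NULL;  -- false: two NULLs are not distinct
  SELECT 1 IS DISTINCT FROM 2;        -- true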
Define the optional ROW keyword and token per SQL99.
This allows single-element row constructs, which were formerly disallowed
due to shift/reduce conflicts with parenthesized a_expr clauses.
Define the SQL99 TREAT() function; currently it is just a synonym for CAST().
returns-set boolean field in Func and Oper nodes. This allows cleaner,
more reliable tests for expressions returning sets in the planner and
parser. For example, a WHERE clause returning a set is now detected
and complained of in the parser, not only at runtime.
some kibitzing from Tom Lane. Not everything works yet, and there's
no documentation or regression test, but let's commit this so Joe
doesn't need to cope with tracking changes in so many files ...
lists to join RTEs, attach a list of Vars and COALESCE expressions that will
replace the join's alias variables during planning. This simplifies
flatten_join_alias_vars while still making it easy to fix up varno references
when transforming the query tree. Add regression test cases for interactions
of subqueries with outer joins.
entries, per pghackers discussion. This fixes aggregates to live in
namespaces, and also simplifies/speeds up lookup in parse_func.c.
Also, add a 'proimplicit' flag to pg_proc that controls whether a type
coercion function may be invoked implicitly, or only explicitly. The
current settings of these flags are more permissive than I would like,
but we will need to debate and refine the behavior; for now, I avoided
breaking regression tests as much as I could.
addRangeTableEntry calls. Remove relname field from RTEs, since
it will no longer be a useful unique identifier of relations;
we want to encourage people to rely on the relation OID instead.
Further work on dumping qual expressions in EXPLAIN, too.
the parsetree representation. As yet we don't *do* anything with schema
names, just drop 'em on the floor; but you can enter schema-compatible
command syntax, and there's even a primitive CREATE SCHEMA command.
No doc updates yet, except to note that you can now extract a field
from a function-returning-row's result with (foo(...)).fieldname.
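For example, with a hypothetical function getemp(int) returning an emp row:
  SELECT (getemp(1)).name;   -- extract one field of the function's row result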
now has an RTE of its own, and references to its outputs now are Vars
referencing the JOIN RTE, rather than CASE-expressions. This allows
reverse-listing in ruleutils.c to use the correct alias easily, rather
than painfully reverse-engineering the alias namespace as it used to do.
Also, nested FULL JOINs work correctly, because the results of the inner
joins are simple Vars that the planner can cope with. This fixes a bug
reported a couple times now, notably by Tatsuo on 18-Nov-01. The alias
Vars are expanded into COALESCE expressions where needed at the very end
of planning, rather than during parsing.
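For instance, a nested FULL JOIN such as this (hypothetical tables a, b, c,
each with an id column) now plans correctly; the merged id column is a
join-RTE Var that turns into a COALESCE only at the end of planning:
  SELECT id FROM (a FULL JOIN b USING (id)) FULL JOIN c USING (id);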
Also, beginnings of support for showing plan qualifier expressions in
EXPLAIN. There are probably still cases that need work.
initdb forced due to change of stored-rule representation.
report for each received SQL command, regardless of rewriting activity.
Also ensure that this report comes from the 'original' command, not the
last command generated by rewrite; this fixes 7.2 breakage for INSERT
commands that have actions added by rules. Fernando Nasser and Tom Lane.
tests to return the correct results per SQL9x when given NULL inputs.
Reimplement these tests as well as IS [NOT] NULL to have their own
expression node types, instead of depending on special functions.
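For example, the tests now yield definite booleans on NULL input:
  SELECT (NULL::boolean) IS FALSE;     -- false, not NULL
  SELECT (NULL::boolean) IS NOT NULL;  -- false
  SELECT (true) IS TRUE;               -- true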
From Joe Conway, with a little help from Tom Lane.
of costsize.c routines to pass Query root, so that costsize can figure
more things out by itself and not be so dependent on its callers to tell
it everything it needs to know. Use selectivity of hash or merge clause
to estimate number of tuples processed internally in these joins
(this is more useful than it formerly would have been, since eqjoinsel is
now somewhat more accurate).
create_index_paths are not immediately discarded, but are available for
subsequent planner work. This allows avoiding redundant syscache lookups
in several places. Change interface to operator selectivity estimation
procedures to allow faster and more flexible estimation.
Initdb forced due to change of pg_proc entries for selectivity functions!
a separate statement (though it can still be invoked as part of VACUUM, too).
pg_statistic redesigned to be more flexible about what statistics are
stored. ANALYZE now collects a list of several of the most common values,
not just one, plus a histogram (not just the min and max values). Random
sampling is used to make the process reasonably fast even on very large
tables. The number of values and histogram bins collected is now
user-settable via an ALTER TABLE command.
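For example, to collect more detail for one column and then gather stats
(hypothetical table and column names):
  ALTER TABLE emp ALTER COLUMN salary SET STATISTICS 100;
  ANALYZE emp;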
There is more still to do; the new stats are not being used everywhere
they could be in the planner. But the remaining changes for this project
should be localized, and the behavior is already better than before.
A not-very-related change is that sorting now makes use of btree comparison
routines if it can find one, rather than invoking '<' twice.
and burn. Just for added luck, change reading of CONST nodes so that
we do not need to consult pg_type rows while reading them; this means
that no database access occurs during stringToNode. This requires
changing the order in which const-node fields are written, which means
an initdb is forced.
comparison does not consider paths different when they differ only in
uninteresting aspects of sort order. (We had a special case of this
consideration for indexscans already, but generalize it to apply to
ordered join paths too.) Be stricter about what is a canonical pathkey
to allow faster pathkey comparison. Cache canonical pathkeys and
dispersion stats for left and right sides of a RestrictInfo's clause,
to avoid repeated computation. Total speedup will depend on number of
tables in a query, but I see about 4x speedup of planning phase for
a sample seven-table query.
avoid repeated evaluations in cost_qual_eval(). This turns out to save
a useful fraction of planning time. No change to external representation
of RestrictInfo --- although that node type doesn't appear in stored
rules anyway.
joins, and clean things up a good deal at the same time. Append plan node
no longer hacks on rangetable at runtime --- instead, all child tables are
given their own RT entries during planning. Concept of multiple target
tables pushed up into execMain, replacing bug-prone implementation within
nodeAppend. Planner now supports generating Append plans for inheritance
sets either at the top of the plan (the old way) or at the bottom. Expanding
at the bottom is appropriate for tables used as sources, since they may
appear inside an outer join; but we must still expand at the top when the
target of an UPDATE or DELETE is an inheritance set, because we actually need
a different targetlist and junkfilter for each target table in that case.
Fortunately a target table can't be inside an outer join... Bizarre mutual
recursion between union_planner and prepunion.c is gone --- in fact,
union_planner doesn't really have much to do with union queries anymore,
so I renamed it grouping_planner.
* Makefile: Add more standard targets. Improve shell redirection in GNU
make detection.
* src/backend/access/transam/rmgr.c: Fix incorrect(?) C.
* src/backend/libpq/pqcomm.c (StreamConnection): Work around accept() bug.
* src/include/port/unixware.h: ...with help from here.
* src/backend/nodes/print.c (plannode_type): Remove some "break"s after
"return"s.
* src/backend/tcop/dest.c (DestToFunction): ditto.
* src/backend/nodes/readfuncs.c: Add proper prototypes.
* src/backend/utils/adt/numutils.c (pg_atoi): Cope specially with strtol()
setting EINVAL. This saves us from creating an extra set of regression test
output for the affected systems.
* src/include/storage/s_lock.h (tas): Correct prototype.
* src/interfaces/libpq/fe-connect.c (parseServiceInfo): Don't use variable
as dimension in array definition.
* src/makefiles/Makefile.unixware: Add support for GCC.
* src/template/unixware: same here
* src/test/regress/expected/abstime-solaris-1947.out: Adjust whitespace.
* src/test/regress/expected/horology-solaris-1947.out: Part of this file
was evidently missing.
* src/test/regress/pg_regress.sh: Fix shell. mkdir -p returns non-zero if
the directory exists.
* src/test/regress/resultmap: Add entries for Unixware.
SQL92 semantics, including support for ALL option. All three can be used
in subqueries and views. DISTINCT and ORDER BY work now in views, too.
This rewrite fixes many problems with cross-datatype UNIONs and INSERT/SELECT
where the SELECT yields different datatypes than the INSERT needs. I did
that by making UNION subqueries and SELECT in INSERT be treated like
subselects-in-FROM, thereby allowing an extra level of targetlist where the
datatype conversions can be inserted safely.
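Two sketches of what the extra targetlist level makes safe (hypothetical
table and column names):
  SELECT 1 UNION SELECT 2.5;                     -- cross-datatype UNION: both branches coerced to a common type
  INSERT INTO t (n) SELECT count(*) FROM other;  -- SELECT output coerced to match column n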
INITDB NEEDED!
(Don't forget that an alias is required.) Views reimplemented as expanding
to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually
work now (he says optimistically). No UNION support in subselects/views
yet, but I have some ideas about that. Rule-related permissions checking
moved out of rewriter and into executor.
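For example (hypothetical table t):
  SELECT ss.a, ss.cnt
  FROM (SELECT a, count(*) AS cnt FROM t GROUP BY a) AS ss  -- the alias ss is required
  WHERE ss.cnt > 1;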
INITDB REQUIRED!
from Param nodes, per discussion a few days ago on pghackers. Add new
expression node type FieldSelect that implements the functionality where
it's actually needed. Clean up some other unused fields in Func nodes
as well.
NOTE: initdb forced due to change in stored expression trees for rules.
There's now only one transition value and transition function.
NULL handling in aggregates is a lot cleaner. Also, use Numeric
accumulators instead of integer accumulators for sum/avg on integer
datatypes --- this avoids overflow at the cost of being a little slower.
Implement VARIANCE() and STDDEV() aggregates in the standard backend.
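For example, given a hypothetical integer column i in table t:
  SELECT sum(i), avg(i), variance(i), stddev(i) FROM t;
  -- sum/avg accumulate in numeric, so large tables no longer overflow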
Also, enable new LIKE selectivity estimators by default. Unrelated
change, but as long as I had to force initdb anyway...
memory contexts. Currently, only leaks in expressions executed as
quals or projections are handled. Clean up some old dead cruft in
executor while at it --- unused fields in state nodes, that sort of thing.
materialized tupleset is small enough) instead of a temporary relation.
This was something I was thinking of doing anyway for performance, and Jan
says he needs it for TOAST because he doesn't want to cope with toasting
noname relations. With this change, the 'noname table' support in heap.c
is dead code, and I have accordingly removed it. Also clean up 'noname'
plan handling in planner --- nonames are either sort or materialize plans,
and it seems less confusing to handle them separately under those names.
costs using the inner path's parent->rows count as the number of tuples
processed per inner scan iteration. This is wrong when we are using an
inner indexscan with indexquals based on join clauses, because the rows
count in a Relation node reflects the selectivity of the restriction
clauses for that rel only. Upshot was that if join clause was very
selective, we'd drastically overestimate the true cost of the join.
Fix is to calculate correct output-rows estimate for an inner indexscan
when the IndexPath node is created and save it in the path node.
Change of path node doesn't require initdb, since path nodes don't
appear in saved rules.