Amit Kapila 7259736a6e Implement streaming mode in ReorderBuffer.
Instead of serializing the transaction to disk after reaching the
logical_decoding_work_mem limit in memory, we consume the changes we have
in memory and invoke stream API methods added by commit 45fdc9738b.
However, if we have an incomplete toast tuple or a speculative insert,
we spill to disk because we cannot generate the complete tuple and
stream it. As soon as we get the complete tuple, we stream the
transaction including the serialized changes.
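
As an illustration only (a hypothetical sketch: HandleMemoryLimitExceeded
and CanStreamTransaction are invented names, while ReorderBufferStreamTXN
and ReorderBufferSerializeTXN are the streaming and spill paths inside
reorderbuffer.c), the decision made on exceeding logical_decoding_work_mem
is roughly:

    /*
     * Hypothetical sketch, not the actual code: once the memory limit is
     * exceeded, stream the transaction if the plugin supports streaming
     * and the tuples can be fully assembled; otherwise spill it to disk.
     */
    static void
    HandleMemoryLimitExceeded(ReorderBuffer *rb, ReorderBufferTXN *txn)
    {
        if (CanStreamTransaction(rb, txn))      /* invented helper */
            ReorderBufferStreamTXN(rb, txn);    /* invoke stream API methods */
        else
            ReorderBufferSerializeTXN(rb, txn); /* serialize changes to disk */
    }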

We can do this incremental processing thanks to having assignments
(associating subxacts with toplevel xacts) in WAL right away, and
thanks to logging the invalidation messages at each command end. These
features were added by commits 0bead9af48 and c55040ccd0 respectively.

Now that we can stream in-progress transactions, concurrent aborts
may cause failures when the output plugin consults catalogs (both
system and user-defined).

We handle such failures by returning ERRCODE_TRANSACTION_ROLLBACK
sqlerrcode from system table scan APIs to the backend or WALSender
decoding a specific uncommitted transaction. On receipt of such a
sqlerrcode, the decoding logic aborts the decoding of the current
transaction and continues with the decoding of other transactions.
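
As a rough sketch (hedged: this shows the backend's standard
PG_TRY/PG_CATCH pattern with the surrounding decoding loop and cleanup
elided, not the exact code added by this commit), catching such a
failure looks like:

    MemoryContext ccxt = CurrentMemoryContext;

    PG_TRY();
    {
        /* ... process the changes, invoking output plugin callbacks ... */
    }
    PG_CATCH();
    {
        ErrorData  *errdata;

        /* leave the error context before inspecting the error */
        MemoryContextSwitchTo(ccxt);
        errdata = CopyErrorData();

        if (errdata->sqlerrcode == ERRCODE_TRANSACTION_ROLLBACK)
        {
            /*
             * Concurrent abort of the transaction being decoded: give up
             * on this transaction only, then continue with the others.
             */
            FlushErrorState();
            FreeErrorData(errdata);
        }
        else
        {
            /* any other error is not ours to swallow; re-throw it */
            PG_RE_THROW();
        }
    }
    PG_END_TRY();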

Each ReorderBufferChange carries a ReorderBufferTXN pointer that tells
us which xact it belongs to.  The output plugin can use this to decide
which changes to discard in case of stream_abort_cb (e.g. when a subxact
gets discarded).
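
In terms of the data structure, the relevant field looks roughly like
this (abridged sketch of ReorderBufferChange; most members elided):

    typedef struct ReorderBufferChange
    {
        XLogRecPtr  lsn;        /* position of this change in WAL */

        /* The type of change. */
        enum ReorderBufferChangeType action;

        /* Transaction this change belongs to. */
        struct ReorderBufferTXN *txn;

        /* ... payload union and list links elided ... */
    } ReorderBufferChange;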

We also provide a new option via SQL APIs to fetch the changes being
streamed.

Author: Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke
Reviewed-by: Amit Kapila, Kuntal Ghosh, Ajin Cherian
Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
2020-08-08 07:47:06 +05:30

The PostgreSQL contrib tree
---------------------------

This subtree contains porting tools, analysis utilities, and plug-in
features that are not part of the core PostgreSQL system, mainly
because they address a limited audience or are too experimental to be
part of the main source tree.  This does not preclude their
usefulness.

User documentation for each module appears in the main SGML
documentation.

When building from the source distribution, these modules are not
built automatically, unless you build the "world" target.  You can
also build and install them all by running "make all" and "make
install" in this directory; or to build and install just one selected
module, do the same in that module's subdirectory.

Some directories supply new user-defined functions, operators, or
types.  To make use of one of these modules, after you have installed
the code you need to register the new SQL objects in the database
system by executing a CREATE EXTENSION command.  In a fresh database,
you can simply do

    CREATE EXTENSION module_name;

See the PostgreSQL documentation for more information about this
procedure.