Add parallel pg_dump option.
New infrastructure is added which creates a set number of workers (threads on Windows, forked processes on Unix). Jobs are then handed out to these workers by the master process as needed. pg_restore is adjusted to use this new infrastructure in place of the old setup, which created a new worker for each step on the fly. Parallel dumps acquire a synchronized snapshot clone, where the server supports it, in order to stay consistent. The parallel option is selected by the -j / --jobs command line parameter of pg_dump.

Joachim Wieland, lightly editorialized by Andrew Dunstan.
commit 9e257a181c
parent 3b91fe185a
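The infrastructure described above is a classic master/worker job queue. Below is a minimal sketch of the master's dispatch loop, written against the names exported by the new parallel.h shown later in this commit; the TOC iteration and error handling are simplified, so treat it as an illustration rather than the actual archiver code.

```c
#include "pg_backup_archiver.h"
#include "parallel.h"

/* Illustrative only: hand every data-carrying TOC entry to an idle worker. */
static void
dispatch_all_jobs(ArchiveHandle *AH, RestoreOptions *ropt)
{
    /* spawns numWorkers children (threads on Windows, forks on Unix) */
    ParallelState *pstate = ParallelBackupStart(AH, ropt);
    TocEntry   *te;

    for (te = AH->toc->next; te != AH->toc; te = te->next)
    {
        if (!te->dataDumper)    /* only table data needs a worker */
            continue;
        /* blocks until a worker is idle, then sends it "DUMP <dumpId>" */
        DispatchJobForTocEntry(AH, pstate, te, ACT_DUMP);
    }
    ParallelBackupEnd(AH, pstate);  /* drain results, shut workers down */
}
```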
doc/src/sgml/backup.sgml

@@ -310,6 +310,24 @@ pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable c
     with one of the other two approaches.
    </para>
+
+   <formalpara>
+    <title>Use <application>pg_dump</>'s parallel dump feature.</title>
+    <para>
+     To speed up the dump of a large database, you can use
+     <application>pg_dump</application>'s parallel mode. This will dump
+     multiple tables at the same time. You can control the degree of
+     parallelism with the <command>-j</command> parameter. Parallel dumps
+     are only supported for the "directory" archive format.
+
+<programlisting>
+pg_dump -j <replaceable class="parameter">num</replaceable> -F d -f <replaceable class="parameter">out.dir</replaceable> <replaceable class="parameter">dbname</replaceable>
+</programlisting>
+
+     You can use <command>pg_restore -j</command> to restore a dump in parallel.
+     This works for any archive in either the "custom" or the "directory"
+     format, whether or not it was created with <command>pg_dump -j</command>.
+    </para>
+   </formalpara>
   </sect2>
  </sect1>
doc/src/sgml/perform.sgml

@@ -1433,6 +1433,15 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
     base backup.
    </para>
   </listitem>
+  <listitem>
+   <para>
+    Experiment with the parallel dump and restore modes of both
+    <application>pg_dump</> and <application>pg_restore</> and find the
+    optimal number of concurrent jobs to use. Dumping and restoring in
+    parallel by means of the <option>-j</> option should give you
+    significantly better performance than the serial mode.
+   </para>
+  </listitem>
   <listitem>
    <para>
     Consider whether the whole dump should be restored as a single
doc/src/sgml/ref/pg_dump.sgml

@@ -73,10 +73,12 @@ PostgreSQL documentation
    transfer mechanism. <application>pg_dump</application> can be used to
    backup an entire database, then <application>pg_restore</application>
    can be used to examine the archive and/or select which parts of the
-   database are to be restored. The most flexible output file format is
-   the <quote>custom</quote> format (<option>-Fc</option>). It allows
-   for selection and reordering of all archived items, and is compressed
-   by default.
+   database are to be restored. The most flexible output file formats are
+   the <quote>custom</quote> format (<option>-Fc</option>) and the
+   <quote>directory</quote> format (<option>-Fd</option>). They allow
+   for selection and reordering of all archived items, support parallel
+   restoration, and are compressed by default. The <quote>directory</quote>
+   format is the only format that supports parallel dumps.
   </para>

  <para>
@@ -251,7 +253,8 @@ PostgreSQL documentation
        can read. A directory format archive can be manipulated with
        standard Unix tools; for example, files in an uncompressed archive
        can be compressed with the <application>gzip</application> tool.
-       This format is compressed by default.
+       This format is compressed by default and also supports parallel
+       dumps.
       </para>
      </listitem>
     </varlistentry>
@@ -285,6 +288,62 @@ PostgreSQL documentation
      </listitem>
     </varlistentry>

+    <varlistentry>
+     <term><option>-j <replaceable class="parameter">njobs</replaceable></></term>
+     <term><option>--jobs=<replaceable class="parameter">njobs</replaceable></></term>
+     <listitem>
+      <para>
+       Run the dump in parallel by dumping <replaceable class="parameter">njobs</replaceable>
+       tables simultaneously. This option reduces the time of the dump but it also
+       increases the load on the database server. You can only use this option with the
+       directory output format because this is the only output format where multiple
+       processes can write their data at the same time.
+      </para>
+      <para>
+       <application>pg_dump</> will open <replaceable class="parameter">njobs</replaceable>
+       + 1 connections to the database, so make sure your <xref linkend="guc-max-connections">
+       setting is high enough to accommodate all connections.
+      </para>
+      <para>
+       Requesting exclusive locks on database objects while running a parallel dump
+       could cause the dump to fail. The reason is that the <application>pg_dump</>
+       master process requests shared locks on the objects that the worker processes
+       are going to dump later, in order to make sure that nobody deletes them while
+       the dump is running. If another client then requests an exclusive lock on a
+       table, that lock will not be granted but will be queued waiting for the shared
+       lock of the master process to be released. Consequently, any other access to
+       the table will not be granted either and will queue after the exclusive lock
+       request. This includes the worker process trying to dump the table. Without
+       any precautions this would be a classic deadlock situation. To detect this
+       conflict, the <application>pg_dump</> worker process requests another shared
+       lock using the <literal>NOWAIT</> option. If the worker process is not granted
+       this shared lock, somebody else must have requested an exclusive lock in the
+       meantime and there is no way to continue with the dump, so
+       <application>pg_dump</> has no choice but to abort the dump.
+      </para>
+      <para>
+       For a consistent backup, the database server needs to support synchronized
+       snapshots, a feature that was introduced in
+       <productname>PostgreSQL</productname> 9.2. With this feature, database clients
+       can ensure they see the same data set even though they use different
+       connections. <command>pg_dump -j</command> uses multiple database connections;
+       it connects to the database once with the master process and once again for
+       each worker job. Without the synchronized snapshot feature, the different
+       worker jobs wouldn't be guaranteed to see the same data in each connection,
+       which could lead to an inconsistent backup.
+      </para>
+      <para>
+       If you want to run a parallel dump of a pre-9.2 server, you need to make sure
+       that the database content doesn't change between the time the master connects
+       to the database and the time the last worker job has connected. The easiest
+       way to do this is to halt any data modifying processes (DDL and DML) accessing
+       the database before starting the backup. You also need to specify the
+       <option>--no-synchronized-snapshots</option> parameter when running
+       <command>pg_dump -j</command> against a pre-9.2
+       <productname>PostgreSQL</productname> server.
+      </para>
+     </listitem>
+    </varlistentry>
+
     <varlistentry>
      <term><option>-n <replaceable class="parameter">schema</replaceable></option></term>
      <term><option>--schema=<replaceable class="parameter">schema</replaceable></option></term>
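The deadlock detection described above amounts to one probe statement that a worker issues from its own connection just before dumping a table. A minimal libpq sketch of the idea follows; the function name and error wording are illustrative, not pg_dump's actual internals.

```c
#include "libpq-fe.h"
#include "pqexpbuffer.h"

/*
 * Illustrative worker-side probe: if ACCESS SHARE cannot be granted
 * immediately, another session queued an exclusive lock behind the master's
 * shared lock, and continuing would deadlock, so give up right away.
 */
static void
probe_table_lock(PGconn *conn, const char *qualifiedName)
{
    PQExpBuffer query = createPQExpBuffer();
    PGresult   *res;

    appendPQExpBuffer(query,
                      "LOCK TABLE %s IN ACCESS SHARE MODE NOWAIT",
                      qualifiedName);
    res = PQexec(conn, query->data);
    if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK)
        exit_horribly(NULL, "could not obtain lock on relation \"%s\"\n",
                      qualifiedName);
    PQclear(res);
    destroyPQExpBuffer(query);
}
```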
@@ -690,6 +749,17 @@ PostgreSQL documentation
      </listitem>
     </varlistentry>

+    <varlistentry>
+     <term><option>--no-synchronized-snapshots</></term>
+     <listitem>
+      <para>
+       This option allows running <command>pg_dump -j</> against a pre-9.2
+       server; see the documentation of the <option>-j</option> parameter
+       for more details.
+      </para>
+     </listitem>
+    </varlistentry>
+
     <varlistentry>
      <term><option>--no-tablespaces</option></term>
      <listitem>
@@ -1082,6 +1152,15 @@ CREATE DATABASE foo WITH TEMPLATE template0;
 </screen>
   </para>

+  <para>
+   To dump a database into a directory-format archive in parallel with
+   5 worker jobs:
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dump -Fd mydb -j 5 -f dumpdir</userinput>
+</screen>
+  </para>
+
   <para>
    To reload an archive file into a (freshly created) database named
    <literal>newdb</>:
src/bin/pg_dump/Makefile

@@ -19,7 +19,7 @@ include $(top_builddir)/src/Makefile.global
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)

 OBJS=	pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
-	pg_backup_null.o pg_backup_tar.o \
+	pg_backup_null.o pg_backup_tar.o parallel.o \
	pg_backup_directory.o dumputils.o compress_io.o $(WIN32RES)

 KEYWRDOBJS = keywords.o kwlookup.o
src/bin/pg_dump/compress_io.c

@@ -54,6 +54,7 @@

 #include "compress_io.h"
 #include "dumputils.h"
+#include "parallel.h"

 /*----------------------
  * Compressor API
@@ -182,6 +183,9 @@ size_t
 WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
 				   const void *data, size_t dLen)
 {
+	/* Are we aborting? */
+	checkAborting(AH);
+
 	switch (cs->comprAlg)
 	{
 		case COMPR_ALG_LIBZ:
@@ -351,6 +355,9 @@ ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF)
 	/* no minimal chunk size for zlib */
 	while ((cnt = readF(AH, &buf, &buflen)))
 	{
+		/* Are we aborting? */
+		checkAborting(AH);
+
 		zp->next_in = (void *) buf;
 		zp->avail_in = cnt;

@@ -411,6 +418,9 @@ ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF)

 	while ((cnt = readF(AH, &buf, &buflen)))
 	{
+		/* Are we aborting? */
+		checkAborting(AH);
+
 		ahwrite(buf, 1, cnt, AH);
 	}

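checkAborting() itself lives in the new parallel.c, whose diff is suppressed further below, so only the call sites are visible here. A plausible minimal sketch, assuming a flag (name invented here) that is set asynchronously when the master orders termination:

```c
#include <signal.h>

/* Assumed flag: set from a signal handler on Unix, or by the code that
 * notices the master closing the command pipe on Windows. */
static volatile sig_atomic_t wantAbort = 0;

/* Called at convenient points in long read/write loops, as above. */
void
checkAborting(ArchiveHandle *AH)
{
    if (wantAbort)
        exit_horribly(NULL, "worker is terminating\n");
}
```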
src/bin/pg_dump/dumputils.c

@@ -38,6 +38,7 @@ static struct
 } on_exit_nicely_list[MAX_ON_EXIT_NICELY];

 static int	on_exit_nicely_index;
+void	  (*on_exit_msg_func) (const char *modulename, const char *fmt, va_list ap) = vwrite_msg;

 #define supports_grant_options(version) ((version) >= 70400)

@@ -48,11 +49,21 @@ static bool parseAclItem(const char *item, const char *type,
 static char *copyAclUserName(PQExpBuffer output, char *input);
 static void AddAcl(PQExpBuffer aclbuf, const char *keyword,
 	   const char *subname);
+static PQExpBuffer getThreadLocalPQExpBuffer(void);

 #ifdef WIN32
+static void shutdown_parallel_dump_utils(int code, void *unused);
 static bool parallel_init_done = false;
 static DWORD tls_index;
 static DWORD mainThreadId;
+
+static void
+shutdown_parallel_dump_utils(int code, void *unused)
+{
+	/* Call the cleanup function only from the main thread */
+	if (mainThreadId == GetCurrentThreadId())
+		WSACleanup();
+}
 #endif

 void
@@ -61,23 +72,29 @@ init_parallel_dump_utils(void)
 #ifdef WIN32
 	if (!parallel_init_done)
 	{
+		WSADATA		wsaData;
+		int			err;
+
 		tls_index = TlsAlloc();
-		parallel_init_done = true;
 		mainThreadId = GetCurrentThreadId();
+		err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+		if (err != 0)
+		{
+			fprintf(stderr, _("WSAStartup failed: %d\n"), err);
+			exit_nicely(1);
+		}
+		on_exit_nicely(shutdown_parallel_dump_utils, NULL);
+		parallel_init_done = true;
 	}
 #endif
 }

 /*
- * Quotes input string if it's not a legitimate SQL identifier as-is.
- *
- * Note that the returned string must be used before calling fmtId again,
- * since we re-use the same return buffer each time. Non-reentrant but
- * reduces memory leakage. (On Windows the memory leakage will be one buffer
- * per thread, which is at least better than one per call).
+ * Non-reentrant but reduces memory leakage. (On Windows the memory leakage
+ * will be one buffer per thread, which is at least better than one per call).
  */
-const char *
-fmtId(const char *rawid)
+static PQExpBuffer
+getThreadLocalPQExpBuffer(void)
 {
 	/*
 	 * The Tls code goes awry if we use a static var, so we provide for both
@@ -86,9 +103,6 @@ fmtId(const char *rawid)
 	static PQExpBuffer s_id_return = NULL;
 	PQExpBuffer id_return;

-	const char *cp;
-	bool		need_quotes = false;
-
 #ifdef WIN32
 	if (parallel_init_done)
 		id_return = (PQExpBuffer) TlsGetValue(tls_index);	/* 0 when not set */
@@ -118,6 +132,23 @@ fmtId(const char *rawid)

 	}

+	return id_return;
+}
+
+/*
+ * Quotes input string if it's not a legitimate SQL identifier as-is.
+ *
+ * Note that the returned string must be used before calling fmtId again,
+ * since we re-use the same return buffer each time.
+ */
+const char *
+fmtId(const char *rawid)
+{
+	PQExpBuffer id_return = getThreadLocalPQExpBuffer();
+
+	const char *cp;
+	bool		need_quotes = false;
+
 	/*
 	 * These checks need to match the identifier production in scan.l. Don't
 	 * use islower() etc.
@@ -185,6 +216,35 @@ fmtId(const char *rawid)
 	return id_return->data;
 }

+/*
+ * fmtQualifiedId - convert a qualified name to the proper format for
+ * the source database.
+ *
+ * Like fmtId, use the result before calling again.
+ *
+ * Since fmtId() also uses getThreadLocalPQExpBuffer(), we cannot use that
+ * buffer until we're finished calling fmtId().
+ */
+const char *
+fmtQualifiedId(int remoteVersion, const char *schema, const char *id)
+{
+	PQExpBuffer id_return;
+	PQExpBuffer lcl_pqexp = createPQExpBuffer();
+
+	/* Suppress schema name if fetching from pre-7.3 DB */
+	if (remoteVersion >= 70300 && schema && *schema)
+	{
+		appendPQExpBuffer(lcl_pqexp, "%s.", fmtId(schema));
+	}
+	appendPQExpBuffer(lcl_pqexp, "%s", fmtId(id));
+
+	id_return = getThreadLocalPQExpBuffer();
+
+	appendPQExpBuffer(id_return, "%s", lcl_pqexp->data);
+	destroyPQExpBuffer(lcl_pqexp);
+
+	return id_return->data;
+}
+
 /*
  * Convert a string value to an SQL string literal and append it to
@@ -1315,7 +1375,7 @@ exit_horribly(const char *modulename, const char *fmt,...)
 	va_list		ap;

 	va_start(ap, fmt);
-	vwrite_msg(modulename, fmt, ap);
+	on_exit_msg_func(modulename, fmt, ap);
 	va_end(ap);

 	exit_nicely(1);
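The "use the result before calling again" caveat on fmtId() and fmtQualifiedId() is easy to trip over, because both hand back the same thread-local buffer. A short illustration; pg_strdup() is pg_dump's error-checked strdup:

```c
/* WRONG: both pointers end up referring to the same thread-local buffer,
 * so 'schema' is silently overwritten by the second call. */
const char *schema = fmtId("MySchema");
const char *table = fmtId("MyTable");

/* RIGHT: copy each result before the next call. */
char	   *schema_copy = pg_strdup(fmtId("MySchema"));
char	   *table_copy = pg_strdup(fmtId("MyTable"));
```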
src/bin/pg_dump/dumputils.h

@@ -47,6 +47,8 @@ extern const char *progname;

 extern void init_parallel_dump_utils(void);
 extern const char *fmtId(const char *identifier);
+extern const char *fmtQualifiedId(int remoteVersion,
+			   const char *schema, const char *id);
 extern void appendStringLiteral(PQExpBuffer buf, const char *str,
 					int encoding, bool std_strings);
 extern void appendStringLiteralConn(PQExpBuffer buf, const char *str,
@@ -85,11 +87,12 @@ __attribute__((format(PG_PRINTF_ATTRIBUTE, 2, 0)));
 extern void
 exit_horribly(const char *modulename, const char *fmt,...)
 __attribute__((format(PG_PRINTF_ATTRIBUTE, 2, 3), noreturn));
+extern void (*on_exit_msg_func) (const char *modulename, const char *fmt, va_list ap)
+			__attribute__((format(PG_PRINTF_ATTRIBUTE, 2, 0)));
 extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
 extern void exit_nicely(int code) __attribute__((noreturn));

 extern void simple_string_list_append(SimpleStringList *list, const char *val);
 extern bool simple_string_list_member(SimpleStringList *list, const char *val);

 #endif   /* DUMPUTILS_H */
src/bin/pg_dump/parallel.c (new file, 1293 lines; diff suppressed because it is too large)
src/bin/pg_dump/parallel.h (new file, 85 lines)

@@ -0,0 +1,85 @@
+/*-------------------------------------------------------------------------
+ *
+ * parallel.h
+ *
+ *	Parallel support header file for the pg_dump archiver
+ *
+ * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *	The author is not responsible for loss or damages that may
+ *	result from its use.
+ *
+ * IDENTIFICATION
+ *		src/bin/pg_dump/parallel.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "pg_backup_db.h"
+
+struct _archiveHandle;
+struct _tocEntry;
+
+typedef enum
+{
+	WRKR_TERMINATED = 0,
+	WRKR_IDLE,
+	WRKR_WORKING,
+	WRKR_FINISHED
+} T_WorkerStatus;
+
+typedef enum T_Action
+{
+	ACT_DUMP,
+	ACT_RESTORE,
+} T_Action;
+
+/* Arguments needed for a worker process */
+typedef struct ParallelArgs
+{
+	struct _archiveHandle *AH;
+	struct _tocEntry *te;
+} ParallelArgs;
+
+/* State for each parallel activity slot */
+typedef struct ParallelSlot
+{
+	ParallelArgs *args;
+	T_WorkerStatus workerStatus;
+	int			status;
+	int			pipeRead;
+	int			pipeWrite;
+	int			pipeRevRead;
+	int			pipeRevWrite;
+#ifdef WIN32
+	uintptr_t	hThread;
+	unsigned int threadId;
+#else
+	pid_t		pid;
+#endif
+} ParallelSlot;
+
+#define NO_SLOT (-1)
+
+typedef struct ParallelState
+{
+	int			numWorkers;
+	ParallelSlot *parallelSlot;
+} ParallelState;
+
+extern int	GetIdleWorker(ParallelState *pstate);
+extern bool IsEveryWorkerIdle(ParallelState *pstate);
+extern void ListenToWorkers(struct _archiveHandle * AH, ParallelState *pstate, bool do_wait);
+extern int	ReapWorkerStatus(ParallelState *pstate, int *status);
+extern void EnsureIdleWorker(struct _archiveHandle * AH, ParallelState *pstate);
+extern void EnsureWorkersFinished(struct _archiveHandle * AH, ParallelState *pstate);
+
+extern ParallelState *ParallelBackupStart(struct _archiveHandle * AH,
+				   RestoreOptions *ropt);
+extern void DispatchJobForTocEntry(struct _archiveHandle * AH,
+					   ParallelState *pstate,
+					   struct _tocEntry * te, T_Action act);
+extern void ParallelBackupEnd(struct _archiveHandle * AH, ParallelState *pstate);
+
+extern void checkAborting(struct _archiveHandle * AH);
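The status routines declared above combine into a simple drain loop; this is roughly the job of EnsureWorkersFinished(). A sketch, under the assumption that any non-zero per-job status is fatal:

```c
/* Sketch: block until every worker slot is idle, harvesting job statuses. */
static void
drain_workers(ArchiveHandle *AH, ParallelState *pstate)
{
    int         status;

    while (!IsEveryWorkerIdle(pstate))
    {
        if (ReapWorkerStatus(pstate, &status) == NO_SLOT)
            ListenToWorkers(AH, pstate, true);  /* wait for a message */
        else if (status != 0)
            exit_horribly(NULL, "error processing a parallel work item\n");
    }
}
```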
|
@ -82,9 +82,14 @@ struct Archive
|
|||||||
int minRemoteVersion; /* allowable range */
|
int minRemoteVersion; /* allowable range */
|
||||||
int maxRemoteVersion;
|
int maxRemoteVersion;
|
||||||
|
|
||||||
|
int numWorkers; /* number of parallel processes */
|
||||||
|
char *sync_snapshot_id; /* sync snapshot id for parallel
|
||||||
|
* operation */
|
||||||
|
|
||||||
/* info needed for string escaping */
|
/* info needed for string escaping */
|
||||||
int encoding; /* libpq code for client_encoding */
|
int encoding; /* libpq code for client_encoding */
|
||||||
bool std_strings; /* standard_conforming_strings */
|
bool std_strings; /* standard_conforming_strings */
|
||||||
|
char *use_role; /* Issue SET ROLE to this */
|
||||||
|
|
||||||
/* error handling */
|
/* error handling */
|
||||||
bool exit_on_error; /* whether to exit on SQL errors... */
|
bool exit_on_error; /* whether to exit on SQL errors... */
|
||||||
@ -142,11 +147,12 @@ typedef struct _restoreOptions
|
|||||||
int suppressDumpWarnings; /* Suppress output of WARNING entries
|
int suppressDumpWarnings; /* Suppress output of WARNING entries
|
||||||
* to stderr */
|
* to stderr */
|
||||||
bool single_txn;
|
bool single_txn;
|
||||||
int number_of_jobs;
|
|
||||||
|
|
||||||
bool *idWanted; /* array showing which dump IDs to emit */
|
bool *idWanted; /* array showing which dump IDs to emit */
|
||||||
} RestoreOptions;
|
} RestoreOptions;
|
||||||
|
|
||||||
|
typedef void (*SetupWorkerPtr) (Archive *AH, RestoreOptions *ropt);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Main archiver interface.
|
* Main archiver interface.
|
||||||
*/
|
*/
|
||||||
@ -189,7 +195,8 @@ extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
|
|||||||
|
|
||||||
/* Create a new archive */
|
/* Create a new archive */
|
||||||
extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
|
extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
|
||||||
const int compression, ArchiveMode mode);
|
const int compression, ArchiveMode mode,
|
||||||
|
SetupWorkerPtr setupDumpWorker);
|
||||||
|
|
||||||
/* The --list option */
|
/* The --list option */
|
||||||
extern void PrintTOCSummary(Archive *AH, RestoreOptions *ropt);
|
extern void PrintTOCSummary(Archive *AH, RestoreOptions *ropt);
|
||||||
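The new SetupWorkerPtr argument to CreateArchive() is a hook that every freshly spawned worker runs to open its own database connection. A sketch of what such a hook could look like; the use of these RestoreOptions fields and of the archiver's existing ConnectDatabase() helper is an assumption about the wiring, not the commit's actual code:

```c
/*
 * Sketch of a SetupWorkerPtr hook: each spawned worker calls it once to
 * open a private database connection before accepting jobs.
 */
static void
setupWorkerConnection(Archive *AHX, RestoreOptions *ropt)
{
    ConnectDatabase(AHX, ropt->dbname, ropt->pghost, ropt->pgport,
                    ropt->username, ropt->promptPassword);
}
```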
(File diff suppressed because it is too large.)
src/bin/pg_dump/pg_backup_archiver.h

@@ -100,8 +100,21 @@ typedef z_stream *z_streamp;
 #define K_OFFSET_POS_SET 2
 #define K_OFFSET_NO_DATA 3

+/*
+ * Special exit values from worker children.  We reserve 0 for normal
+ * success; 1 and other small values should be interpreted as crashes.
+ */
+#define WORKER_OK					0
+#define WORKER_CREATE_DONE			10
+#define WORKER_INHIBIT_DATA			11
+#define WORKER_IGNORED_ERRORS		12
+
 struct _archiveHandle;
 struct _tocEntry;
+struct _restoreList;
+struct ParallelArgs;
+struct ParallelState;
+enum T_Action;

 typedef void (*ClosePtr) (struct _archiveHandle * AH);
 typedef void (*ReopenPtr) (struct _archiveHandle * AH);
@@ -129,6 +142,13 @@ typedef void (*PrintTocDataPtr) (struct _archiveHandle * AH, struct _tocEntry *
 typedef void (*ClonePtr) (struct _archiveHandle * AH);
 typedef void (*DeClonePtr) (struct _archiveHandle * AH);

+typedef char *(*WorkerJobRestorePtr) (struct _archiveHandle * AH, struct _tocEntry * te);
+typedef char *(*WorkerJobDumpPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
+typedef char *(*MasterStartParallelItemPtr) (struct _archiveHandle * AH, struct _tocEntry * te,
+										 enum T_Action act);
+typedef int (*MasterEndParallelItemPtr) (struct _archiveHandle * AH, struct _tocEntry * te,
+									  const char *str, enum T_Action act);
+
 typedef size_t (*CustomOutPtr) (struct _archiveHandle * AH, const void *buf, size_t len);

 typedef enum
@@ -227,6 +247,13 @@ typedef struct _archiveHandle
 	StartBlobPtr StartBlobPtr;
 	EndBlobPtr EndBlobPtr;

+	MasterStartParallelItemPtr MasterStartParallelItemPtr;
+	MasterEndParallelItemPtr MasterEndParallelItemPtr;
+
+	SetupWorkerPtr SetupWorkerPtr;
+	WorkerJobDumpPtr WorkerJobDumpPtr;
+	WorkerJobRestorePtr WorkerJobRestorePtr;
+
 	ClonePtr ClonePtr;			/* Clone format-specific fields */
 	DeClonePtr DeClonePtr;		/* Clean up cloned fields */

@@ -236,6 +263,7 @@ typedef struct _archiveHandle
 	char	   *archdbname;		/* DB name *read* from archive */
 	enum trivalue promptPassword;
 	char	   *savedPassword;	/* password for ropt->username, if known */
+	char	   *use_role;
 	PGconn	   *connection;
 	int			connectToDB;	/* Flag to indicate if direct DB connection is
 								 * required */
@@ -327,6 +355,7 @@ typedef struct _tocEntry
 	int			nLockDeps;		/* number of such dependencies */
 } TocEntry;

+extern int	parallel_restore(struct ParallelArgs * args);
 extern void on_exit_close_archive(Archive *AHX);

 extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt,...) __attribute__((format(PG_PRINTF_ATTRIBUTE, 3, 4)));
@@ -337,9 +366,13 @@ extern void WriteHead(ArchiveHandle *AH);
 extern void ReadHead(ArchiveHandle *AH);
 extern void WriteToc(ArchiveHandle *AH);
 extern void ReadToc(ArchiveHandle *AH);
-extern void WriteDataChunks(ArchiveHandle *AH);
+extern void WriteDataChunks(ArchiveHandle *AH, struct ParallelState *pstate);
+extern void WriteDataChunksForTocEntry(ArchiveHandle *AH, TocEntry *te);
+extern ArchiveHandle *CloneArchive(ArchiveHandle *AH);
+extern void DeCloneArchive(ArchiveHandle *AH);

 extern teReqs TocIDRequired(ArchiveHandle *AH, DumpId id);
+TocEntry   *getTocEntryByDumpId(ArchiveHandle *AH, DumpId id);
 extern bool checkSeek(FILE *fp);

 #define appendStringLiteralAHX(buf,str,AH) \
src/bin/pg_dump/pg_backup_custom.c

@@ -26,6 +26,7 @@

 #include "compress_io.h"
 #include "dumputils.h"
+#include "parallel.h"

 /*--------
  * Routines in the format interface
@@ -59,6 +60,10 @@ static void _LoadBlobs(ArchiveHandle *AH, bool drop);
 static void _Clone(ArchiveHandle *AH);
 static void _DeClone(ArchiveHandle *AH);

+static char *_MasterStartParallelItem(ArchiveHandle *AH, TocEntry *te, T_Action act);
+static int	_MasterEndParallelItem(ArchiveHandle *AH, TocEntry *te, const char *str, T_Action act);
+char	   *_WorkerJobRestoreCustom(ArchiveHandle *AH, TocEntry *te);
+
 typedef struct
 {
 	CompressorState *cs;
@@ -127,6 +132,13 @@ InitArchiveFmt_Custom(ArchiveHandle *AH)
 	AH->ClonePtr = _Clone;
 	AH->DeClonePtr = _DeClone;

+	AH->MasterStartParallelItemPtr = _MasterStartParallelItem;
+	AH->MasterEndParallelItemPtr = _MasterEndParallelItem;
+
+	/* no parallel dump in the custom archive, only parallel restore */
+	AH->WorkerJobDumpPtr = NULL;
+	AH->WorkerJobRestorePtr = _WorkerJobRestoreCustom;
+
 	/* Set up a private area. */
 	ctx = (lclContext *) pg_malloc0(sizeof(lclContext));
 	AH->formatData = (void *) ctx;
@@ -698,7 +710,7 @@ _CloseArchive(ArchiveHandle *AH)
 		tpos = ftello(AH->FH);
 		WriteToc(AH);
 		ctx->dataStart = _getFilePos(AH, ctx);
-		WriteDataChunks(AH);
+		WriteDataChunks(AH, NULL);

 		/*
 		 * If possible, re-write the TOC in order to update the data offset
@@ -796,6 +808,80 @@ _DeClone(ArchiveHandle *AH)
 	free(ctx);
 }

+/*
+ * This function is executed in the child of a parallel restore from a
+ * custom format archive and restores the actual data for one TOC entry.
+ */
+char *
+_WorkerJobRestoreCustom(ArchiveHandle *AH, TocEntry *te)
+{
+	/*
+	 * short fixed-size string + some ID so far, this needs to be malloc'ed
+	 * instead of static because we work with threads on windows
+	 */
+	const int	buflen = 64;
+	char	   *buf = (char *) pg_malloc(buflen);
+	ParallelArgs pargs;
+	int			status;
+
+	pargs.AH = AH;
+	pargs.te = te;
+
+	status = parallel_restore(&pargs);
+
+	snprintf(buf, buflen, "OK RESTORE %d %d %d", te->dumpId, status,
+			 status == WORKER_IGNORED_ERRORS ? AH->public.n_errors : 0);
+
+	return buf;
+}
+
+/*
+ * This function is executed in the parent process. Depending on the desired
+ * action (dump or restore) it creates a string that is understood by the
+ * _WorkerJobDump / _WorkerJobRestore functions of the dump format.
+ */
+static char *
+_MasterStartParallelItem(ArchiveHandle *AH, TocEntry *te, T_Action act)
+{
+	/*
+	 * A static char is okay here, even on Windows, because we call this
+	 * function only from one process (the master).
+	 */
+	static char buf[64];		/* short fixed-size string + number */
+
+	/* no parallel dump in the custom archive format */
+	Assert(act == ACT_RESTORE);
+
+	snprintf(buf, sizeof(buf), "RESTORE %d", te->dumpId);
+
+	return buf;
+}
+
+/*
+ * This function is executed in the parent process. It analyzes the response
+ * of the _WorkerJobDump / _WorkerJobRestore functions of the dump format.
+ */
+static int
+_MasterEndParallelItem(ArchiveHandle *AH, TocEntry *te, const char *str, T_Action act)
+{
+	DumpId		dumpId;
+	int			nBytes,
+				status,
+				n_errors;
+
+	/* no parallel dump in the custom archive */
+	Assert(act == ACT_RESTORE);
+
+	sscanf(str, "%u %u %u%n", &dumpId, &status, &n_errors, &nBytes);
+
+	Assert(nBytes == strlen(str));
+	Assert(dumpId == te->dumpId);
+
+	AH->public.n_errors += n_errors;
+
+	return status;
+}
+
 /*--------------------------------------------------
  * END OF FORMAT CALLBACKS
  *--------------------------------------------------
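The strings built by these callbacks are the entire master/worker wire protocol: short fixed-form text lines sent over a pipe. The following self-contained demo shows the round trip; it is not pg_dump code, and it assumes (as the Asserts in _MasterEndParallelItem suggest) that the master strips the leading "OK RESTORE " tag before parsing the payload:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char         cmd[64];
    char         reply[64];
    unsigned int dumpId = 42;   /* TOC entry to restore */
    unsigned int status = 0;    /* worker's job status */
    unsigned int n_errors = 0;  /* ignored-error count */
    int          nBytes;

    /* master -> worker: start the job */
    snprintf(cmd, sizeof(cmd), "RESTORE %u", dumpId);

    /* worker -> master: job finished */
    snprintf(reply, sizeof(reply), "OK RESTORE %u %u %u",
             dumpId, status, n_errors);

    /* master: parse the payload past the "OK RESTORE " tag */
    sscanf(reply + strlen("OK RESTORE "), "%u %u %u%n",
           &dumpId, &status, &n_errors, &nBytes);
    assert(nBytes == (int) strlen(reply + strlen("OK RESTORE ")));

    printf("job %u -> status %u, %u errors\n", dumpId, status, n_errors);
    return 0;
}
```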
src/bin/pg_dump/pg_backup_db.c

@@ -309,12 +309,30 @@ ConnectDatabase(Archive *AHX,
 	PQsetNoticeProcessor(AH->connection, notice_processor, NULL);
 }

+/*
+ * Close the connection to the database and also cancel the current query,
+ * if we have one running.
+ */
 void
 DisconnectDatabase(Archive *AHX)
 {
 	ArchiveHandle *AH = (ArchiveHandle *) AHX;
+	PGcancel   *cancel;
+	char		errbuf[1];
+
+	if (!AH->connection)
+		return;

-	PQfinish(AH->connection);	/* noop if AH->connection is NULL */
+	if (PQtransactionStatus(AH->connection) == PQTRANS_ACTIVE)
+	{
+		if ((cancel = PQgetCancel(AH->connection)))
+		{
+			PQcancel(cancel, errbuf, sizeof(errbuf));
+			PQfreeCancel(cancel);
+		}
+	}
+
+	PQfinish(AH->connection);
 	AH->connection = NULL;
 }
src/bin/pg_dump/pg_backup_directory.c

@@ -35,6 +35,7 @@

 #include "compress_io.h"
 #include "dumputils.h"
+#include "parallel.h"

 #include <dirent.h>
 #include <sys/stat.h>
@@ -50,6 +51,7 @@ typedef struct
 	cfp		   *dataFH;			/* currently open data file */

 	cfp		   *blobsTocFH;		/* file handle for blobs.toc */
+	ParallelState *pstate;		/* for parallel backup / restore */
 } lclContext;

 typedef struct
@@ -70,6 +72,7 @@ static int	_ReadByte(ArchiveHandle *);
 static size_t _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len);
 static size_t _ReadBuf(ArchiveHandle *AH, void *buf, size_t len);
 static void _CloseArchive(ArchiveHandle *AH);
+static void _ReopenArchive(ArchiveHandle *AH);
 static void _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);

 static void _WriteExtraToc(ArchiveHandle *AH, TocEntry *te);
@@ -82,8 +85,17 @@ static void _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
 static void _EndBlobs(ArchiveHandle *AH, TocEntry *te);
 static void _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt);

-static char *prependDirectory(ArchiveHandle *AH, const char *relativeFilename);
+static void _Clone(ArchiveHandle *AH);
+static void _DeClone(ArchiveHandle *AH);
+
+static char *_MasterStartParallelItem(ArchiveHandle *AH, TocEntry *te, T_Action act);
+static int	_MasterEndParallelItem(ArchiveHandle *AH, TocEntry *te,
+					   const char *str, T_Action act);
+static char *_WorkerJobRestoreDirectory(ArchiveHandle *AH, TocEntry *te);
+static char *_WorkerJobDumpDirectory(ArchiveHandle *AH, TocEntry *te);
+
+static void setFilePath(ArchiveHandle *AH, char *buf,
+			const char *relativeFilename);

 /*
  * Init routine required by ALL formats. This is a global routine
@@ -110,7 +122,7 @@ InitArchiveFmt_Directory(ArchiveHandle *AH)
 	AH->WriteBufPtr = _WriteBuf;
 	AH->ReadBufPtr = _ReadBuf;
 	AH->ClosePtr = _CloseArchive;
-	AH->ReopenPtr = NULL;
+	AH->ReopenPtr = _ReopenArchive;
 	AH->PrintTocDataPtr = _PrintTocData;
 	AH->ReadExtraTocPtr = _ReadExtraToc;
 	AH->WriteExtraTocPtr = _WriteExtraToc;
@@ -121,8 +133,14 @@ InitArchiveFmt_Directory(ArchiveHandle *AH)
 	AH->EndBlobPtr = _EndBlob;
 	AH->EndBlobsPtr = _EndBlobs;

-	AH->ClonePtr = NULL;
-	AH->DeClonePtr = NULL;
+	AH->ClonePtr = _Clone;
+	AH->DeClonePtr = _DeClone;
+
+	AH->WorkerJobRestorePtr = _WorkerJobRestoreDirectory;
+	AH->WorkerJobDumpPtr = _WorkerJobDumpDirectory;
+
+	AH->MasterStartParallelItemPtr = _MasterStartParallelItem;
+	AH->MasterEndParallelItemPtr = _MasterEndParallelItem;

 	/* Set up our private context */
 	ctx = (lclContext *) pg_malloc0(sizeof(lclContext));
@@ -146,16 +164,41 @@ InitArchiveFmt_Directory(ArchiveHandle *AH)

 	if (AH->mode == archModeWrite)
 	{
-		if (mkdir(ctx->directory, 0700) < 0)
+		struct stat st;
+		bool		is_empty = false;
+
+		/* we accept an empty existing directory */
+		if (stat(ctx->directory, &st) == 0 && S_ISDIR(st.st_mode))
+		{
+			DIR		   *dir = opendir(ctx->directory);
+
+			if (dir)
+			{
+				struct dirent *d;
+
+				is_empty = true;
+				while ((d = readdir(dir)))
+				{
+					if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+					{
+						is_empty = false;
+						break;
+					}
+				}
+				closedir(dir);
+			}
+		}
+
+		if (!is_empty && mkdir(ctx->directory, 0700) < 0)
 			exit_horribly(modulename, "could not create directory \"%s\": %s\n",
 						  ctx->directory, strerror(errno));
 	}
 	else
 	{							/* Read Mode */
-		char	   *fname;
+		char		fname[MAXPGPATH];
 		cfp		   *tocFH;

-		fname = prependDirectory(AH, "toc.dat");
+		setFilePath(AH, fname, "toc.dat");

 		tocFH = cfopen_read(fname, PG_BINARY_R);
 		if (tocFH == NULL)
@@ -281,9 +324,9 @@ _StartData(ArchiveHandle *AH, TocEntry *te)
 {
 	lclTocEntry *tctx = (lclTocEntry *) te->formatData;
 	lclContext *ctx = (lclContext *) AH->formatData;
-	char	   *fname;
+	char		fname[MAXPGPATH];

-	fname = prependDirectory(AH, tctx->filename);
+	setFilePath(AH, fname, tctx->filename);

 	ctx->dataFH = cfopen_write(fname, PG_BINARY_W, AH->compression);
 	if (ctx->dataFH == NULL)
@@ -308,6 +351,9 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
 	if (dLen == 0)
 		return 0;

+	/* Are we aborting? */
+	checkAborting(AH);
+
 	return cfwrite(data, dLen, ctx->dataFH);
 }

@@ -375,8 +421,9 @@ _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
 		_LoadBlobs(AH, ropt);
 	else
 	{
-		char	   *fname = prependDirectory(AH, tctx->filename);
+		char		fname[MAXPGPATH];

+		setFilePath(AH, fname, tctx->filename);
 		_PrintFileData(AH, fname, ropt);
 	}
 }
@@ -386,12 +433,12 @@ _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt)
 {
 	Oid			oid;
 	lclContext *ctx = (lclContext *) AH->formatData;
-	char	   *fname;
+	char		fname[MAXPGPATH];
 	char		line[MAXPGPATH];

 	StartRestoreBlobs(AH);

-	fname = prependDirectory(AH, "blobs.toc");
+	setFilePath(AH, fname, "blobs.toc");

 	ctx->blobsTocFH = cfopen_read(fname, PG_BINARY_R);

@@ -474,6 +521,9 @@ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
 	lclContext *ctx = (lclContext *) AH->formatData;
 	size_t		res;

+	/* Are we aborting? */
+	checkAborting(AH);
+
 	res = cfwrite(buf, len, ctx->dataFH);
 	if (res != len)
 		exit_horribly(modulename, "could not write to output file: %s\n",
@@ -518,7 +568,12 @@ _CloseArchive(ArchiveHandle *AH)
 	if (AH->mode == archModeWrite)
 	{
 		cfp		   *tocFH;
-		char	   *fname = prependDirectory(AH, "toc.dat");
+		char		fname[MAXPGPATH];
+
+		setFilePath(AH, fname, "toc.dat");
+
+		/* this will actually fork the processes for a parallel backup */
+		ctx->pstate = ParallelBackupStart(AH, NULL);

 		/* The TOC is always created uncompressed */
 		tocFH = cfopen_write(fname, PG_BINARY_W, 0);
@@ -539,11 +594,25 @@ _CloseArchive(ArchiveHandle *AH)
 		if (cfclose(tocFH) != 0)
 			exit_horribly(modulename, "could not close TOC file: %s\n",
 						  strerror(errno));
-		WriteDataChunks(AH);
+		WriteDataChunks(AH, ctx->pstate);
+
+		ParallelBackupEnd(AH, ctx->pstate);
 	}
 	AH->FH = NULL;
 }

+/*
+ * Reopen the archive's file handle.
+ */
+static void
+_ReopenArchive(ArchiveHandle *AH)
+{
+	/*
+	 * Our TOC is in memory, and our data files are opened separately by
+	 * each child anyway, so we support reopening the archive by just doing
+	 * nothing.
+	 */
+}
+
 /*
  * BLOB support
@@ -560,9 +629,9 @@ static void
 _StartBlobs(ArchiveHandle *AH, TocEntry *te)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
-	char	   *fname;
+	char		fname[MAXPGPATH];

-	fname = prependDirectory(AH, "blobs.toc");
+	setFilePath(AH, fname, "blobs.toc");

 	/* The blob TOC file is never compressed */
 	ctx->blobsTocFH = cfopen_write(fname, "ab", 0);
@@ -627,12 +696,16 @@ _EndBlobs(ArchiveHandle *AH, TocEntry *te)
 	ctx->blobsTocFH = NULL;
 }

-static char *
-prependDirectory(ArchiveHandle *AH, const char *relativeFilename)
+/*
+ * Gets a relative file name and prepends the output directory, writing the
+ * result to buf. The caller needs to make sure that buf is MAXPGPATH bytes
+ * big. Can't use a static char[MAXPGPATH] inside the function because we run
+ * multithreaded on Windows.
+ */
+static void
+setFilePath(ArchiveHandle *AH, char *buf, const char *relativeFilename)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
-	static char buf[MAXPGPATH];
 	char	   *dname;

 	dname = ctx->directory;
@@ -643,6 +716,157 @@ prependDirectory(ArchiveHandle *AH, const char *relativeFilename)
 	strcpy(buf, dname);
 	strcat(buf, "/");
 	strcat(buf, relativeFilename);
+}
+
+/*
+ * Clone format-specific fields during parallel restoration.
+ */
+static void
+_Clone(ArchiveHandle *AH)
+{
+	lclContext *ctx = (lclContext *) AH->formatData;
+
+	AH->formatData = (lclContext *) pg_malloc(sizeof(lclContext));
+	memcpy(AH->formatData, ctx, sizeof(lclContext));
+	ctx = (lclContext *) AH->formatData;
+
+	/*
+	 * Note: we do not make a local lo_buf because we expect at most one
+	 * BLOBS entry per archive, so no parallelism is possible. Likewise,
+	 * TOC-entry-local state isn't an issue because any one TOC entry is
+	 * touched by just one worker child.
+	 */
+
+	/*
+	 * We also don't copy the ParallelState pointer (pstate); only the
+	 * master process ever writes to it.
+	 */
+}
+
+static void
+_DeClone(ArchiveHandle *AH)
+{
+	lclContext *ctx = (lclContext *) AH->formatData;
+
+	free(ctx);
+}
+
+/*
+ * This function is executed in the parent process. Depending on the desired
+ * action (dump or restore) it creates a string that is understood by the
+ * _WorkerJobDump / _WorkerJobRestore functions of the dump format.
+ */
+static char *
+_MasterStartParallelItem(ArchiveHandle *AH, TocEntry *te, T_Action act)
+{
+	/*
+	 * A static char is okay here, even on Windows, because we call this
+	 * function only from one process (the master).
+	 */
+	static char buf[64];

+	if (act == ACT_DUMP)
+		snprintf(buf, sizeof(buf), "DUMP %d", te->dumpId);
+	else if (act == ACT_RESTORE)
+		snprintf(buf, sizeof(buf), "RESTORE %d", te->dumpId);
+
+	return buf;
+}
+
+/*
+ * This function is executed in the child of a parallel backup for the
+ * directory archive format and dumps the actual data for one table.
+ *
+ * We are currently returning only the DumpId so theoretically we could
+ * make this function return an int (or a DumpId). However, to facilitate
+ * further enhancements and because sooner or later we need to convert this
+ * to a string and send it via a message anyway, we stick with char *. It is
+ * parsed on the other side by the _MasterEndParallelItem() function of the
+ * respective dump format.
+ */
+static char *
+_WorkerJobDumpDirectory(ArchiveHandle *AH, TocEntry *te)
+{
+	/*
+	 * short fixed-size string + some ID so far, this needs to be malloc'ed
+	 * instead of static because we work with threads on windows
+	 */
+	const int	buflen = 64;
+	char	   *buf = (char *) pg_malloc(buflen);
+	lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+	/* This should never happen */
+	if (!tctx)
+		exit_horribly(modulename, "Error during backup\n");
+
+	/*
+	 * WriteDataChunksForTocEntry() returns void. We either fail and die
+	 * horribly or succeed... A failure will be detected by the parent when
+	 * the child dies unexpectedly.
+	 */
+	WriteDataChunksForTocEntry(AH, te);
+
+	snprintf(buf, buflen, "OK DUMP %d", te->dumpId);
+
+	return buf;
+}
+
+/*
+ * This function is executed in the child of a parallel restore from a
+ * directory-format archive and restores the actual data for one TOC entry.
+ */
+static char *
+_WorkerJobRestoreDirectory(ArchiveHandle *AH, TocEntry *te)
+{
+	/*
+	 * short fixed-size string + some ID so far, this needs to be malloc'ed
+	 * instead of static because we work with threads on windows
+	 */
+	const int	buflen = 64;
+	char	   *buf = (char *) pg_malloc(buflen);
+	ParallelArgs pargs;
+	int			status;
+
+	pargs.AH = AH;
+	pargs.te = te;
+
+	status = parallel_restore(&pargs);
+
+	snprintf(buf, buflen, "OK RESTORE %d %d %d", te->dumpId, status,
+			 status == WORKER_IGNORED_ERRORS ? AH->public.n_errors : 0);
+
+	return buf;
+}
+
+/*
+ * This function is executed in the parent process. It analyzes the response
+ * of the _WorkerJobDumpDirectory / _WorkerJobRestoreDirectory functions of
+ * the respective dump format.
+ */
+static int
+_MasterEndParallelItem(ArchiveHandle *AH, TocEntry *te, const char *str, T_Action act)
+{
+	DumpId		dumpId;
+	int			nBytes,
+				n_errors;
+	int			status = 0;
+
+	if (act == ACT_DUMP)
+	{
+		sscanf(str, "%u%n", &dumpId, &nBytes);
+
+		Assert(dumpId == te->dumpId);
+		Assert(nBytes == strlen(str));
+	}
+	else if (act == ACT_RESTORE)
+	{
+		sscanf(str, "%u %u %u%n", &dumpId, &status, &n_errors, &nBytes);
+
+		Assert(dumpId == te->dumpId);
+		Assert(nBytes == strlen(str));
+
+		AH->public.n_errors += n_errors;
+	}
+
+	return status;
+}
src/bin/pg_dump/pg_backup_tar.c

@@ -158,6 +158,12 @@ InitArchiveFmt_Tar(ArchiveHandle *AH)
 	AH->ClonePtr = NULL;
 	AH->DeClonePtr = NULL;

+	AH->MasterStartParallelItemPtr = NULL;
+	AH->MasterEndParallelItemPtr = NULL;
+
+	AH->WorkerJobDumpPtr = NULL;
+	AH->WorkerJobRestorePtr = NULL;
+
 	/*
 	 * Set up some special context used in compressing data.
 	 */
@@ -828,7 +834,7 @@ _CloseArchive(ArchiveHandle *AH)
 	/*
 	 * Now send the data (tables & blobs)
 	 */
-	WriteDataChunks(AH);
+	WriteDataChunks(AH, NULL);

 	/*
 	 * Now this format wants to append a script which does a full restore
@ -135,6 +135,7 @@ static int disable_dollar_quoting = 0;
 static int	dump_inserts = 0;
 static int	column_inserts = 0;
 static int	no_security_labels = 0;
+static int	no_synchronized_snapshots = 0;
 static int	no_unlogged_table_data = 0;
 static int	serializable_deferrable = 0;
 
@ -243,8 +244,6 @@ static Oid findLastBuiltinOid_V70(Archive *fout);
 static void selectSourceSchema(Archive *fout, const char *schemaName);
 static char *getFormattedTypeName(Archive *fout, Oid oid, OidOptions opts);
 static char *myFormatType(const char *typname, int32 typmod);
-static const char *fmtQualifiedId(Archive *fout,
-			   const char *schema, const char *id);
 static void getBlobs(Archive *fout);
 static void dumpBlob(Archive *fout, BlobInfo *binfo);
 static int	dumpBlobs(Archive *fout, void *arg);
@ -262,8 +261,10 @@ static void binary_upgrade_extension_member(PQExpBuffer upgrade_buffer,
 					DumpableObject *dobj,
 					const char *objlabel);
 static const char *getAttrName(int attrnum, TableInfo *tblInfo);
-static const char *fmtCopyColumnList(const TableInfo *ti);
+static const char *fmtCopyColumnList(const TableInfo *ti, PQExpBuffer buffer);
+static char *get_synchronized_snapshot(Archive *fout);
 static PGresult *ExecuteSqlQueryForSingleRow(Archive *fout, char *query);
+static void setupDumpWorker(Archive *AHX, RestoreOptions *ropt);
 
 
 int
@ -284,6 +285,7 @@ main(int argc, char **argv)
 	int			numObjs;
 	DumpableObject *boundaryObjs;
 	int			i;
+	int			numWorkers = 1;
 	enum trivalue prompt_password = TRI_DEFAULT;
 	int			compressLevel = -1;
 	int			plainText = 0;
@ -314,6 +316,7 @@ main(int argc, char **argv)
 		{"format", required_argument, NULL, 'F'},
 		{"host", required_argument, NULL, 'h'},
 		{"ignore-version", no_argument, NULL, 'i'},
+		{"jobs", 1, NULL, 'j'},
 		{"no-reconnect", no_argument, NULL, 'R'},
 		{"oids", no_argument, NULL, 'o'},
 		{"no-owner", no_argument, NULL, 'O'},
@ -353,6 +356,7 @@ main(int argc, char **argv)
 		{"serializable-deferrable", no_argument, &serializable_deferrable, 1},
 		{"use-set-session-authorization", no_argument, &use_setsessauth, 1},
 		{"no-security-labels", no_argument, &no_security_labels, 1},
+		{"no-synchronized-snapshots", no_argument, &no_synchronized_snapshots, 1},
 		{"no-unlogged-table-data", no_argument, &no_unlogged_table_data, 1},
 
 		{NULL, 0, NULL, 0}
@ -360,6 +364,12 @@ main(int argc, char **argv)
 
 	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_dump"));
 
+	/*
+	 * Initialize what we need for parallel execution, especially for thread
+	 * support on Windows.
+	 */
+	init_parallel_dump_utils();
+
 	g_verbose = false;
 
 	strcpy(g_comment_start, "-- ");
@ -390,7 +400,7 @@ main(int argc, char **argv)
 		}
 	}
 
-	while ((c = getopt_long(argc, argv, "abcCd:E:f:F:h:iK:n:N:oOp:RsS:t:T:U:vwWxZ:",
+	while ((c = getopt_long(argc, argv, "abcCd:E:f:F:h:ij:K:n:N:oOp:RsS:t:T:U:vwWxZ:",
 							long_options, &optindex)) != -1)
 	{
 		switch (c)
@ -435,6 +445,10 @@ main(int argc, char **argv)
 				/* ignored, deprecated option */
 				break;
 
+			case 'j':			/* number of dump jobs */
+				numWorkers = atoi(optarg);
+				break;
+
 			case 'n':			/* include schema(s) */
 				simple_string_list_append(&schema_include_patterns, optarg);
 				include_everything = false;
@ -577,8 +591,25 @@ main(int argc, char **argv)
 		compressLevel = 0;
 	}
 
+	/*
+	 * On Windows we can only have at most MAXIMUM_WAIT_OBJECTS (= 64 usually)
+	 * parallel jobs because that's the maximum limit for the
+	 * WaitForMultipleObjects() call.
+	 */
+	if (numWorkers <= 0
+#ifdef WIN32
+		|| numWorkers > MAXIMUM_WAIT_OBJECTS
+#endif
+		)
+		exit_horribly(NULL, "%s: invalid number of parallel jobs\n", progname);
+
+	/* Parallel backup only in the directory archive format so far */
+	if (archiveFormat != archDirectory && numWorkers > 1)
+		exit_horribly(NULL, "parallel backup only supported by the directory format\n");
+
 	/* Open the output file */
-	fout = CreateArchive(filename, archiveFormat, compressLevel, archiveMode);
+	fout = CreateArchive(filename, archiveFormat, compressLevel, archiveMode,
+						 setupDumpWorker);
 
 	/* Register the cleanup hook */
 	on_exit_close_archive(fout);
@ -600,6 +631,8 @@ main(int argc, char **argv)
 	fout->minRemoteVersion = 70000;
 	fout->maxRemoteVersion = (my_version / 100) * 100 + 99;
 
+	fout->numWorkers = numWorkers;
+
 	/*
	 * Open the database using the Archiver, so it knows about it. Errors mean
	 * death.
@ -621,6 +654,7 @@ main(int argc, char **argv)
 	if (fout->remoteVersion >= 90000)
 	{
 		PGresult   *res = ExecuteSqlQueryForSingleRow(fout, "SELECT pg_catalog.pg_is_in_recovery()");
+
 		if (strcmp(PQgetvalue(res, 0, 0), "t") == 0)
 		{
 			/*
@ -632,32 +666,6 @@ main(int argc, char **argv)
 		PQclear(res);
 	}
 
-	/*
-	 * Start transaction-snapshot mode transaction to dump consistent data.
-	 */
-	ExecuteSqlStatement(fout, "BEGIN");
-	if (fout->remoteVersion >= 90100)
-	{
-		if (serializable_deferrable)
-			ExecuteSqlStatement(fout,
-								"SET TRANSACTION ISOLATION LEVEL "
-								"SERIALIZABLE, READ ONLY, DEFERRABLE");
-		else
-			ExecuteSqlStatement(fout,
-								"SET TRANSACTION ISOLATION LEVEL "
-								"REPEATABLE READ, READ ONLY");
-	}
-	else if (fout->remoteVersion >= 70400)
-	{
-		/* note: comma was not accepted in SET TRANSACTION before 8.0 */
-		ExecuteSqlStatement(fout,
-							"SET TRANSACTION ISOLATION LEVEL "
-							"SERIALIZABLE READ ONLY");
-	}
-	else
-		ExecuteSqlStatement(fout,
-							"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
-
 	/* Select the appropriate subquery to convert user IDs to names */
 	if (fout->remoteVersion >= 80100)
 		username_subquery = "SELECT rolname FROM pg_catalog.pg_roles WHERE oid =";
@ -666,6 +674,14 @@ main(int argc, char **argv)
 	else
 		username_subquery = "SELECT usename FROM pg_user WHERE usesysid =";
 
+	/* check the version for the synchronized snapshots feature */
+	if (numWorkers > 1 && fout->remoteVersion < 90200
+		&& !no_synchronized_snapshots)
+		exit_horribly(NULL,
+					  "No synchronized snapshots available in this server version.\n"
+					  "Run with --no-synchronized-snapshots instead if you do not\n"
+					  "need synchronized snapshots.\n");
+
 	/* Find the last built-in OID, if needed */
 	if (fout->remoteVersion < 70300)
 	{
@ -763,6 +779,10 @@ main(int argc, char **argv)
 	else
 		sortDumpableObjectsByTypeOid(dobjs, numObjs);
 
+	/* If we do a parallel dump, we want the largest tables to go first */
+	if (archiveFormat == archDirectory && numWorkers > 1)
+		sortDataAndIndexObjectsBySize(dobjs, numObjs);
+
 	sortDumpableObjects(dobjs, numObjs,
 						boundaryObjs[0].dumpId, boundaryObjs[1].dumpId);
 
@ -810,9 +830,9 @@ main(int argc, char **argv)
 	SetArchiveRestoreOptions(fout, ropt);
 
 	/*
-	 * The archive's TOC entries are now marked as to which ones will
-	 * actually be output, so we can set up their dependency lists properly.
-	 * This isn't necessary for plain-text output, though.
+	 * The archive's TOC entries are now marked as to which ones will actually
+	 * be output, so we can set up their dependency lists properly. This isn't
+	 * necessary for plain-text output, though.
 	 */
 	if (!plainText)
 		BuildArchiveDependencies(fout);
@ -844,6 +864,7 @@ help(const char *progname)
 	printf(_("  -f, --file=FILENAME          output file or directory name\n"));
 	printf(_("  -F, --format=c|d|t|p         output file format (custom, directory, tar,\n"
 			 "                               plain text (default))\n"));
+	printf(_("  -j, --jobs=NUM               use this many parallel jobs to dump\n"));
 	printf(_("  -v, --verbose                verbose mode\n"));
 	printf(_("  -V, --version                output version information, then exit\n"));
 	printf(_("  -Z, --compress=0-9           compression level for compressed formats\n"));
@ -873,6 +894,7 @@ help(const char *progname)
 	printf(_("  --exclude-table-data=TABLE   do NOT dump data for the named table(s)\n"));
 	printf(_("  --inserts                    dump data as INSERT commands, rather than COPY\n"));
 	printf(_("  --no-security-labels         do not dump security label assignments\n"));
+	printf(_("  --no-synchronized-snapshots  parallel processes should not use synchronized snapshots\n"));
 	printf(_("  --no-tablespaces             do not dump tablespace assignments\n"));
 	printf(_("  --no-unlogged-table-data     do not dump unlogged table data\n"));
 	printf(_("  --quote-all-identifiers      quote all identifiers, even if not key words\n"));
@ -902,7 +924,12 @@ setup_connection(Archive *AH, const char *dumpencoding, char *use_role)
 	PGconn	   *conn = GetConnection(AH);
 	const char *std_strings;
 
-	/* Set the client encoding if requested */
+	/*
+	 * Set the client encoding if requested. If dumpencoding == NULL then
+	 * either it hasn't been requested or we're a cloned connection and then
+	 * this has already been set in CloneArchive according to the original
+	 * connection encoding.
+	 */
 	if (dumpencoding)
 	{
 		if (PQsetClientEncoding(conn, dumpencoding) < 0)
@ -919,6 +946,10 @@ setup_connection(Archive *AH, const char *dumpencoding, char *use_role)
 	std_strings = PQparameterStatus(conn, "standard_conforming_strings");
 	AH->std_strings = (std_strings && strcmp(std_strings, "on") == 0);
 
+	/* Set the role if requested */
+	if (!use_role && AH->use_role)
+		use_role = AH->use_role;
+
 	/* Set the role if requested */
 	if (use_role && AH->remoteVersion >= 80100)
 	{
@ -927,6 +958,10 @@ setup_connection(Archive *AH, const char *dumpencoding, char *use_role)
 		appendPQExpBuffer(query, "SET ROLE %s", fmtId(use_role));
 		ExecuteSqlStatement(AH, query->data);
 		destroyPQExpBuffer(query);
+
+		/* save this for later use on parallel connections */
+		if (!AH->use_role)
+			AH->use_role = strdup(use_role);
 	}
 
 	/* Set the datestyle to ISO to ensure the dump's portability */
@ -965,6 +1000,68 @@ setup_connection(Archive *AH, const char *dumpencoding, char *use_role)
 	 */
 	if (quote_all_identifiers && AH->remoteVersion >= 90100)
 		ExecuteSqlStatement(AH, "SET quote_all_identifiers = true");
+
+	/*
+	 * Start transaction-snapshot mode transaction to dump consistent data.
+	 */
+	ExecuteSqlStatement(AH, "BEGIN");
+	if (AH->remoteVersion >= 90100)
+	{
+		if (serializable_deferrable)
+			ExecuteSqlStatement(AH,
+								"SET TRANSACTION ISOLATION LEVEL "
+								"SERIALIZABLE, READ ONLY, DEFERRABLE");
+		else
+			ExecuteSqlStatement(AH,
+								"SET TRANSACTION ISOLATION LEVEL "
+								"REPEATABLE READ, READ ONLY");
+	}
+	else if (AH->remoteVersion >= 70400)
+	{
+		/* note: comma was not accepted in SET TRANSACTION before 8.0 */
+		ExecuteSqlStatement(AH,
+							"SET TRANSACTION ISOLATION LEVEL "
+							"SERIALIZABLE READ ONLY");
+	}
+	else
+		ExecuteSqlStatement(AH,
+							"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
+
+	if (AH->numWorkers > 1 && AH->remoteVersion >= 90200 && !no_synchronized_snapshots)
+	{
+		if (AH->sync_snapshot_id)
+		{
+			PQExpBuffer query = createPQExpBuffer();
+
+			appendPQExpBuffer(query, "SET TRANSACTION SNAPSHOT ");
+			appendStringLiteralConn(query, AH->sync_snapshot_id, conn);
+			ExecuteSqlStatement(AH, query->data);
+			destroyPQExpBuffer(query);
+		}
+		else
+			AH->sync_snapshot_id = get_synchronized_snapshot(AH);
+	}
+}
+
+static void
+setupDumpWorker(Archive *AHX, RestoreOptions *ropt)
+{
+	setup_connection(AHX, NULL, NULL);
+}
+
+static char *
+get_synchronized_snapshot(Archive *fout)
+{
+	char	   *query = "SELECT pg_export_snapshot()";
+	char	   *result;
+	PGresult   *res;
+
+	res = ExecuteSqlQueryForSingleRow(fout, query);
+	result = strdup(PQgetvalue(res, 0, 0));
+	PQclear(res);
+
+	return result;
 }
 
 static ArchiveFormat
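The snapshot handoff added to setup_connection() above is straightforward to reproduce in isolation: the first connection exports its snapshot via pg_export_snapshot() (server 9.2 or later), and each clone runs SET TRANSACTION SNAPSHOT as the very first thing in its own transaction. A hedged libpq sketch of just that handoff (placeholder connection strings, no error handling; the real code also escapes the snapshot ID with appendStringLiteralConn):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/* both connections must reach the same server; connstr is a placeholder */
	PGconn	   *master = PQconnectdb("dbname=postgres");
	PGconn	   *worker = PQconnectdb("dbname=postgres");
	PGresult   *res;
	char		sql[96];

	/* master: open a REPEATABLE READ transaction and export its snapshot */
	PQclear(PQexec(master, "BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY"));
	res = PQexec(master, "SELECT pg_export_snapshot()");
	snprintf(sql, sizeof(sql), "SET TRANSACTION SNAPSHOT '%s'",
			 PQgetvalue(res, 0, 0));	/* ID format varies by version */
	PQclear(res);

	/* worker: must import the snapshot before running any query */
	PQclear(PQexec(worker, "BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY"));
	PQclear(PQexec(worker, sql));

	/* both connections now see exactly the same committed data */

	PQfinish(worker);
	PQfinish(master);
	return 0;
}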
@ -1282,6 +1379,12 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 	const bool	hasoids = tbinfo->hasoids;
 	const bool	oids = tdinfo->oids;
 	PQExpBuffer q = createPQExpBuffer();
+
+	/*
+	 * Note: can't use getThreadLocalPQExpBuffer() here, we're calling fmtId
+	 * which uses it already.
+	 */
+	PQExpBuffer clistBuf = createPQExpBuffer();
 	PGconn	   *conn = GetConnection(fout);
 	PGresult   *res;
 	int			ret;
@ -1306,14 +1409,14 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 	 * cases involving ADD COLUMN and inheritance.)
 	 */
 	if (fout->remoteVersion >= 70300)
-		column_list = fmtCopyColumnList(tbinfo);
+		column_list = fmtCopyColumnList(tbinfo, clistBuf);
 	else
 		column_list = "";		/* can't select columns in COPY */
 
 	if (oids && hasoids)
 	{
 		appendPQExpBuffer(q, "COPY %s %s WITH OIDS TO stdout;",
-						  fmtQualifiedId(fout,
+						  fmtQualifiedId(fout->remoteVersion,
 										 tbinfo->dobj.namespace->dobj.name,
 										 classname),
 						  column_list);
@ -1331,7 +1434,7 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 		else
 			appendPQExpBufferStr(q, "* ");
 		appendPQExpBuffer(q, "FROM %s %s) TO stdout;",
-						  fmtQualifiedId(fout,
+						  fmtQualifiedId(fout->remoteVersion,
 										 tbinfo->dobj.namespace->dobj.name,
 										 classname),
 						  tdinfo->filtercond);
@ -1339,13 +1442,14 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 	else
 	{
 		appendPQExpBuffer(q, "COPY %s %s TO stdout;",
-						  fmtQualifiedId(fout,
+						  fmtQualifiedId(fout->remoteVersion,
 										 tbinfo->dobj.namespace->dobj.name,
 										 classname),
 						  column_list);
 	}
 	res = ExecuteSqlQuery(fout, q->data, PGRES_COPY_OUT);
 	PQclear(res);
+	destroyPQExpBuffer(clistBuf);
 
 	for (;;)
 	{
@ -1464,7 +1568,7 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 	{
 		appendPQExpBuffer(q, "DECLARE _pg_dump_cursor CURSOR FOR "
 						  "SELECT * FROM ONLY %s",
-						  fmtQualifiedId(fout,
+						  fmtQualifiedId(fout->remoteVersion,
 										 tbinfo->dobj.namespace->dobj.name,
 										 classname));
 	}
@ -1472,7 +1576,7 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 	{
 		appendPQExpBuffer(q, "DECLARE _pg_dump_cursor CURSOR FOR "
 						  "SELECT * FROM %s",
-						  fmtQualifiedId(fout,
+						  fmtQualifiedId(fout->remoteVersion,
 										 tbinfo->dobj.namespace->dobj.name,
 										 classname));
 	}
@ -1604,6 +1708,7 @@ dumpTableData(Archive *fout, TableDataInfo *tdinfo)
 {
 	TableInfo  *tbinfo = tdinfo->tdtable;
 	PQExpBuffer copyBuf = createPQExpBuffer();
+	PQExpBuffer clistBuf = createPQExpBuffer();
 	DataDumperPtr dumpFn;
 	char	   *copyStmt;
 
@ -1615,7 +1720,7 @@ dumpTableData(Archive *fout, TableDataInfo *tdinfo)
 		appendPQExpBuffer(copyBuf, "COPY %s ",
 						  fmtId(tbinfo->dobj.name));
 		appendPQExpBuffer(copyBuf, "%s %sFROM stdin;\n",
-						  fmtCopyColumnList(tbinfo),
+						  fmtCopyColumnList(tbinfo, clistBuf),
 						  (tdinfo->oids && tbinfo->hasoids) ? "WITH OIDS " : "");
 		copyStmt = copyBuf->data;
 	}
@ -1640,6 +1745,7 @@ dumpTableData(Archive *fout, TableDataInfo *tdinfo)
 					 dumpFn, tdinfo);
 
 	destroyPQExpBuffer(copyBuf);
+	destroyPQExpBuffer(clistBuf);
 }
 
 /*
@ -4122,6 +4228,7 @@ getTables(Archive *fout, int *numTables)
 	int			i_reloptions;
 	int			i_toastreloptions;
 	int			i_reloftype;
+	int			i_relpages;
 
 	/* Make sure we are in proper schema */
 	selectSourceSchema(fout, "pg_catalog");
@ -4161,6 +4268,7 @@ getTables(Archive *fout, int *numTables)
 						  "c.relfrozenxid, tc.oid AS toid, "
 						  "tc.relfrozenxid AS tfrozenxid, "
 						  "c.relpersistence, pg_relation_is_scannable(c.oid) as isscannable, "
+						  "c.relpages, "
 						  "CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
 						  "d.refobjid AS owning_tab, "
 						  "d.refobjsubid AS owning_col, "
@ -4233,6 +4341,7 @@ getTables(Archive *fout, int *numTables)
 						  "c.relfrozenxid, tc.oid AS toid, "
 						  "tc.relfrozenxid AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "c.relpages, "
 						  "CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
 						  "d.refobjid AS owning_tab, "
 						  "d.refobjsubid AS owning_col, "
@ -4268,6 +4377,7 @@ getTables(Archive *fout, int *numTables)
 						  "c.relfrozenxid, tc.oid AS toid, "
 						  "tc.relfrozenxid AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "c.relpages, "
 						  "NULL AS reloftype, "
 						  "d.refobjid AS owning_tab, "
 						  "d.refobjsubid AS owning_col, "
@ -4303,6 +4413,7 @@ getTables(Archive *fout, int *numTables)
 						  "c.relfrozenxid, tc.oid AS toid, "
 						  "tc.relfrozenxid AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "c.relpages, "
 						  "NULL AS reloftype, "
 						  "d.refobjid AS owning_tab, "
 						  "d.refobjsubid AS owning_col, "
@ -4339,6 +4450,7 @@ getTables(Archive *fout, int *numTables)
 						  "0 AS toid, "
 						  "0 AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "relpages, "
 						  "NULL AS reloftype, "
 						  "d.refobjid AS owning_tab, "
 						  "d.refobjsubid AS owning_col, "
@ -4374,6 +4486,7 @@ getTables(Archive *fout, int *numTables)
 						  "0 AS toid, "
 						  "0 AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "relpages, "
 						  "NULL AS reloftype, "
 						  "d.refobjid AS owning_tab, "
 						  "d.refobjsubid AS owning_col, "
@ -4405,6 +4518,7 @@ getTables(Archive *fout, int *numTables)
 						  "0 AS toid, "
 						  "0 AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "relpages, "
 						  "NULL AS reloftype, "
 						  "NULL::oid AS owning_tab, "
 						  "NULL::int4 AS owning_col, "
@ -4431,6 +4545,7 @@ getTables(Archive *fout, int *numTables)
 						  "0 AS toid, "
 						  "0 AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "relpages, "
 						  "NULL AS reloftype, "
 						  "NULL::oid AS owning_tab, "
 						  "NULL::int4 AS owning_col, "
@ -4467,6 +4582,7 @@ getTables(Archive *fout, int *numTables)
 						  "0 AS toid, "
 						  "0 AS tfrozenxid, "
 						  "'p' AS relpersistence, 't'::bool as isscannable, "
+						  "0 AS relpages, "
 						  "NULL AS reloftype, "
 						  "NULL::oid AS owning_tab, "
 						  "NULL::int4 AS owning_col, "
@ -4515,6 +4631,7 @@ getTables(Archive *fout, int *numTables)
 	i_toastfrozenxid = PQfnumber(res, "tfrozenxid");
 	i_relpersistence = PQfnumber(res, "relpersistence");
 	i_isscannable = PQfnumber(res, "isscannable");
+	i_relpages = PQfnumber(res, "relpages");
 	i_owning_tab = PQfnumber(res, "owning_tab");
 	i_owning_col = PQfnumber(res, "owning_col");
 	i_reltablespace = PQfnumber(res, "reltablespace");
@ -4557,6 +4674,7 @@ getTables(Archive *fout, int *numTables)
 		tblinfo[i].hastriggers = (strcmp(PQgetvalue(res, i, i_relhastriggers), "t") == 0);
 		tblinfo[i].hasoids = (strcmp(PQgetvalue(res, i, i_relhasoids), "t") == 0);
 		tblinfo[i].isscannable = (strcmp(PQgetvalue(res, i, i_isscannable), "t") == 0);
+		tblinfo[i].relpages = atoi(PQgetvalue(res, i, i_relpages));
 		tblinfo[i].frozenxid = atooid(PQgetvalue(res, i, i_relfrozenxid));
 		tblinfo[i].toast_oid = atooid(PQgetvalue(res, i, i_toastoid));
 		tblinfo[i].toast_frozenxid = atooid(PQgetvalue(res, i, i_toastfrozenxid));
@ -4606,7 +4724,7 @@ getTables(Archive *fout, int *numTables)
 			resetPQExpBuffer(query);
 			appendPQExpBuffer(query,
 							  "LOCK TABLE %s IN ACCESS SHARE MODE",
-							  fmtQualifiedId(fout,
+							  fmtQualifiedId(fout->remoteVersion,
 											 tblinfo[i].dobj.namespace->dobj.name,
 											 tblinfo[i].dobj.name));
 			ExecuteSqlStatement(fout, query->data);
@ -4745,7 +4863,8 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 				i_conoid,
 				i_condef,
 				i_tablespace,
-				i_options;
+				i_options,
+				i_relpages;
 	int			ntups;
 
 	for (i = 0; i < numTables; i++)
@ -4790,6 +4909,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, "
 						  "t.relnatts AS indnkeys, "
 						  "i.indkey, i.indisclustered, "
+						  "t.relpages, "
 						  "c.contype, c.conname, "
 						  "c.condeferrable, c.condeferred, "
 						  "c.tableoid AS contableoid, "
@ -4815,6 +4935,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, "
 						  "t.relnatts AS indnkeys, "
 						  "i.indkey, i.indisclustered, "
+						  "t.relpages, "
 						  "c.contype, c.conname, "
 						  "c.condeferrable, c.condeferred, "
 						  "c.tableoid AS contableoid, "
@ -4843,6 +4964,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, "
 						  "t.relnatts AS indnkeys, "
 						  "i.indkey, i.indisclustered, "
+						  "t.relpages, "
 						  "c.contype, c.conname, "
 						  "c.condeferrable, c.condeferred, "
 						  "c.tableoid AS contableoid, "
@ -4871,6 +4993,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, "
 						  "t.relnatts AS indnkeys, "
 						  "i.indkey, i.indisclustered, "
+						  "t.relpages, "
 						  "c.contype, c.conname, "
 						  "c.condeferrable, c.condeferred, "
 						  "c.tableoid AS contableoid, "
@ -4899,6 +5022,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  "pg_get_indexdef(i.indexrelid) AS indexdef, "
 						  "t.relnatts AS indnkeys, "
 						  "i.indkey, false AS indisclustered, "
+						  "t.relpages, "
 						  "CASE WHEN i.indisprimary THEN 'p'::char "
 						  "ELSE '0'::char END AS contype, "
 						  "t.relname AS conname, "
@ -4925,6 +5049,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  "pg_get_indexdef(i.indexrelid) AS indexdef, "
 						  "t.relnatts AS indnkeys, "
 						  "i.indkey, false AS indisclustered, "
+						  "t.relpages, "
 						  "CASE WHEN i.indisprimary THEN 'p'::char "
 						  "ELSE '0'::char END AS contype, "
 						  "t.relname AS conname, "
@ -4953,6 +5078,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 		i_indnkeys = PQfnumber(res, "indnkeys");
 		i_indkey = PQfnumber(res, "indkey");
 		i_indisclustered = PQfnumber(res, "indisclustered");
+		i_relpages = PQfnumber(res, "relpages");
 		i_contype = PQfnumber(res, "contype");
 		i_conname = PQfnumber(res, "conname");
 		i_condeferrable = PQfnumber(res, "condeferrable");
@ -4995,6 +5121,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 			parseOidArray(PQgetvalue(res, j, i_indkey),
 						  indxinfo[j].indkeys, INDEX_MAX_KEYS);
 			indxinfo[j].indisclustered = (PQgetvalue(res, j, i_indisclustered)[0] == 't');
+			indxinfo[j].relpages = atoi(PQgetvalue(res, j, i_relpages));
 			contype = *(PQgetvalue(res, j, i_contype));
 
 			if (contype == 'p' || contype == 'u' || contype == 'x')
@ -12641,7 +12768,7 @@ createViewAsClause(Archive *fout, TableInfo *tbinfo)
 					  tbinfo->dobj.name);
 
 	/* Strip off the trailing semicolon so that other things may follow. */
-	Assert(PQgetvalue(res, 0, 0)[len-1] == ';');
+	Assert(PQgetvalue(res, 0, 0)[len - 1] == ';');
 	appendBinaryPQExpBuffer(result, PQgetvalue(res, 0, 0), len - 1);
 
 	PQclear(res);
@ -12793,9 +12920,10 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo)
 		for (j = 0; j < tbinfo->numatts; j++)
 		{
 			/*
-			 * Normally, dump if it's locally defined in this table, and not
-			 * dropped.  But for binary upgrade, we'll dump all the columns,
-			 * and then fix up the dropped and nonlocal cases below.
+			 * Normally, dump if it's locally defined in this table, and
+			 * not dropped.  But for binary upgrade, we'll dump all the
+			 * columns, and then fix up the dropped and nonlocal cases
+			 * below.
 			 */
 			if (shouldPrintColumn(tbinfo, j))
 			{
@ -12806,8 +12934,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo)
 									   !tbinfo->attrdefs[j]->separate);
 
 				/*
-				 * Not Null constraint --- suppress if inherited, except in
-				 * binary-upgrade case where that won't work.
+				 * Not Null constraint --- suppress if inherited, except
+				 * in binary-upgrade case where that won't work.
 				 */
 				bool		has_notnull = (tbinfo->notnull[j] &&
 										   (!tbinfo->inhNotNull[j] ||
@ -12833,9 +12961,10 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo)
 				if (tbinfo->attisdropped[j])
 				{
 					/*
-					 * ALTER TABLE DROP COLUMN clears pg_attribute.atttypid,
-					 * so we will not have gotten a valid type name; insert
-					 * INTEGER as a stopgap.  We'll clean things up later.
+					 * ALTER TABLE DROP COLUMN clears
+					 * pg_attribute.atttypid, so we will not have gotten a
+					 * valid type name; insert INTEGER as a stopgap.  We'll
+					 * clean things up later.
 					 */
 					appendPQExpBuffer(q, " INTEGER /* dummy */");
 					/* Skip all the rest, too */
@ -12912,8 +13041,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo)
 		else if (!(tbinfo->reloftype && !binary_upgrade))
 		{
 			/*
-			 * We must have a parenthesized attribute list, even though empty,
-			 * when not using the OF TYPE syntax.
+			 * We must have a parenthesized attribute list, even though
+			 * empty, when not using the OF TYPE syntax.
 			 */
 			appendPQExpBuffer(q, " (\n)");
 		}
@ -13853,8 +13982,8 @@ dumpSequence(Archive *fout, TableInfo *tbinfo)
 
 	/*
 	 * If the sequence is owned by a table column, emit the ALTER for it as a
-	 * separate TOC entry immediately following the sequence's own entry.
-	 * It's OK to do this rather than using full sorting logic, because the
+	 * separate TOC entry immediately following the sequence's own entry. It's
+	 * OK to do this rather than using full sorting logic, because the
 	 * dependency that tells us it's owned will have forced the table to be
 	 * created first.  We can't just include the ALTER in the TOC entry
 	 * because it will fail if we haven't reassigned the sequence owner to
@ -14859,9 +14988,9 @@ findDumpableDependencies(ArchiveHandle *AH, DumpableObject *dobj,
 	else
 	{
 		/*
-		 * Object will not be dumped, so recursively consider its deps.
-		 * We rely on the assumption that sortDumpableObjects already
-		 * broke any dependency loops, else we might recurse infinitely.
+		 * Object will not be dumped, so recursively consider its deps. We
+		 * rely on the assumption that sortDumpableObjects already broke
+		 * any dependency loops, else we might recurse infinitely.
 		 */
 		DumpableObject *otherdobj = findObjectByDumpId(depid);
 
@ -14884,22 +15013,21 @@ findDumpableDependencies(ArchiveHandle *AH, DumpableObject *dobj,
  *
  * Whenever the selected schema is not pg_catalog, be careful to qualify
  * references to system catalogs and types in our emitted commands!
+ *
+ * This function is called only from selectSourceSchemaOnAH and
+ * selectSourceSchema.
  */
 static void
 selectSourceSchema(Archive *fout, const char *schemaName)
 {
-	static char *curSchemaName = NULL;
 	PQExpBuffer query;
 
+	/* This is checked by the callers already */
+	Assert(schemaName != NULL && *schemaName != '\0');
+
 	/* Not relevant if fetching from pre-7.3 DB */
 	if (fout->remoteVersion < 70300)
 		return;
-	/* Ignore null schema names */
-	if (schemaName == NULL || *schemaName == '\0')
-		return;
-	/* Optimize away repeated selection of same schema */
-	if (curSchemaName && strcmp(curSchemaName, schemaName) == 0)
-		return;
 
 	query = createPQExpBuffer();
 	appendPQExpBuffer(query, "SET search_path = %s",
@ -14910,9 +15038,6 @@ selectSourceSchema(Archive *fout, const char *schemaName)
 	ExecuteSqlStatement(fout, query->data);
 
 	destroyPQExpBuffer(query);
-	if (curSchemaName)
-		free(curSchemaName);
-	curSchemaName = pg_strdup(schemaName);
 }
 
 /*
@ -15049,34 +15174,6 @@ myFormatType(const char *typname, int32 typmod)
 	return result;
 }
 
-/*
- * fmtQualifiedId - convert a qualified name to the proper format for
- * the source database.
- *
- * Like fmtId, use the result before calling again.
- */
-static const char *
-fmtQualifiedId(Archive *fout, const char *schema, const char *id)
-{
-	static PQExpBuffer id_return = NULL;
-
-	if (id_return)				/* first time through? */
-		resetPQExpBuffer(id_return);
-	else
-		id_return = createPQExpBuffer();
-
-	/* Suppress schema name if fetching from pre-7.3 DB */
-	if (fout->remoteVersion >= 70300 && schema && *schema)
-	{
-		appendPQExpBuffer(id_return, "%s.",
-						  fmtId(schema));
-	}
-	appendPQExpBuffer(id_return, "%s",
-					  fmtId(id));
-
-	return id_return->data;
-}
-
 /*
  * Return a column list clause for the given relation.
  *
@ -15084,37 +15181,31 @@ fmtQualifiedId(Archive *fout, const char *schema, const char *id)
  * "", not an invalid "()" column list.
 */
 static const char *
-fmtCopyColumnList(const TableInfo *ti)
+fmtCopyColumnList(const TableInfo *ti, PQExpBuffer buffer)
 {
-	static PQExpBuffer q = NULL;
 	int			numatts = ti->numatts;
 	char	  **attnames = ti->attnames;
 	bool	   *attisdropped = ti->attisdropped;
 	bool		needComma;
 	int			i;
 
-	if (q)						/* first time through? */
-		resetPQExpBuffer(q);
-	else
-		q = createPQExpBuffer();
-
-	appendPQExpBuffer(q, "(");
+	appendPQExpBuffer(buffer, "(");
 	needComma = false;
 	for (i = 0; i < numatts; i++)
 	{
 		if (attisdropped[i])
 			continue;
 		if (needComma)
-			appendPQExpBuffer(q, ", ");
-		appendPQExpBuffer(q, "%s", fmtId(attnames[i]));
+			appendPQExpBuffer(buffer, ", ");
+		appendPQExpBuffer(buffer, "%s", fmtId(attnames[i]));
 		needComma = true;
 	}
 
 	if (!needComma)
 		return "";				/* no undropped columns */
 
-	appendPQExpBuffer(q, ")");
-	return q->data;
+	appendPQExpBuffer(buffer, ")");
+	return buffer->data;
 }
 
 /*
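The fmtCopyColumnList() rewrite above, like the removal of fmtQualifiedId()'s static buffer, follows one pattern: the caller supplies the scratch PQExpBuffer, so concurrent worker threads on Windows never share hidden static state. A self-contained sketch of the pattern with a hypothetical formatter (builds against libpq's internal pqexpbuffer.h, so compile it inside the source tree):

#include <stdio.h>
#include "pqexpbuffer.h"

/* hypothetical stand-in for fmtCopyColumnList: caller supplies the buffer */
static const char *
fmt_column_list(const char **names, int n, PQExpBuffer buffer)
{
	int			i;

	appendPQExpBuffer(buffer, "(");
	for (i = 0; i < n; i++)
		appendPQExpBuffer(buffer, "%s%s", i > 0 ? ", " : "", names[i]);
	appendPQExpBuffer(buffer, ")");
	return buffer->data;
}

int
main(void)
{
	const char *cols[] = {"id", "payload"};

	/* each caller (and therefore each thread) owns its own buffer */
	PQExpBuffer clistBuf = createPQExpBuffer();

	printf("COPY t %s FROM stdin;\n", fmt_column_list(cols, 2, clistBuf));
	destroyPQExpBuffer(clistBuf);
	return 0;
}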
@ -252,6 +252,7 @@ typedef struct _tableInfo
 	/* these two are set only if table is a sequence owned by a column: */
 	Oid			owning_tab;		/* OID of table owning sequence */
 	int			owning_col;		/* attr # of column owning sequence */
+	int			relpages;
 
 	bool		interesting;	/* true if need to collect more data */
 
@ -315,6 +316,7 @@ typedef struct _indxInfo
 	bool		indisclustered;
 	/* if there is an associated constraint object, its dumpId: */
 	DumpId		indexconstraint;
+	int			relpages;		/* relpages of the underlying table */
 } IndxInfo;
 
 typedef struct _ruleInfo
@ -532,6 +534,7 @@ extern void sortDumpableObjects(DumpableObject **objs, int numObjs,
 					DumpId preBoundaryId, DumpId postBoundaryId);
 extern void sortDumpableObjectsByTypeName(DumpableObject **objs, int numObjs);
 extern void sortDumpableObjectsByTypeOid(DumpableObject **objs, int numObjs);
+extern void sortDataAndIndexObjectsBySize(DumpableObject **objs, int numObjs);
 
 /*
 * version specific routines
@ -143,6 +143,96 @@ static void repairDependencyLoop(DumpableObject **loop,
 static void describeDumpableObject(DumpableObject *obj,
 					   char *buf, int bufsize);
 
+static int	DOSizeCompare(const void *p1, const void *p2);
+
+static int
+findFirstEqualType(DumpableObjectType type, DumpableObject **objs, int numObjs)
+{
+	int			i;
+
+	for (i = 0; i < numObjs; i++)
+		if (objs[i]->objType == type)
+			return i;
+	return -1;
+}
+
+static int
+findFirstDifferentType(DumpableObjectType type, DumpableObject **objs, int numObjs, int start)
+{
+	int			i;
+
+	for (i = start; i < numObjs; i++)
+		if (objs[i]->objType != type)
+			return i;
+	return numObjs - 1;
+}
+
+/*
+ * When we do a parallel dump, we want to start with the largest items first.
+ *
+ * Say we have the objects in this order:
+ * ....DDDDD....III....
+ *
+ * with D = Table data, I = Index, . = other object
+ *
+ * This sorting function now takes each of the D or I blocks and sorts them
+ * according to their size.
+ */
+void
+sortDataAndIndexObjectsBySize(DumpableObject **objs, int numObjs)
+{
+	int			startIdx,
+				endIdx;
+	void	   *startPtr;
+
+	if (numObjs <= 1)
+		return;
+
+	startIdx = findFirstEqualType(DO_TABLE_DATA, objs, numObjs);
+	if (startIdx >= 0)
+	{
+		endIdx = findFirstDifferentType(DO_TABLE_DATA, objs, numObjs, startIdx);
+		startPtr = objs + startIdx;
+		qsort(startPtr, endIdx - startIdx, sizeof(DumpableObject *),
+			  DOSizeCompare);
+	}
+
+	startIdx = findFirstEqualType(DO_INDEX, objs, numObjs);
+	if (startIdx >= 0)
+	{
+		endIdx = findFirstDifferentType(DO_INDEX, objs, numObjs, startIdx);
+		startPtr = objs + startIdx;
+		qsort(startPtr, endIdx - startIdx, sizeof(DumpableObject *),
+			  DOSizeCompare);
+	}
+}
+
+static int
+DOSizeCompare(const void *p1, const void *p2)
+{
+	DumpableObject *obj1 = *(DumpableObject **) p1;
+	DumpableObject *obj2 = *(DumpableObject **) p2;
+	int			obj1_size = 0;
+	int			obj2_size = 0;
+
+	if (obj1->objType == DO_TABLE_DATA)
+		obj1_size = ((TableDataInfo *) obj1)->tdtable->relpages;
+	if (obj1->objType == DO_INDEX)
+		obj1_size = ((IndxInfo *) obj1)->relpages;
+
+	if (obj2->objType == DO_TABLE_DATA)
+		obj2_size = ((TableDataInfo *) obj2)->tdtable->relpages;
+	if (obj2->objType == DO_INDEX)
+		obj2_size = ((IndxInfo *) obj2)->relpages;
+
+	/* we want to see the biggest item go first */
+	if (obj1_size > obj2_size)
+		return -1;
+	if (obj2_size > obj1_size)
+		return 1;
+
+	return 0;
+}
+
 /*
 * Sort the given objects into a type/name-based ordering
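The scheduling idea behind sortDataAndIndexObjectsBySize() above is simply a descending qsort keyed on relpages, so the master hands the biggest jobs out first instead of leaving one huge table for last on an otherwise idle worker pool. A minimal sketch of the same ordering, with plain ints standing in for DumpableObjects:

#include <stdio.h>
#include <stdlib.h>

/* biggest first, mirroring DOSizeCompare */
static int
desc_by_pages(const void *p1, const void *p2)
{
	int			a = *(const int *) p1;
	int			b = *(const int *) p2;

	if (a > b)
		return -1;
	if (b > a)
		return 1;
	return 0;
}

int
main(void)
{
	/* pretend relpages counts for four table-data jobs */
	int			relpages[] = {12, 90000, 7, 3500};
	int			i;

	qsort(relpages, 4, sizeof(int), desc_by_pages);
	for (i = 0; i < 4; i++)
		printf("%d\n", relpages[i]);	/* prints 90000 3500 12 7 */
	return 0;
}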
@ -1857,8 +1857,8 @@ connectDatabase(const char *dbname, const char *connection_string,
 	}
 
 	/*
-	 * Ok, connected successfully. Remember the options used, in the form of
-	 * a connection string.
+	 * Ok, connected successfully. Remember the options used, in the form of a
+	 * connection string.
 	 */
 	connstr = constructConnStr(keywords, values);
 
@ -71,6 +71,7 @@ main(int argc, char **argv)
 	RestoreOptions *opts;
 	int			c;
 	int			exit_code;
+	int			numWorkers = 1;
 	Archive    *AH;
 	char	   *inputFileSpec;
 	static int	disable_triggers = 0;
@ -182,7 +183,7 @@ main(int argc, char **argv)
 				break;
 
 			case 'j':			/* number of restore jobs */
-				opts->number_of_jobs = atoi(optarg);
+				numWorkers = atoi(optarg);
 				break;
 
 			case 'l':			/* Dump the TOC summary */
@ -313,7 +314,7 @@ main(int argc, char **argv)
 	}
 
 	/* Can't do single-txn mode with multiple connections */
-	if (opts->single_txn && opts->number_of_jobs > 1)
+	if (opts->single_txn && numWorkers > 1)
 	{
 		fprintf(stderr, _("%s: cannot specify both --single-transaction and multiple jobs\n"),
 				progname);
@ -372,6 +373,18 @@ main(int argc, char **argv)
 	if (opts->tocFile)
 		SortTocFromFile(AH, opts);
 
+	/* See comments in pg_dump.c */
+#ifdef WIN32
+	if (numWorkers > MAXIMUM_WAIT_OBJECTS)
+	{
+		fprintf(stderr, _("%s: maximum number of parallel jobs is %d\n"),
+				progname, MAXIMUM_WAIT_OBJECTS);
+		exit(1);
+	}
+#endif
+
+	AH->numWorkers = numWorkers;
+
 	if (opts->tocSummary)
 		PrintTOCSummary(AH, opts);
 	else
@ -395,6 +395,7 @@ sub mkvcbuild
 	$psql->AddIncludeDir('src\bin\pg_dump');
 	$psql->AddIncludeDir('src\backend');
 	$psql->AddFile('src\bin\psql\psqlscan.l');
+	$psql->AddLibrary('ws2_32.lib');
 
 	my $pgdump = AddSimpleFrontend('pg_dump', 1);
 	$pgdump->AddIncludeDir('src\backend');
@ -403,6 +404,7 @@ sub mkvcbuild
 	$pgdump->AddFile('src\bin\pg_dump\pg_dump_sort.c');
 	$pgdump->AddFile('src\bin\pg_dump\keywords.c');
 	$pgdump->AddFile('src\backend\parser\kwlookup.c');
+	$pgdump->AddLibrary('ws2_32.lib');
 
 	my $pgdumpall = AddSimpleFrontend('pg_dump', 1);
 
@ -419,6 +421,7 @@ sub mkvcbuild
 	$pgdumpall->AddFile('src\bin\pg_dump\dumputils.c');
 	$pgdumpall->AddFile('src\bin\pg_dump\keywords.c');
 	$pgdumpall->AddFile('src\backend\parser\kwlookup.c');
+	$pgdumpall->AddLibrary('ws2_32.lib');
 
 	my $pgrestore = AddSimpleFrontend('pg_dump', 1);
 	$pgrestore->{name} = 'pg_restore';
@ -426,6 +429,7 @@ sub mkvcbuild
 	$pgrestore->AddFile('src\bin\pg_dump\pg_restore.c');
 	$pgrestore->AddFile('src\bin\pg_dump\keywords.c');
 	$pgrestore->AddFile('src\backend\parser\kwlookup.c');
+	$pgrestore->AddLibrary('ws2_32.lib');
 
 	my $zic = $solution->AddProject('zic', 'exe', 'utils');
 	$zic->AddFiles('src\timezone', 'zic.c', 'ialloc.c', 'scheck.c',
@ -572,6 +576,7 @@ sub mkvcbuild
 		$proj->AddIncludeDir('src\bin\psql');
 		$proj->AddReference($libpq, $libpgport, $libpgcommon);
 		$proj->AddResourceFile('src\bin\scripts', 'PostgreSQL Utility');
+		$proj->AddLibrary('ws2_32.lib');
 	}
 
 	# Regression DLL and EXE