Switch the regression tests of pg_upgrade to use TAP tests

This simplifies a lot of code in the tests of pg_upgrade without
sacrificing its coverage:
- Removal of test.sh, used for builds with make, which had accumulated
over the years tweaks for problems that the centralized TAP framework
already solves in one place (initialization of the various PG*
environment variables, port selection).
- Removal of the code in MSVC to test pg_upgrade.  This was roughly a
duplicate of test.sh adapted for Windows, with the extra footprint of
a pg_regress command and all the assumptions behind it.

Support for upgrades from older versions is changed, not removed.
test.sh was able to set up the regression database on the old instance
by running the pg_regress command itself, which required a dependency
on the source tree of the old cluster and tweaks to the command
arguments to adapt across the versions used.  This created a
backward-compatibility dependency on older pg_regress commands, and
recent changes like d1029bb have made that much more complicated.

Instead, this commit allows tests with older major versions by
specifying a path to a SQL dump (taken with pg_dumpall from the old
cluster's installation) that is loaded into the old instance to
upgrade, instead of running pg_regress, through an optional environment
variable called $olddump.  This requires a second variable called
$oldinstall pointing to the base path of the installation of the old
cluster.  This method is more in line with the buildfarm client, which
uses a set of static dumps to set up an old instance, so hopefully we
will be able to reuse what is introduced in this commit there.  The
last step of the tests, which checks for differences between the two
dumps taken, still needs to be improved as it can fail, requiring a
manual look at the dumps.  This is no different from the old way of
testing, where things could also fail at the last step.
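
For illustration, a minimal sketch of a cross-version run using these
variables; the paths and the old version below are hypothetical:

    export olddump=/path/to/olddump.sql     # dump taken from the old cluster
    export oldinstall=/usr/local/pgsql-13   # base path of the old installation
    make -C src/bin/pg_upgrade check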

Support for EXTRA_REGRESS_OPTS is kept.  vcregress.pl in the MSVC
scripts still handles the test of pg_upgrade with its upgradecheck, and
bincheck is changed to skip pg_upgrade.
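
A hedged example of how these still apply (the pg_regress option value
is only illustrative):

    # make-based builds: extra pg_regress options are passed down as before
    make -C src/bin/pg_upgrade check EXTRA_REGRESS_OPTS="--max-connections=5"
    # MSVC builds, from src/tools/msvc
    vcregress upgradecheck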

Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Andres Freund, Rachel Heaton, Tom Lane
Discussion: https://postgr.es/m/YJ8xTmLQkotVLpN5@paquier.xyz
Michael Paquier 2022-04-01 10:13:50 +09:00
parent fb691bbb4c
commit 322becb608
6 changed files with 288 additions and 434 deletions

src/bin/pg_upgrade/Makefile

@@ -28,6 +28,10 @@ OBJS = \
override CPPFLAGS := -I$(srcdir) -I$(libpq_srcdir) $(CPPFLAGS)
LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
# required for 002_pg_upgrade.pl
REGRESS_SHLIB=$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)
export REGRESS_SHLIB
all: pg_upgrade
pg_upgrade: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
@@ -47,17 +51,8 @@ clean distclean maintainer-clean:
rm -rf delete_old_cluster.sh log/ tmp_check/ \
reindex_hash.sql
# When $(MAKE) is present, make automatically infers that this is a
# recursive make, which is not actually what we want here, as that
# e.g. prevents output synchronization from working (as make thinks
# that the subsidiary make knows how to deal with that itself, but
# we're invoking a shell script that doesn't know). Referencing
# $(MAKE) indirectly avoids that behaviour.
# See https://www.gnu.org/software/make/manual/html_node/MAKE-Variable.html#MAKE-Variable
NOTSUBMAKEMAKE=$(MAKE)
check:
$(prove_check)
check: test.sh all temp-install
MAKE=$(NOTSUBMAKEMAKE) $(with_temp_install) bindir=$(abs_top_builddir)/tmp_install/$(bindir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $<
# installcheck is not supported because there's no meaningful way to test
# pg_upgrade against a single already-running server
installcheck:
$(prove_installcheck)

src/bin/pg_upgrade/TESTING

@@ -2,25 +2,22 @@ THE SHORT VERSION
-----------------
On non-Windows machines, you can execute the testing process
described below by running
described below by running the following command in this directory:
make check
in this directory. This will run the shell script test.sh, performing
an upgrade from the version in this source tree to a new instance of
the same version.
To test an upgrade from a different version, you must have a built
source tree for the old version as well as this version, and you
must have done "make install" for both versions. Then do:
This will run the TAP tests of pg_upgrade, performing an upgrade
from the version in this source tree to a new instance of the same
version.
export oldsrc=...somewhere/postgresql (old version's source tree)
export oldbindir=...otherversion/bin (old version's installed bin dir)
export bindir=...thisversion/bin (this version's installed bin dir)
export libdir=...thisversion/lib (this version's installed lib dir)
sh test.sh
In this case, you will have to manually eyeball the resulting dump
diff for version-specific differences, as explained below.
Testing an upgrade from a different version requires a dump to set up
the contents of the old instance, together with that version's set of
binaries.  The following
variables are available to control the test (see DETAILS below about
the creation of the dump):
export olddump=...somewhere/dump.sql (old version's dump)
export oldinstall=...otherversion/ (old version's install base path)
Finally, the tests can be done by running
make check
DETAILS
-------
@@ -29,51 +26,22 @@ The most effective way to test pg_upgrade, aside from testing on user
data, is by upgrading the PostgreSQL regression database.
This testing process first requires the creation of a valid regression
database dump. Such files contain most database features and are
specific to each major version of Postgres.
database dump that can then be used for $olddump.  Such files contain
most database features and are specific to each major version of Postgres.
Here are the steps needed to create a regression database dump file:
Here are the steps needed to create a dump file:
1) Create and populate the regression database in the old cluster.
This database can be created by running 'make installcheck' from
src/test/regress.
src/test/regress using its source code tree.
2) Use pg_dump to dump out the regression database. Use the new
cluster's pg_dump on the old database to minimize whitespace
differences in the diff.
2) Use pg_dumpall to dump out the contents of the instance, including the
regression database, in the shape of a SQL file.  This requires the *old*
cluster's pg_dumpall, so that the dump created is compatible with the
version of the cluster it will be loaded into.
3) Adjust the regression database dump file
a) Perform the load/dump twice
This fixes problems with the ordering of COPY columns for
inherited tables.
b) Change CREATE FUNCTION shared object paths to use '$libdir'
The old and new cluster will have different shared object paths.
c) Fix any wrapping format differences
Commands like CREATE TRIGGER and ALTER TABLE sometimes have
differences.
Once the dump is created, it can be repeatedly loaded into the old
database, upgraded, and dumped out of the new database, and then
compared to the original version. To test the dump file, perform these
steps:
1) Create the old and new clusters in different directories.
2) Copy the regression shared object files into the appropriate /lib
directory for old and new clusters.
3) Create the regression database in the old server.
4) Load the dump file created above into the regression database;
check for errors while loading.
5) Upgrade the old database to the new major version, as outlined in
the pg_upgrade manual section.
6) Use pg_dump to dump out the regression database in the new cluster.
7) Diff the regression database dump file with the regression dump
file loaded into the old server.
Once the dump is created, it can be used repeatedly with $olddump and
`make check`, which automates the dump of the old database, its upgrade,
the dump of the new database, and the comparison of the dumps between
the old and new databases.  The contents of the dumps can also be
compared manually.
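
Put together, a hedged sketch of the flow described above; the paths and
the old version number are hypothetical:

    # 1) populate the regression database, from the old version's source tree,
    #    with the old server running
    make -C src/test/regress installcheck
    # 2) dump the whole old instance with the *old* version's pg_dumpall
    /usr/local/pgsql-13/bin/pg_dumpall -f ~/olddump.sql
    # later, reuse the dump against this source tree (from src/bin/pg_upgrade)
    olddump=~/olddump.sql oldinstall=/usr/local/pgsql-13 make check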

src/bin/pg_upgrade/t/001_basic.pl

@@ -0,0 +1,11 @@
use strict;
use warnings;
use PostgreSQL::Test::Utils;
use Test::More;
program_help_ok('pg_upgrade');
program_version_ok('pg_upgrade');
program_options_handling_ok('pg_upgrade');
done_testing();

src/bin/pg_upgrade/t/002_pg_upgrade.pl

@@ -0,0 +1,237 @@
# Set of tests for pg_upgrade, including cross-version checks.
use strict;
use warnings;
use Cwd qw(abs_path getcwd);
use File::Basename qw(dirname);
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use Test::More;
# Generate a database with a name made of a range of ASCII characters.
sub generate_db
{
my ($node, $from_char, $to_char) = @_;
my $dbname = '';
for my $i ($from_char .. $to_char)
{
next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
$dbname = $dbname . sprintf('%c', $i);
}
$node->run_log(
[ 'createdb', '--host', $node->host, '--port', $node->port, $dbname ]
);
}
# The test of pg_upgrade requires two clusters, an old one and a new one
# that gets upgraded. Before running the upgrade, a logical dump of the
# old cluster is taken, and a second logical dump of the new one is taken
# after the upgrade. The upgrade test passes if there are no differences
# in these two dumps.
# Testing upgrades from an older version of PostgreSQL requires setting up
# two environment variables, namely:
# - "olddump", to point to a dump file that will be used to set up the old
# instance to upgrade from.
# - "oldinstall", to point to the installation path of the old cluster.
if ( (defined($ENV{olddump}) && !defined($ENV{oldinstall}))
|| (!defined($ENV{olddump}) && defined($ENV{oldinstall})))
{
# Not all variables are defined, so leave and die if test is
# done with an older installation.
die "olddump or oldinstall is undefined";
}
# Temporary location for the dumps taken
my $tempdir = PostgreSQL::Test::Utils::tempdir;
# Initialize node to upgrade
my $oldnode = PostgreSQL::Test::Cluster->new('old_node',
install_path => $ENV{oldinstall});
# To increase coverage of non-standard segment size and group access without
# increasing test runtime, run these tests with a custom setting.
# --allow-group-access and --wal-segsize have been added in v11.
$oldnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);
$oldnode->start;
# The default location of the source code is the root of this directory.
my $srcdir = abs_path("../../..");
# Set up the data of the old instance with a dump or pg_regress.
if (defined($ENV{olddump}))
{
# Use the dump specified.
my $olddumpfile = $ENV{olddump};
die "no dump file found!" unless -e $olddumpfile;
# Load the dump using the "postgres" database as "regression" does
# not exist yet, and we are done here.
$oldnode->command_ok(
[
'psql', '-X', '-f', $olddumpfile,
'--port', $oldnode->port, '--host', $oldnode->host,
'postgres'
]);
}
else
{
# Default is to use pg_regress to set up the old instance.
# Create databases with names covering most ASCII bytes
generate_db($oldnode, 1, 45);
generate_db($oldnode, 46, 90);
generate_db($oldnode, 91, 127);
# Grab any regression options that may be passed down by caller.
my $extra_opts_val = $ENV{EXTRA_REGRESS_OPTS} || "";
my @extra_opts = split(/\s+/, $extra_opts_val);
# --dlpath is needed to be able to find the location of regress.so
# and any libraries the regression tests require.
my $dlpath = dirname($ENV{REGRESS_SHLIB});
# --outputdir points to the path where the output files are placed.
my $outputdir = $PostgreSQL::Test::Utils::tmp_check;
# --inputdir points to the path of the input files.
my $inputdir = "$srcdir/src/test/regress";
my @regress_command = [
$ENV{PG_REGRESS}, @extra_opts,
'--dlpath', $dlpath,
'--max-concurrent-tests', '20',
'--bindir=', '--host',
$oldnode->host, '--port',
$oldnode->port, '--schedule',
"$srcdir/src/test/regress/parallel_schedule", '--outputdir',
$outputdir, '--inputdir',
$inputdir
];
$oldnode->command_ok(@regress_command,
'regression test run on old instance');
}
# Before dumping, get rid of objects that do not exist or are not supported
# in later versions.  This depends on the version of the old server used, and matters
# only if different major versions are used for the dump.
if (defined($ENV{oldinstall}))
{
# Note that upgrade_adapt.sql from the new version is used, to
# cope with an upgrade to this version.
$oldnode->run_log(
[
'psql', '-X',
'-f', "$srcdir/src/bin/pg_upgrade/upgrade_adapt.sql",
'--port', $oldnode->port,
'--host', $oldnode->host,
'regression'
]);
}
# Initialize a new node for the upgrade.
my $newnode = PostgreSQL::Test::Cluster->new('new_node');
$newnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);
my $newbindir = $newnode->config_data('--bindir');
my $oldbindir = $oldnode->config_data('--bindir');
# Take a dump before performing the upgrade as a base comparison. Note
# that we need to use pg_dumpall from the new node here.
$newnode->command_ok(
[
'pg_dumpall', '--no-sync',
'-d', $oldnode->connstr('postgres'),
'-f', "$tempdir/dump1.sql"
],
'dump before running pg_upgrade');
# After dumping, update references to the old source tree's regress.so
# to point to the new tree.
if (defined($ENV{oldinstall}))
{
# First, fetch all the references to libraries that are not part
# of the default path $libdir.
my $output = $oldnode->safe_psql('regression',
"SELECT DISTINCT probin::text FROM pg_proc WHERE probin NOT LIKE '\$libdir%';"
);
chomp($output);
my @libpaths = split("\n", $output);
my $dump_data = slurp_file("$tempdir/dump1.sql");
my $newregresssrc = "$srcdir/src/test/regress";
foreach (@libpaths)
{
my $libpath = $_;
$libpath = dirname($libpath);
$dump_data =~ s/$libpath/$newregresssrc/g;
}
open my $fh, ">", "$tempdir/dump1.sql" or die "could not open dump file";
print $fh $dump_data;
close $fh;
# This replaces any references to the old tree's regress.so
# with the new tree's regress.so.  Any references that do *not*
# match $libdir are switched so that this query does not
# depend on the path of the old source tree.  This is useful
# when using an old dump.  Do the operation on all the databases
# that allow connections, so that this includes the regression
# database and anything the user has set up.
$output = $oldnode->safe_psql('postgres',
"SELECT datname FROM pg_database WHERE datallowconn;");
chomp($output);
my @datnames = split("\n", $output);
foreach (@datnames)
{
my $datname = $_;
$oldnode->safe_psql(
$datname, "UPDATE pg_proc SET probin =
regexp_replace(probin, '.*/', '$newregresssrc/')
WHERE probin NOT LIKE '\$libdir/%'");
}
}
# Upgrade the instance.
$oldnode->stop;
command_ok(
[
'pg_upgrade', '--no-sync', '-d', $oldnode->data_dir,
'-D', $newnode->data_dir, '-b', $oldbindir,
'-B', $newbindir, '-p', $oldnode->port,
'-P', $newnode->port
],
'run of pg_upgrade for new instance');
$newnode->start;
# Check if there are any logs coming from pg_upgrade, which would only be
# retained on failure.
my $log_path = $newnode->data_dir . "/pg_upgrade_output.d/log";
if (-d $log_path)
{
foreach my $log (glob("$log_path/*"))
{
note "###########################";
note "Contents of log file $log";
note "###########################";
my $log_contents = slurp_file($log);
print "$log_contents\n";
}
}
# Second dump from the upgraded instance.
$newnode->run_log(
[
'pg_dumpall', '--no-sync',
'-d', $newnode->connstr('postgres'),
'-f', "$tempdir/dump2.sql"
]);
# Compare the two dumps; there should be no differences.
command_ok([ 'diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql" ],
'old and new dump match after pg_upgrade');
done_testing();
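
As a usage note (not part of the committed file), a run of just this TAP
test can be requested through the usual PROVE_TESTS machinery of the
makefiles, assuming a built source tree:

    make -C src/bin/pg_upgrade check PROVE_TESTS='t/002_pg_upgrade.pl'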

src/bin/pg_upgrade/test.sh

@@ -1,279 +0,0 @@
#!/bin/sh
# src/bin/pg_upgrade/test.sh
#
# Test driver for pg_upgrade. Initializes a new database cluster,
# runs the regression tests (to put in some data), runs pg_dumpall,
# runs pg_upgrade, runs pg_dumpall again, compares the dumps.
#
# Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
# Portions Copyright (c) 1994, Regents of the University of California
set -e
: ${MAKE=make}
# Guard against parallel make issues (see comments in pg_regress.c)
unset MAKEFLAGS
unset MAKELEVEL
# Run a given "initdb" binary and overlay the regression testing
# authentication configuration.
standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
# --allow-group-access and --wal-segsize have been added in v11.
"$1" -N --wal-segsize 1 --allow-group-access -A trust
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
fi
../../test/regress/pg_regress --config-auth "$PGDATA"
}
# What flavor of host are we on?
# Treat MINGW* (msys1) and MSYS* (msys2) the same.
testhost=`uname -s | sed 's/^MSYS/MINGW/'`
# Establish how the server will listen for connections
case $testhost in
MINGW*)
LISTEN_ADDRESSES="localhost"
PG_REGRESS_SOCKET_DIR=""
PGHOST=localhost
;;
*)
LISTEN_ADDRESSES=""
# Select a socket directory. The algorithm is from the "configure"
# script; the outcome mimics pg_regress.c:make_temp_sockdir().
if [ x"$PG_REGRESS_SOCKET_DIR" = x ]; then
set +e
dir=`(umask 077 &&
mktemp -d /tmp/pg_upgrade_check-XXXXXX) 2>/dev/null`
if [ ! -d "$dir" ]; then
dir=/tmp/pg_upgrade_check-$$-$RANDOM
(umask 077 && mkdir "$dir")
if [ ! -d "$dir" ]; then
echo "could not create socket temporary directory in \"/tmp\""
exit 1
fi
fi
set -e
PG_REGRESS_SOCKET_DIR=$dir
trap 'rm -rf "$PG_REGRESS_SOCKET_DIR"' 0
trap 'exit 3' 1 2 13 15
fi
PGHOST=$PG_REGRESS_SOCKET_DIR
;;
esac
POSTMASTER_OPTS="-F -c listen_addresses=\"$LISTEN_ADDRESSES\" -k \"$PG_REGRESS_SOCKET_DIR\""
export PGHOST
# don't rely on $PWD here, as old shells don't set it
temp_root=`pwd`/tmp_check
rm -rf "$temp_root"
mkdir "$temp_root"
: ${oldbindir=$bindir}
: ${oldsrc=../../..}
oldsrc=`cd "$oldsrc" && pwd`
newsrc=`cd ../../.. && pwd`
# We need to make pg_regress use psql from the desired installation
# (likely a temporary one), because otherwise the installcheck run
# below would try to use psql from the proper installation directory
# of the target version, which might be outdated or not exist. But
# don't override anything else that's already in EXTRA_REGRESS_OPTS.
EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --bindir='$oldbindir'"
export EXTRA_REGRESS_OPTS
# While in normal cases this will already be set up, adding bindir to
# path allows test.sh to be invoked with different versions as
# described in ./TESTING
PATH=$bindir:$PATH
export PATH
BASE_PGDATA="$temp_root/data"
PGDATA="${BASE_PGDATA}.old"
export PGDATA
# Send installcheck outputs to a private directory. This avoids conflict when
# check-world runs pg_upgrade check concurrently with src/test/regress check.
# To retrieve interesting files after a run, use pattern tmp_check/*/*.diffs.
outputdir="$temp_root/regress"
EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --outputdir=$outputdir"
export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
# pg_regress --make-tablespacedir would take care of that in 14~, but this is
# still required for older versions where this option is not supported.
if [ "$newsrc" != "$oldsrc" ]; then
mkdir "$outputdir"/testtablespace
mkdir "$outputdir"/sql
mkdir "$outputdir"/expected
fi
logdir=`pwd`/log
rm -rf "$logdir"
mkdir "$logdir"
# Clear out any environment vars that might cause libpq to connect to
# the wrong postmaster (cf pg_regress.c)
#
# Some shells, such as NetBSD's, return non-zero from unset if the variable
# is already unset. Since we are operating under 'set -e', this causes the
# script to fail. To guard against this, set them all to an empty string first.
PGDATABASE=""; unset PGDATABASE
PGUSER=""; unset PGUSER
PGSERVICE=""; unset PGSERVICE
PGSSLMODE=""; unset PGSSLMODE
PGREQUIRESSL=""; unset PGREQUIRESSL
PGCONNECT_TIMEOUT=""; unset PGCONNECT_TIMEOUT
PGHOSTADDR=""; unset PGHOSTADDR
# Select a non-conflicting port number, similarly to pg_regress.c
PG_VERSION_NUM=`grep '#define PG_VERSION_NUM' "$newsrc"/src/include/pg_config.h | awk '{print $3}'`
PGPORT=`expr $PG_VERSION_NUM % 16384 + 49152`
export PGPORT
i=0
while psql -X postgres </dev/null 2>/dev/null
do
i=`expr $i + 1`
if [ $i -eq 16 ]
then
echo port $PGPORT apparently in use
exit 1
fi
PGPORT=`expr $PGPORT + 1`
export PGPORT
done
# buildfarm may try to override port via EXTRA_REGRESS_OPTS ...
EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --port=$PGPORT"
export EXTRA_REGRESS_OPTS
standard_initdb "$oldbindir"/initdb
"$oldbindir"/pg_ctl start -l "$logdir/postmaster1.log" -o "$POSTMASTER_OPTS" -w
# Create databases with names covering the ASCII bytes other than NUL, BEL,
# LF, or CR. BEL would ring the terminal bell in the course of this test, and
# it is not otherwise a special case. PostgreSQL doesn't support the rest.
dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)
if (i != 7 && i != 10 && i != 13) printf "%c", i }' </dev/null`
# Exercise backslashes adjacent to double quotes, a Windows special case.
dbname1='\"\'$dbname1'\\"\\\'
dbname2=`awk 'BEGIN { for (i = 46; i < 91; i++) printf "%c", i }' </dev/null`
dbname3=`awk 'BEGIN { for (i = 91; i < 128; i++) printf "%c", i }' </dev/null`
createdb "regression$dbname1" || createdb_status=$?
createdb "regression$dbname2" || createdb_status=$?
createdb "regression$dbname3" || createdb_status=$?
# Extra options to apply to the dump. This may be changed later.
extra_dump_options=""
if "$MAKE" -C "$oldsrc" installcheck-parallel; then
oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
# Before dumping, tweak the database of the old instance depending
# on its version.
if [ "$newsrc" != "$oldsrc" ]; then
# This SQL script has its own idea of the cleanup that needs to be
# done on the cluster to-be-upgraded, and includes version checks.
# Note that this uses the script stored on the new branch.
psql -X -d regression -f "$newsrc/src/bin/pg_upgrade/upgrade_adapt.sql" \
|| psql_fix_sql_status=$?
# Handling of --extra-float-digits gets messy after v12.
# Note that this changes the dumps from the old and new
# instances if involving an old cluster of v11 or older.
if [ $oldpgversion -lt 120000 ]; then
extra_dump_options="--extra-float-digits=0"
fi
fi
pg_dumpall $extra_dump_options --no-sync \
-f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
fix_sql=""
case $oldpgversion in
*)
fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE '$oldsrc%';"
;;
esac
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
mv "$temp_root"/dump1.sql "$temp_root"/dump1.sql.orig
sed "s;$oldsrc;$newsrc;g" "$temp_root"/dump1.sql.orig >"$temp_root"/dump1.sql
fi
else
make_installcheck_status=$?
fi
"$oldbindir"/pg_ctl -m fast stop
if [ -n "$createdb_status" ]; then
exit 1
fi
if [ -n "$make_installcheck_status" ]; then
exit 1
fi
if [ -n "$psql_fix_sql_status" ]; then
exit 1
fi
if [ -n "$pg_dumpall1_status" ]; then
echo "pg_dumpall of pre-upgrade database cluster failed"
exit 1
fi
PGDATA="$BASE_PGDATA"
standard_initdb 'initdb'
pg_upgrade $PG_UPGRADE_OPTS --no-sync -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "$PGPORT" -P "$PGPORT"
# make sure all directories and files have group permissions, on Unix hosts
# Windows hosts don't support Unix-y permissions.
case $testhost in
MINGW*|CYGWIN*) ;;
*) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
echo "files in PGDATA with permission != 640";
exit 1;
fi ;;
esac
case $testhost in
MINGW*|CYGWIN*) ;;
*) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
echo "directories in PGDATA with permission != 750";
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
pg_dumpall $extra_dump_options --no-sync \
-f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
echo "pg_dumpall of post-upgrade database cluster failed"
exit 1
fi
case $testhost in
MINGW*) MSYS2_ARG_CONV_EXCL=/c cmd /c delete_old_cluster.bat ;;
*) sh ./delete_old_cluster.sh ;;
esac
if diff "$temp_root"/dump1.sql "$temp_root"/dump2.sql >/dev/null; then
echo PASSED
exit 0
else
echo "Files $temp_root/dump1.sql and $temp_root/dump2.sql differ"
echo "dumps were not identical"
exit 1
fi

src/tools/msvc/vcregress.pl

@@ -286,6 +286,10 @@ sub bincheck
foreach my $dir (@bin_dirs)
{
next unless -d "$dir/t";
# Do not consider pg_upgrade, as it is handled by
# upgradecheck.
next if ($dir =~ "/pg_upgrade/");
my $status = tap_check($dir);
$mstat ||= $status;
}
@@ -516,91 +520,9 @@ sub generate_db
sub upgradecheck
{
my $status;
my $cwd = getcwd();
# Much of this comes from the pg_upgrade test.sh script,
# but it only covers the --install case, and not the case
# where the old and new source or bin dirs are different.
# i.e. only this version to this version check. That's
# what pg_upgrade's "make check" does.
$ENV{PGHOST} = 'localhost';
$ENV{PGPORT} ||= 50432;
my $tmp_root = "$topdir/src/bin/pg_upgrade/tmp_check";
rmtree($tmp_root);
mkdir $tmp_root || die $!;
my $upg_tmp_install = "$tmp_root/install"; # unshared temp install
print "Setting up temp install\n\n";
Install($upg_tmp_install, "all", $config);
# Install does a chdir, so change back after that
chdir $cwd;
my ($bindir, $libdir, $oldsrc, $newsrc) =
("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
$ENV{PATH} = "$bindir;$ENV{PATH}";
my $data = "$tmp_root/data";
$ENV{PGDATA} = "$data.old";
my $outputdir = "$tmp_root/regress";
my @EXTRA_REGRESS_OPTS = ("--outputdir=$outputdir");
mkdir "$outputdir" || die $!;
my $logdir = "$topdir/src/bin/pg_upgrade/log";
rmtree($logdir);
mkdir $logdir || die $!;
print "\nRunning initdb on old cluster\n\n";
standard_initdb() or exit 1;
print "\nStarting old cluster\n\n";
my @args = ('pg_ctl', 'start', '-l', "$logdir/postmaster1.log");
system(@args) == 0 or exit 1;
print "\nCreating databases with names covering most ASCII bytes\n\n";
generate_db("\\\"\\", 1, 45, "\\\\\"\\\\\\");
generate_db('', 46, 90, '');
generate_db('', 91, 127, '');
print "\nSetting up data for upgrading\n\n";
installcheck_internal('parallel', @EXTRA_REGRESS_OPTS);
# now we can chdir into the source dir
chdir "$topdir/src/bin/pg_upgrade";
print "\nDumping old cluster\n\n";
@args = ('pg_dumpall', '-f', "$tmp_root/dump1.sql");
system(@args) == 0 or exit 1;
print "\nStopping old cluster\n\n";
system("pg_ctl stop") == 0 or exit 1;
$ENV{PGDATA} = "$data";
print "\nSetting up new cluster\n\n";
standard_initdb() or exit 1;
print "\nRunning pg_upgrade\n\n";
@args = (
'pg_upgrade', '-d', "$data.old", '-D', $data, '-b', $bindir,
'--no-sync');
system(@args) == 0 or exit 1;
print "\nStarting new cluster\n\n";
@args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
system(@args) == 0 or exit 1;
print "\nDumping new cluster\n\n";
@args = ('pg_dumpall', '-f', "$tmp_root/dump2.sql");
system(@args) == 0 or exit 1;
print "\nStopping new cluster\n\n";
system("pg_ctl stop") == 0 or exit 1;
print "\nDeleting old cluster\n\n";
system(".\\delete_old_cluster.bat") == 0 or exit 1;
print "\nComparing old and new cluster dumps\n\n";
@args = ('diff', '-q', "$tmp_root/dump1.sql", "$tmp_root/dump2.sql");
system(@args);
$status = $?;
if (!$status)
{
print "PASSED\n";
}
else
{
print "dumps not identical!\n";
exit(1);
}
InstallTemp();
my $mstat = tap_check("$topdir/src/bin/pg_upgrade");
exit $mstat if $mstat;
return;
}