mirror of https://github.com/postgres/postgres
README.pg_dumplo
How to use pg_dumplo?
=====================

(c) 2000, Pavel Janík ml. <Pavel.Janik@linux.cz>


Q: How do you use pg_dumplo?
============================

A: This is a small demo of backing up a database table that references
Large Objects.

First we create a demo database and a small (and otherwise useless)
table `lo' inside it:

    SnowWhite:$ createdb test
    CREATE DATABASE

Our database named `test' now exists. Next we create the demo table;
it has a single column named `id' that holds the oid of a Large
Object:

    SnowWhite:$ psql test
    Welcome to psql, the PostgreSQL interactive terminal.

    Type:  \copyright for distribution terms
           \h for help with SQL commands
           \? for help on internal slash commands
           \g or terminate with semicolon to execute query
           \q to quit

    test=# CREATE TABLE lo (id oid);
    CREATE
    test=# \lo_import /etc/aliases
    lo_import 19338
    test=# INSERT INTO lo VALUES (19338);
    INSERT 19352 1
    test=# SELECT * FROM lo;
      id
    -------
     19338
    (1 row)

    test=# \q

In the session above we also imported one Large Object -- the file
/etc/aliases. It was assigned the oid 19338, so we inserted that oid
into the `id' column of the table `lo'. The final SELECT shows that
the table contains one row.

Now we can demonstrate pg_dumplo itself. We create a directory
(/tmp/dump) that will hold the whole large-object dump:

    SnowWhite:$ mkdir -p /tmp/dump

Then we dump all large objects from the database `test' whose oids
are stored in the column `id' of the table `lo':

    SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id
    pg_dumplo: dump lo.id (1 large obj)

Voila, we have a dump of all Large Objects in our directory:

    SnowWhite:$ tree /tmp/dump/
    /tmp/dump/
    `-- test
        |-- lo
        |   `-- id
        |       `-- 19338
        `-- lo_dump.index

    3 directories, 2 files
    SnowWhite:$

Isn't that nice? Yes, but we are only halfway. We should also be able
to recreate the contents of the table `lo' and the Large Objects in
the database if something goes wrong.
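The tree output above suggests the on-disk layout: one subdirectory per
database, table, and column, one file per large-object oid, and a
lo_dump.index file at the database level. As a hedged illustration, this
sketch recreates that layout by hand; the exact contents of the index
file are an assumption based on the demo output, not taken from
pg_dumplo's source.

```shell
# Recreate, by hand, the dump layout shown by `tree` above.
# Assumption: paths and index format are inferred from the demo output.
DUMPDIR=$(mktemp -d)

# <dumpdir>/<database>/<table>/<column>/
mkdir -p "$DUMPDIR/test/lo/id"

# The large object's data lives in a file named after its oid.
printf 'dummy large object data\n' > "$DUMPDIR/test/lo/id/19338"

# One index file per database, listing oid, table, column, and path
# (tab-separated fields guessed from pg_dumplo's import output).
printf '19338\tlo\tid\ttest/lo/id/19338\n' > "$DUMPDIR/test/lo_dump.index"

ls -R "$DUMPDIR"
```

Laying the data out one-file-per-oid like this is what lets a plain
`tree` (or `ls -R`) double as a quick sanity check of the backup.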
Recovery is very easy. We demonstrate it by dropping the database and
recreating it from scratch with pg_dumplo:

    SnowWhite:$ dropdb test
    DROP DATABASE
    SnowWhite:$ createdb test
    CREATE DATABASE

Our database named `test' exists again. We also recreate the table
`lo':

    SnowWhite:$ psql test
    Welcome to psql, the PostgreSQL interactive terminal.

    Type:  \copyright for distribution terms
           \h for help with SQL commands
           \? for help on internal slash commands
           \g or terminate with semicolon to execute query
           \q to quit

    test=# CREATE TABLE lo (id oid);
    CREATE
    test=# \q
    SnowWhite:$

The database and the table `lo' now exist again, but they hold no
data. However, we still have the complete dump of the Large Object
data, so we can recreate the contents of the whole database from the
directory /tmp/dump:

    SnowWhite:$ pg_dumplo -s /tmp/dump -d test -i
    19338	lo	id	test/lo/id/19338
    SnowWhite:$

And that is everything.

Summary: In this small example we have shown that pg_dumplo can be
used to dump and restore a database's Large Objects very easily.
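The whole backup/restore cycle above can be wrapped in a small script.
This is only a sketch: it assumes createdb, dropdb, psql, and pg_dumplo
are on the PATH and that a server is reachable, so it is written as a
dry run that merely prints the commands it would execute (clear the
`run=echo` line to actually run them).

```shell
# Dry-run sketch of the pg_dumplo backup/restore cycle shown above.
# Assumptions: pg_dumplo, createdb, dropdb, and psql are on the PATH
# and a PostgreSQL server is running. Set run= (empty) to execute
# for real instead of echoing.
run=echo
DB=test
DUMPDIR=/tmp/dump

backup() {
    $run mkdir -p "$DUMPDIR"
    # Dump all large objects whose oids are stored in lo.id.
    $run pg_dumplo -s "$DUMPDIR" -d "$DB" -l lo.id
}

restore() {
    # Recreate the database and the referencing table, then import
    # the dumped large objects back from $DUMPDIR.
    $run dropdb "$DB"
    $run createdb "$DB"
    $run psql "$DB" -c "CREATE TABLE lo (id oid);"
    $run pg_dumplo -s "$DUMPDIR" -d "$DB" -i
}

backup
restore
```

The dry-run switch is just a convenience for reading the sequence of
steps; the commands themselves mirror the interactive session above.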