Database Speed Comparison: SQLite versus PostgreSQL

(This page was last modified on 2001/10/31 15:44:48 GMT)

Executive Summary

A series of tests were run to measure the relative performance of SQLite versions 1.0 and 2.0 and PostgreSQL version 6.4. General conclusions drawn from these experiments appear in the commentary that accompanies each test below.

Test Environment

The platform used for these tests is a 550MHz Athlon with 256MB of memory and 33MHz IDE disk drives. The operating system is RedHat Linux 6.0 with various upgrades, including an upgrade to kernel version 2.2.18.

PostgreSQL version 6.4.2 was used for these tests because that is what came pre-installed with RedHat 6.0. Newer versions of PostgreSQL may give better performance.

SQLite version 1.0.32 was compiled with -O2 optimization and without the -DNDEBUG=1 switch. Setting the NDEBUG macro disables all "assert()" statements within the code, but SQLite version 1.0 does not have any expensive assert() statements, so the difference in performance is negligible.

SQLite version 2.0-alpha-2 was compiled with -O2 optimization and with the -DNDEBUG=1 compiler switch. Setting the NDEBUG macro is very important in SQLite version 2.0, which contains some expensive "assert()" statements in the inner loop of its processing. Defining NDEBUG makes SQLite 2.0 run nearly twice as fast.
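To illustrate the mechanism, here is a minimal, hypothetical C program (not SQLite source code) with an expensive consistency check inside a hot loop. Compiling it with -DNDEBUG=1 removes the check entirely, just as the same flag removes SQLite 2.0's inner-loop asserts:

/* assert_demo.c -- a hypothetical stand-in, NOT SQLite source code.
** Build two ways and compare run times:
**    gcc -O2 assert_demo.c -o with_asserts
**    gcc -O2 -DNDEBUG=1 assert_demo.c -o without_asserts
*/
#include <assert.h>
#include <stdio.h>

/* A deliberately expensive consistency check. */
static int expensive_check(int n){
  int i, sum = 0;
  for(i=0; i<n; i++) sum += i;
  return sum>=0;
}

int main(void){
  int i;
  for(i=0; i<100000; i++){
    /* When NDEBUG is defined, the preprocessor deletes this whole
    ** expression, so expensive_check() is never even called. */
    assert( expensive_check(1000) );
  }
  printf("done\n");
  return 0;
}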

All tests are conducted on an otherwise quiescent machine. A simple shell script was used to generate and run all the tests. Each test reports three different times:

  1. "Real" or wall-clock time.
  2. "User" time, the time spent executing user-level code.
  3. "Sys" or system time, the time spent in the operating system.

PostgreSQL uses a client-server model. The experiment is only able to measure CPU time used by the client, not the server, so the "user" and "sys" numbers for PostgreSQL are meaningless.
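For the curious, here is a hedged C sketch of where the three numbers come from on Linux: gettimeofday() supplies wall-clock time and getrusage() supplies user and system CPU time for the calling process. The work() function is a hypothetical stand-in for running one test's SQL; the actual tests were driven by the shell script mentioned above. Because getrusage(RUSAGE_SELF, ...) sees only the calling process, CPU consumed inside a PostgreSQL server never shows up here.

/* timer_demo.c -- sketch only; the real tests used a shell script. */
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Hypothetical stand-in for running one test's SQL statements. */
static void work(void){
  volatile long i, sum = 0;
  for(i=0; i<50000000; i++) sum += i;
}

static double seconds(struct timeval tv){
  return tv.tv_sec + tv.tv_usec/1000000.0;
}

int main(void){
  struct timeval start, end;
  struct rusage ru;

  gettimeofday(&start, 0);
  work();
  gettimeofday(&end, 0);
  getrusage(RUSAGE_SELF, &ru);  /* this process only, not a server */

  printf("real %6.2f\n", seconds(end)-seconds(start));
  printf("user %6.2f\n", seconds(ru.ru_utime));
  printf("sys  %6.2f\n", seconds(ru.ru_stime));
  return 0;
}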

Test 1: CREATE TABLE

CREATE TABLE t1(f1 int, f2 int, f3 int);
COPY t1 FROM '/home/drh/sqlite/bld/speeddata3.txt';

PostgreSQL:   real   1.84
SQLite 1.0:   real   3.29   user   0.64   sys   1.60
SQLite 2.0:   real   0.77   user   0.51   sys   0.05

The speeddata3.txt data file contains 30000 rows of data.
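The contents of speeddata3.txt are not reproduced here. A generator along the following lines would produce a file of the right shape; the tab-separated format, the sequential f1 values, and the value ranges are all assumptions, loosely based on the data visible in Tests 5 and 8:

/* gen_data.c -- hypothetical generator for a speeddata3.txt-style
** file; the real file's format and value ranges are assumptions. */
#include <stdio.h>
#include <stdlib.h>

int main(void){
  int i;
  FILE *out = fopen("speeddata3.txt", "w");
  if( out==0 ) return 1;
  for(i=1; i<=30000; i++){
    /* f1 sequential; f2 and f3 small random integers */
    fprintf(out, "%d\t%d\t%d\n", i, rand()%2000, rand()%2000);
  }
  fclose(out);
  return 0;
}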

Test 2: SELECT

SELECT max(f2), min(f3), count(*) FROM t1
WHERE f3<10000 OR f1>=20000;

PostgreSQL:   real   1.22
SQLite 1.0:   real   0.80   user   0.67   sys   0.12
SQLite 2.0:   real   0.65   user   0.60   sys   0.05

With no indices, a complete scan of the table (all 30000 rows) must be performed in order to satisfy this query.

Test 3: CREATE INDEX

CREATE INDEX idx1 ON t1(f1);
CREATE INDEX idx2 ON t1(f2,f3);

PostgreSQL:   real   2.24
SQLite 1.0:   real   5.37   user   1.22   sys   3.10
SQLite 2.0:   real   3.71   user   2.31   sys   1.06

PostgreSQL is the fastest at creating new indices. Note that SQLite 2.0 is faster than SQLite 1.0 overall, yet spends more time in user-space code.

Test 4: SELECT using an index

SELECT max(f2), min(f3), count(*) FROM t1
WHERE f3<10000 OR f1>=20000;

PostgreSQL:   real   0.19
SQLite 1.0:   real   0.77   user   0.66   sys   0.12
SQLite 2.0:   real   0.62   user   0.62   sys   0.01

This is the same query as in Test 2, but now there are indices. Unfortunately, SQLite's query optimizer is fairly simple-minded and is not able to take advantage of the indices: it still does a linear scan of the entire table. PostgreSQL, on the other hand, is able to use the indices (presumably evaluating each half of the OR with the matching index) and makes its query more than six times faster.

Test 5: SELECT a single record

SELECT f2, f3 FROM t1 WHERE f1==1;
SELECT f2, f3 FROM t1 WHERE f1==2;
SELECT f2, f3 FROM t1 WHERE f1==3;
...
SELECT f2, f3 FROM t1 WHERE f1==998;
SELECT f2, f3 FROM t1 WHERE f1==999;
SELECT f2, f3 FROM t1 WHERE f1==1000;

PostgreSQL:   real   0.95
SQLite 1.0:   real  15.70   user   0.70   sys  14.41
SQLite 2.0:   real   0.20   user   0.15   sys   0.05

This test involves 1000 separate SELECT statements, only the first three and the last three of which are shown above. SQLite 2.0 is the clear winner. The miserable showing by SQLite 1.0 is thought to be due to the high overhead of calling gdbm_open 2000 times in quick succession.
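If that hypothesis is right, the cost is reproducible with a loop like the following hypothetical sketch, which simply opens and closes a gdbm file 2000 times (build with -lgdbm). It illustrates the suspected bottleneck; it is not SQLite 1.0's actual code:

/* gdbm_open_demo.c -- illustrates repeated gdbm_open() overhead.
** Build with:  gcc -O2 gdbm_open_demo.c -lgdbm
*/
#include <stdio.h>
#include <gdbm.h>

int main(void){
  int i;
  for(i=0; i<2000; i++){
    GDBM_FILE f = gdbm_open("demo.gdbm", 0, GDBM_WRCREAT, 0644, 0);
    if( f==0 ){ fprintf(stderr, "gdbm_open failed\n"); return 1; }
    gdbm_close(f);
  }
  return 0;
}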

Test 6: UPDATE

UPDATE t1 SET f2=f3, f3=f2
WHERE f1 BETWEEN 15000 AND 20000;

PostgreSQL:   real   6.56
SQLite 1.0:   real   3.54   user   0.74   sys   1.16
SQLite 2.0:   real   2.70   user   0.70   sys   1.25

We have no explanation for why PostgreSQL does poorly here. (Note, in passing, that SET f2=f3, f3=f2 really does exchange the two values: both right-hand sides are evaluated against the row as it existed before the update.)

Test 7: INSERT from a SELECT

CREATE TABLE t2(f1 int, f2 int);
INSERT INTO t2 SELECT f1, f2 FROM t1 WHERE f3<10000;

PostgreSQL:   real   2.05
SQLite 1.0:   real   1.80   user   0.81   sys   0.73
SQLite 2.0:   real   0.69   user   0.58   sys   0.07

Test 8: Many small INSERTs

CREATE TABLE t3(f1 int, f2 int, f3 int);
INSERT INTO t3 VALUES(1,1641,1019);
INSERT INTO t3 VALUES(2,984,477);
...
INSERT INTO t3 VALUES(998,1411,1392);
INSERT INTO t3 VALUES(999,1715,526);
INSERT INTO t3 VALUES(1000,1906,1037);

PostgreSQL:   real   5.28
SQLite 1.0:   real   2.20   user   0.21   sys   0.67
SQLite 2.0:   real  10.99   user   0.21   sys   7.02

This test involves 1000 separate INSERT statements, only five of which are shown above. SQLite 2.0 does poorly because of its atomic commit logic: a minimum of two calls to fsync() are required for each INSERT statement, and that really slows things down. PostgreSQL, on the other hand, also has to support atomic commits, yet seems to do so efficiently.
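The effect is easy to reproduce outside any database. In this hypothetical sketch, an fsync() after every small write plays the role of one atomic commit per INSERT; hoisting the fsync() out of the loop corresponds to committing once, which is exactly what Test 9 does:

/* fsync_demo.c -- illustrates why one commit per INSERT is slow. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void){
  char rec[64] = "one small record\n";
  int i;
  int fd = open("fsync_demo.tmp", O_WRONLY|O_CREAT|O_TRUNC, 0644);
  if( fd<0 ) return 1;
  for(i=0; i<1000; i++){
    if( write(fd, rec, sizeof(rec))<0 ) break;
    fsync(fd);   /* forces the data to disk; dominates the run time */
  }
  /* A single fsync(fd) here, instead of 1000 inside the loop, is
  ** the Test 9 pattern and is dramatically faster. */
  close(fd);
  unlink("fsync_demo.tmp");
  return 0;
}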

Test 9: Many small INSERTs within a TRANSACTION

CREATE TABLE t4(f1 int, f2 int, f3 int);
BEGIN TRANSACTION;
INSERT INTO t4 VALUES(1,440,1084);
...
INSERT INTO t4 VALUES(999,1527,423);
INSERT INTO t4 VALUES(1000,74,1865);
COMMIT;

PostgreSQL:   real   0.68
SQLite 1.0:   real   1.72   user   0.09   sys   0.55
SQLite 2.0:   real   0.10   user   0.08   sys   0.02

By putting all the inserts inside a single transaction, there only needs to be a single atomic commit at the very end. This allows SQLite 2.0 to go (literally) 100 times faster: 10.99 seconds in Test 8 versus 0.10 seconds here. PostgreSQL only gets an eight-fold speedup. Perhaps PostgreSQL is limited here by the IPC overhead of its client-server model.

Test 10: DELETE

DELETE FROM t1 WHERE f2 NOT BETWEEN 10000 AND 20000;

PostgreSQL:   real   7.25
SQLite 1.0:   real   6.98   user   1.66   sys   4.11
SQLite 2.0:   real   5.89   user   1.35   sys   3.11

All three databases run at about the same speed here.

Test 11: DROP TABLE

BEGIN TRANSACTION;
DROP TABLE t1; DROP TABLE t2;
DROP TABLE t3; DROP TABLE t4;
COMMIT;

PostgreSQL:   real   0.06
SQLite 1.0:   real   0.03   user   0.00   sys   0.02
SQLite 2.0:   real   3.12   user   0.02   sys   0.31

SQLite 2.0 is much slower at dropping tables. This may be because both SQLite 1.0 and PostgreSQL store each database table in a separate file, and so can drop a table simply by unlinking or renaming that file. SQLite 2.0, on the other hand, uses a single file for the entire database, so dropping a table involves moving lots of pages of that file to the free-list, which takes time.
