NetBSD/regress

README

$NetBSD: README,v 1.1 2006/03/14 09:46:34 martin Exp $

This part of the source tree contains regression tests. There are special
make targets and rules to follow. Most of these, however, are currently not
enforced, and most of the existing tests do not conform.

We hope to fix this someday. If you add new tests, please make them conform.

What is a regression test?

  A regression test is run by a makefile in a test directory (see below).
  Each makefile may run multiple tests.

What is a test directory?

  A directory in this part of the tree is a regression test directory. It
  contains a Makefile that implements the additional "regress" target
  and runs all of its regression tests when that target is invoked.
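
  For illustration only, such a Makefile could look roughly like the
  following sketch. The script name is made up and the usual <bsd.*.mk>
  includes are omitted; note that make recipe lines must start with a tab.

    # sketch of a test directory Makefile; the script name is hypothetical
    # (the recipe line below must start with a tab)
    regress:
    	sh ${.CURDIR}/t_example.sh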

May the make process be stopped on failures?

  No, the "make regress" target should succeed unless some regression
  binaries could not be built, the disk is full, or some other catastrophic
  failure outside of the tested subsystem occurs. A failing regression test
  should log the failure (see below), but must not make the target itself
  fail.
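
  For example, the shell commands run by the "regress" target might handle
  a failing test like the following sketch (the test program name is made
  up): the test's exit status is checked and reported, so the failure never
  propagates to make. Logging of the result is described below.

    # run the (hypothetical) test program and report its result instead
    # of letting a non-zero exit status abort make
    if ./testprog; then
            echo "testprog: PASSED"
    else
            echo "testprog: FAILED"
    fi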

What are the possible results of a regression test?

  A test may either

  - succeed, in which case it logs "PASSED" (see below for logging details)
  - fail, in which case it logs "FAILED"
  - not be able to run, in which case it logs "SKIPPED" and the reason
    for the skip in the comment field (see below)

  Typical reasons for a test not being run are missing kernel options
  or missing privileges (the test needs root, but "make regress" is invoked
  by a mere mortal, or vice versa). A test must not fail because of such
  environmental issues; it must detect and properly log the problem.
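
  A sketch of such a check for a test that needs root (how results are
  logged is described below):

    # skip, rather than fail, when not running as root
    if [ "$(id -u)" -ne 0 ]; then
            echo "SKIPPED (test must be run as root)"
            exit 0
    fi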

  If a test directory contains tests that may be skipped, it should have
  a README file explaining the prerequisites (e.g. needed kernel options).

  In the future, we will mark the affected Makefiles and optimize which
  tests are run or skipped during repeated runs with differing privileges,
  but currently there is no make framework in place to handle this.

How and when does a test log results?

  If the make/environment variable ${REGRESS_LOG} is defined, the final
  results (and only those) should be logged to the file named by that
  variable. We will, in the future, add make targets for this purpose.
  The log format is line-oriented, with one line per test. Each line
  consists of the directory where the Makefile lives, followed by the
  test name and the result (see above: PASSED, FAILED, SKIPPED).
  An optional comment may follow; for SKIPPED tests the comment is
  mandatory. Fields are separated by spaces.
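
  For example, a test could append its final result with a sketch like
  this (the directory and test names below are made up):

    # append the final result to the log file, if one was requested
    if [ -n "${REGRESS_LOG}" ]; then
            echo "regress/lib/example sometest PASSED" >> "${REGRESS_LOG}"
    fi

  A resulting log file might then contain lines such as:

    regress/lib/example sometest PASSED
    regress/sys/example kqueue_timeout FAILED unexpected timeout
    regress/bin/example copytest SKIPPED test must be run as root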