From 7f20a5920123066918a627029ad8ae7f7b55cf10 Mon Sep 17 00:00:00 2001
From: Bruce Momjian

Last updated: Wed Dec 1 16:11:11 EST 2006

Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)

The most recent version of this document can be viewed at http://www.PostgreSQL.org/docs/faqs/FAQ_DEV.html.

Download the code and have a look around. See 1.7. Subscribe to and read the pgsql-hackers
mailing list (often termed 'hackers'). This is where the major contributors and core members of the project discuss development.

PostgreSQL is developed mostly in the C programming language. It also makes use of Yacc and Lex. The source code is targeted at most of the popular Unix platforms and the Windows environment (XP, Windows 2000, and up).

Most developers make use of the open source development tool chain. If you have contributed to open source software before, you will probably be familiar with these tools. They include: GCC (http://gcc.gnu.org), GDB (http://www.gnu.org/software/gdb/gdb.html), autoconf (http://www.gnu.org/software/autoconf/) and GNU make (http://www.gnu.org/software/make/make.html). Developers using this tool chain on Windows make use of MinGW (see http://www.mingw.org/).

Some developers use compilers from other software vendors with mixed results.
Developers who are regularly rebuilding the source often pass the --enable-depend flag to configure. The result is that when you make a modification to a C header file, all files that depend upon that file are also rebuilt.
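As a sketch, a rebuild cycle with dependency tracking enabled might look like this (the header named here is only an example):

    ./configure --enable-depend
    make
    # after editing a header, files that include it are rebuilt automatically
    touch src/include/nodes/parsenodes.h
    make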
For ideas about what areas need work, see the TODO list. You can learn more about these features by consulting the archives, the SQL standards and the recommended texts (see 1.10).
Send an email to pgsql-hackers with a proposal for what you want
to do (assuming your contribution is not trivial). Working in isolation is not advisable: others may be working on the same TODO item; you may have misunderstood the TODO item; your approach may benefit from the review of others.
Other than documentation in the source tree itself, you can find
some papers/presentations discussing the code at http://developers.postgresql.org.

Generate the patch in contextual diff format. If you are
unfamiliar with this, you may find the script src/tools/make_diff/difforig useful.
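For example, assuming you saved the original file with a .orig suffix, a context diff can be produced with:

    diff -c src/backend/commands/copy.c.orig src/backend/commands/copy.c > /tmp/mychange.patch

or, for all your changes in a CVS checkout at once:

    cvs diff -c > /tmp/mychange.patch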
Ensure that your patch is generated against the most recent
version of the code. If it is a patch adding new functionality, the most recent version is cvs HEAD; if it is a bug fix, this will be the most recent version of the branch which suffers from the bug (for more on branches in PostgreSQL, see 1.15).

Finally, submit the patch to pgsql-patches@postgresql.org. It
will be reviewed by other contributors to the project and may be either accepted or sent back for further work.
Developer's Frequently Asked Questions (FAQ) for PostgreSQL

General Questions

 1.1) How do I get involved in PostgreSQL development?
 1.2) What development environment is required to develop code?
 1.3) What areas need work?
 1.4) What do I do after choosing an item to work on?
 1.5) Where can I learn more about the code?
 1.6) I've developed a patch, what next?
 1.7) How do I download/update the current source tree?
 1.8) How do I test my changes?
 1.9) What tools are available for developers?
 1.10) What books are good for developers?
 1.11) What is configure all about?
 1.12) How do I add a new port?
 1.13) Why don't you use threads/raw devices/async-I/O, <insert your favorite wizz-bang feature here>?
 1.14) How are RPM's packaged?
 1.15) How are CVS branches handled?
 1.16) Where can I get a copy of the SQL standards?
 1.17) Where can I get technical assistance?
 1.18) How do I get involved in PostgreSQL web site development?
Technical Questions

 2.1) How do I efficiently access information in tables from the backend code?
 2.2) Why are table, column, type, function, view names sometimes referenced as Name or NameData, and sometimes as char *?
 2.3) Why do we use Node and List to make data structures?
 2.4) I just added a field to a structure. What else should I do?
 2.5) Why do we use palloc() and pfree() to allocate memory?
 2.6) What is ereport()?
 2.7) What is CommandCounterIncrement()?

General Questions
1.1) How do I get involved in PostgreSQL development?
1.2) What development environment is required to develop code?
1.3) What areas need work?

Outstanding features are detailed in the TODO list. This is located in doc/TODO in the source distribution or at http://developer.postgresql.org/todo.php.
1.4) What do I do after choosing an item to work on?
1.5) Where can I learn more about the code?
1.6) I've developed a patch, what next?
1.7) How do I download/update the current source tree?
There are several ways to obtain the source tree. Occasional developers can just get the most recent source tree snapshot from ftp://ftp.postgresql.org.
Regular developers may want to take advantage of anonymous access to our source code management system. The source tree is currently hosted in CVS. For details of how to obtain the source from CVS see http://developer.postgresql.org/docs/postgres/cvs.html.
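As a sketch, an anonymous checkout using the historical anoncvs server looks like this (consult the CVS documentation above for the current repository path and password):

    cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login
    cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot checkout pgsql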
For hard-core developers, Marc (scrappy@postgresql.org) will give you a Unix shell account on postgresql.org, so you can use CVS to update the main source tree, or you can ftp your files into your account, patch, and cvs install the changes directly into the source tree.
1.8) How do I test my changes?

Basic system testing
The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.

It is advisable to pass --enable-cassert to configure. This will turn on assertions within the source, which will often show up bugs because they cause data corruption or segmentation violations. This generally makes debugging much easier.
Regression test suite

The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.

If you've deliberately changed existing behaviour, this change may cause a regression test failure but not any actual regression. If so, you should also patch the regression test suite.
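Putting the pieces above together, one plausible build-and-test cycle is:

    ./configure --enable-depend --enable-cassert
    make
    make check    # run the regression test suite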
Other run time testing

Some developers make use of tools such as valgrind (http://valgrind.kde.org) for memory testing, gprof (which comes with the GNU binutils suite) and oprofile (http://oprofile.sourceforge.net/) for profiling, and other related tools.

What about unit testing, static analysis, model checking...?

There have been a number of discussions about other testing frameworks, and some developers are exploring these ideas.

1.9) What tools are available for developers?

First, all the files in the src/tools directory are designed for developers.
    RELEASE_CHANGES changes we have to make for each release
    backend         description/flowchart of the backend directories
    ccsym           find standard defines made by your compiler
    copyright       fixes copyright notices
    entab           converts tabs to spaces, used by pgindent
    find_static     finds functions that could be made static
    find_typedef    finds typedefs in the source code
    find_badmacros  finds macros that use braces incorrectly
    fsync           a script to provide information about the cost of cache
                    syncing system calls
    make_ctags      make vi 'tags' file in each directory
    make_diff       make *.orig and diffs of source
    make_etags      make emacs 'etags' files
    make_keywords   make comparison of our keywords and SQL'92
    make_mkid       make mkid ID files
    pgcvslog        used to generate a list of changes for each release
    pginclude       scripts for adding/removing include files
    pgindent        indents source files
    pgtest          a semi-automated build system
    thread          a thread testing script
In src/include/catalog:

    unused_oids     a script which generates unused OIDs for use in system
                    catalogs
    duplicate_oids  finds duplicate OIDs in system catalog definitions

If you point your browser at the tools/backend/index.html file, you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory, to browse the actual source code behind it. We also have several README files in some source directories to describe the function of the module. The browser will display these when you enter the directory also. The tools/backend directory is also contained on our web page under the title How PostgreSQL Processes a Query.

Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.
Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/id-utils/

By running tools/make_mkid, an archive of source symbols can be created that can be rapidly queried.

Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse, which can be found at http://webglimpse.net/.

tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces context diffs, which is our preferred format.

Our standard format is to indent each code level with one tab, where each tab is four spaces. You will need to set your editor to display tabs as four spaces:

vi in ~/.exrc:

    set tabstop=4
    set sw=4

or emacs:

    (c-add-style "pgsql"
       '("bsd"
         (indent-tabs-mode . t)
         (c-basic-offset . 4)
         (tab-width . 4)
         (c-offsets-alist .
           ((case-label . +)))
       )
       nil ) ; t = set this style, nil = don't

    (defun pgsql-c-mode ()
      (c-mode)
      (c-set-style "pgsql")
    )

and add this to your autoload list (modify file path in macro):

    (setq auto-mode-alist
      (cons '("\\`/home/andrew/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
        auto-mode-alist))

or

    /*
     * Local variables:
     *  tab-width: 4
     *  c-basic-offset: 4
     *  End:
     */
pgindent is run on all source files just before each beta test period. It auto-formats all source files to make them consistent. Comment blocks that need specific line breaks should be formatted as block comments, where the comment starts as /*------. These comments will not be reformatted in any way.
pginclude contains scripts used to add needed #include's to include files, and remove unneeded #include's.
When adding system types, you will need to assign OIDs to them. There is also a script called unused_oids in pgsql/src/include/catalog that shows the unused OIDs.
1.10) What books are good for developers?

I have four good books: An Introduction to Database Systems, by C.J. Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.
There is also a database performance site, with a handbook on-line written by Jim Gray, at http://www.benchmarkresources.com.
1.11) What is configure all about?

The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.
When configure is run by the user, it tests various OS capabilities, stores those in config.status and config.cache, and modifies a list of *.in files. For example, if there exists a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.
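As a small illustration, a hypothetical Makefile.in line such as

    CC = @CC@

would appear in the generated Makefile, on a system where configure found gcc, as

    CC = gcc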
When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the file contained in the source distribution.
1.12) How do I add a new port?

There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly. The configure test will look for an exact OS version number, and if not found, find a match without version number. Edit src/configure.in to add your new OS. (See configure item above.) You will need to run autoconf, or patch src/configure too.
Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.
1.13) Why don't you use threads/raw devices/async-I/O, <insert your favorite wizz-bang feature here>?

There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.
First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new wizz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional code required. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.
As an example, threads are not currently used in the backend code.
So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.
1.14) How are RPM's packaged?

This was written by Lamar Owen:

2001-05-03
As to how the RPMs are built -- to answer that question sanely requires me to know how much experience you have with the whole RPM paradigm. 'How is the RPM built?' is a multifaceted question. The obvious simple answer is that I maintain:
I then download and build on as many different canonical distributions as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on my personal hardware. Occasionally I receive opportunity from certain commercial enterprises such as Great Bridge and PostgreSQL, Inc. to build on other distributions.
I test the build by installing the resulting packages and running the regression tests. Once the build passes these tests, I upload to the postgresql.org ftp server and make a release announcement. I am also responsible for maintaining the RPM download area on the ftp site.
You'll notice I said 'canonical' distributions above. That simply means that the machine is as stock 'out of the box' as practical -- that is, everything (except a select few programs) on these boxen is installed by RPM; only official Red Hat released RPMs are used (except in unusual circumstances involving software that will not alter the build -- for example, installing a newer non-RedHat version of the Dia diagramming package is OK -- installing Python 2.1 on the box that has Python 1.5.2 installed is not, as that alters the PostgreSQL build). The RPM as uploaded is built to as close to out-of-the-box pristine as is possible. Only the standard released 'official to that release' compiler is used -- and only the standard official kernel is used as well.
For a time I built on Mandrake for RedHat consumption -- no more. Nonstandard RPM building systems are worse than useless. Which is not to say that Mandrake is useless! By no means is Mandrake useless -- unless you are building Red Hat RPMs -- and Red Hat is useless if you're trying to build Mandrake or SuSE RPMs, for that matter. But I would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro 0.1.2' to build for public consumption! :-)
I _do_ attempt to make the _source_ RPM compatible with as many distributions as possible -- however, since I have limited resources (as a volunteer RPM maintainer) I am limited as to the amount of testing said build will get on other distributions, architectures, or systems.
And, while I understand people's desire to immediately upgrade to the newest version, realize that I do this as a side interest -- I have a regular, full-time job as a broadcast engineer/webmaster/sysadmin/Technical Director which occasionally prevents me from making timely RPM releases. This happened during the early part of the 7.1 beta cycle -- but I believe I was pretty much on the ball for the Release Candidates and the final release.
I am working towards a more open RPM distribution -- I would dearly love to more fully document the process and put everything into CVS -- once I figure out how I want to represent things such as the spec file in a CVS form. It makes no sense to maintain a changelog, for instance, in the spec file in CVS when CVS does a better job of changelogs -- I will need to write a tool to generate a real spec file from a CVS spec-source file that would add version numbers, changelog entries, etc to the result before building the RPM. IOW, I need to rethink the process -- and then go through the motions of putting my long RPM history into CVS one version at a time so that version history information isn't lost.
As to why all these files aren't part of the source tree, well, unless there was a large cry for it to happen, I don't believe it should. PostgreSQL is very platform-agnostic -- and I like that. Including the RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that agnostic stance in a negative way. But maybe I'm too sensitive to that. I'm not opposed to doing that if that is the consensus of the core group -- and that would be a sneaky way to get the stuff into CVS :-). But if the core group isn't thrilled with the idea (and my instinct says they're not likely to be), I am opposed to the idea -- not to keep the stuff to myself, but to not hinder the platform-neutral stance. IMHO, of course.
Of course, there are many projects that DO include all the files necessary to build RPMs from their Official Tarball (TM).
1.15) How are CVS branches handled?

This was written by Tom Lane:

2001-05-07
If you just do basic "cvs checkout", "cvs update", "cvs commit", then you'll always be dealing with the HEAD version of the files in CVS. That's what you want for development, but if you need to patch past stable releases then you have to be able to access and update the "branch" portions of our CVS repository. We normally fork off a branch for a stable release just before starting the development cycle for the next release.
The first thing you have to know is the branch name for the branch you are interested in getting at. To do this, look at some long-lived file, say the top-level HISTORY file, with "cvs status -v" to see what the branch names are. (Thanks to Ian Lance Taylor for pointing out that this is the easiest way to do it.) Typical branch names are:
    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES
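For example, from the top of a checkout, the branch tags appear under "Existing Tags:" in the output of:

    cvs status -v HISTORY | less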
OK, so how do you do work on a branch? By far the best way is to create a separate checkout tree for the branch and do your work in that. Not only is that the easiest way to deal with CVS, but you really need to have the whole past tree available anyway to test your work. (And you *better* test your work. Never forget that dot-releases tend to go out with very little beta testing --- so whenever you commit an update to a stable branch, you'd better be doubly sure that it's correct.)
Normally, to checkout the head branch, you just cd to the place you want to contain the toplevel "pgsql" directory and say
    cvs ... checkout pgsql
To get a past branch, you cd to wherever you want it and say
    cvs ... checkout -r BRANCHNAME pgsql
For example, just a couple days ago I did
    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql
and now I have a maintenance copy of 7.1.*.
When you've done a checkout in this way, the branch name is "sticky": CVS automatically knows that this directory tree is for the branch, and whenever you do "cvs update" or "cvs commit" in this tree, you'll fetch or store the latest version in the branch, not the head version. Easy as can be.
So, if you have a patch that needs to apply to both the head and a recent stable branch, you have to make the edits and do the commit twice, once in your development tree and once in your stable branch tree. This is kind of a pain, which is why we don't normally fork the tree right away after a major release --- we wait for a dot-release or two, so that we won't have to double-patch the first wave of fixes.
1.16) Where can I get a copy of the SQL standards?

There are three versions of the SQL standard: SQL-92, SQL:1999, and SQL:2003. They are endorsed by ANSI and ISO. Draft versions can be downloaded from:
Some SQL standards web pages are:
1.17) Where can I get technical assistance?

Many technical questions held by those new to the code have been answered on the pgsql-hackers mailing list, the archives of which can be found at http://archives.postgresql.org/pgsql-hackers/.

If you cannot find discussion of your particular question, feel free to put it to the list.
Major contributors also answer technical questions, including questions about development of new features, on IRC at irc.freenode.net in the #postgresql channel.
1.18) How do I get involved in PostgreSQL web site development?
PostgreSQL website development is discussed on the pgsql-www@postgresql.org mailing list. There is a project page where the source code is available at http://gborg.postgresql.org/project/pgweb/projdisplay.php; the code for the next version of the website is under the "portal" module. You will also find code for the "techdocs" website if you would like to contribute to that. A temporary todo list for current website development issues is available at http://xzilla.postgresql.org/todo
Technical Questions
2.1) How do I efficiently access information in tables from the backend code?
You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. The caches use system table indexes to look up tuples. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.
The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable but not very desirable.
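A minimal sketch of this pattern, looking up a pg_type row by OID (the cache identifier and the number of key arguments have varied across releases, and typeoid is a hypothetical variable, so treat this as illustrative rather than exact):

    HeapTuple    tup;
    Form_pg_type typtup;

    /* look up the pg_type row for typeoid in the syscache */
    tup = SearchSysCache(TYPEOID,
                         ObjectIdGetDatum(typeoid),
                         0, 0, 0);
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for type %u", typeoid);
    typtup = (Form_pg_type) GETSTRUCT(tup);
    /* ... read fields such as typtup->typlen here ... */
    ReleaseSysCache(tup);       /* unlock the cache entry */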
If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache.
Open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be assigned to the scan. No indexes are used, so all rows are going to be compared to the keys, and only the valid rows returned.
You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it when completed.
Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access the columns by using a structure pointer:
    ((Form_pg_class) GETSTRUCT(tuple))->relnatts
You must not directly change live tuples in this way. The best way is to use heap_modifytuple() and pass it your original tuple, and the values you want changed. It returns a palloc'ed tuple, which you pass to heap_replace(). You can delete tuples by passing the tuple's t_self to heap_delete(). You use t_self for heap_update() too. Remember, tuples can be either system cache copies, which may go away after you call ReleaseSysCache(), or read directly from disk buffers, which go away when you heap_getnext(), heap_endscan(), or ReleaseBuffer(), in the heap_fetch() case. Or it may be a palloc'ed tuple, which you must pfree() when finished.
2.2) Why are table, column, type, function, view names sometimes referenced as Name or NameData, and sometimes as char *?
Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)
    typedef struct nameData
    {
        char        data[NAMEDATALEN];
    } NameData;
    typedef NameData *Name;
Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.

Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), there are many cases where Name and char * are used interchangeably.
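For instance, assuming tup is a valid pg_class tuple obtained as in 2.1, the NameStr() macro makes the Name-to-char * conversion explicit:

    Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tup);

    /* NameStr() yields the char * inside the fixed-length Name */
    if (strcmp(NameStr(classtup->relname), "pg_class") == 0)
        elog(LOG, "found pg_class, which has %d attributes",
             classtup->relnatts);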
2.3) Why do we use Node and List to make data structures?

We do this because it allows a consistent way to pass data inside the backend in a flexible way. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list.
Here are some of the List manipulation commands:
lfirst(i), lfirst_int(i), lfirst_oid(i)
    return the data (a pointer, integer and OID respectively) at list element i.
lnext(i)
    return the next list element after i.
foreach(i, list)
    loop through list, assigning each list element to i. It is important to note that i is a ListCell *, not the data in the List element. You need to use lfirst(i) to get at the data. Here is a typical code snippet that loops through a List containing Var *'s and processes each one:

    List     *list;
    ListCell *i;

    foreach(i, list)
    {
        Var *var = (Var *) lfirst(i);

        /* process var here */
    }
lcons(node, list)
    add node to the front of list, or create a new list with node if list is NIL.
lappend(list, node)
    add node to the end of list. This is more expensive than lcons.
nconc(list1, list2)
    Concat list2 on to the end of list1.
length(list)
    return the length of the list.
nth(i, list)
    return the i'th element in list.
lconsi, ...
    There are integer versions of these: lconsi, lappendi, etc. Also versions for OID lists: lconso, lappendo, etc.
You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:

    (gdb) set print elements 0
Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:

    (gdb) call print(any_pointer)
    (gdb) call pprint(any_pointer)
The output appears in the postmaster log file, or on your screen if you are running a backend directly without a postmaster.
2.4) I just added a field to a structure. What else should I do?

The structures passed around from the parser, rewrite, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures (in particular, the files copyfuncs.c and equalfuncs.c). Make sure you add support for your new field to these files. Find any other places the structure may need code for your new field. mkid is helpful with this (see 1.9).
2.5) Why do we use palloc() and pfree() to allocate memory?

palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in. These affect when the allocated memory is freed by the backend.
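A small sketch of the usual pattern in backend code (the memory-context calls shown are the standard API, but treat the details as illustrative):

    MemoryContext old;
    char         *buf;

    /* allocated in the current context; freed automatically at end of query */
    buf = (char *) palloc(64);
    pfree(buf);                     /* explicit free is optional */

    /* data that must outlive the query goes in a longer-lived context */
    old = MemoryContextSwitchTo(TopMemoryContext);
    buf = (char *) palloc(64);
    MemoryContextSwitchTo(old);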
2.6) What is ereport()?

ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. The first parameter is an ereport level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL, or PANIC. NOTICE prints on the user's terminal and in the postmaster logs. INFO prints only to the user's terminal and LOG prints only to the server logs. (These can be changed from postgresql.conf.) ERROR prints in both places, and terminates the current query, never returning from the call. FATAL terminates the backend process. The remaining parameters of ereport are a printf-style set of parameters to print.
ereport(ERROR) frees most memory and open file descriptors so you don't need to clean these up before the call.
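A typical call, in the style used throughout the backend (relname is a hypothetical variable; the error code is one of the standard ERRCODE_ macros):

    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_TABLE),
             errmsg("relation \"%s\" does not exist", relname)));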
2.7) What is CommandCounterIncrement()?

Normally, transactions can not see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly.
However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.
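Sketched in backend terms, a catalog update followed by code in the same transaction that must see it looks like this (helper names as found in the sources of this era; rel and tup are assumed to be set up as in 2.1):

    simple_heap_update(rel, &tup->t_self, tup);
    CatalogUpdateIndexes(rel, tup);

    /* make the updated row visible to later commands in this transaction */
    CommandCounterIncrement();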