version 1.02.27-cvs

haad 2008-07-15 13:51:16 +00:00
parent 243dd1c4d0
commit 4c51be8f8c
214 changed files with 174854 additions and 0 deletions

external/gpl2/libdevmapper/dist/COPYING
@@ -0,0 +1,340 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.

@@ -0,0 +1,504 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that
there is no warranty for the free library. Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it becomes
a de-facto standard. To achieve this, non-free programs must be
allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.
c) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).
To apply these terms, attach the following notices to the library. It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
<one line to give the library's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the library, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the
library `Frob' (a library for tweaking knobs) written by James Random Hacker.
<signature of Ty Coon>, 1 April 1990
Ty Coon, President of Vice
That's all there is to it!

external/gpl2/libdevmapper/dist/INSTALL
@@ -0,0 +1,102 @@
Device Mapper Installation
==========================
1) Generate custom makefiles.
Run the 'configure' script from the top directory.
Example:
./configure
2.6 kernels now already contain basic device-mapper patches.
If you are using a 2.4 kernel that does NOT contain the device-mapper
source AND you want to try patching it automatically (described
in step 2), then you need to tell 'configure' where your kernel
source directory is using --with-kernel-dir. Otherwise, don't
supply this parameter or you may get compilation failures.
Example:
./configure --with-kernel-dir=/usr/src/linux-2.4.26-rc1
The same userspace library and tools work with both 2.4 and 2.6 kernels
because they share the same device-mapper interface (version 4).
If you also still need backwards-compatibility with the obsolete
version 1 interface, then you must compile against 2.4 kernel
headers (not 2.6 ones) and:
./configure --enable-compat
Other parameters let you change the installation & working directories.
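For instance, a hedged sketch using standard autoconf-style options
(the exact set accepted is listed by './configure --help'; the
directories shown are only illustrative):
  ./configure --prefix=/usr --mandir=/usr/share/man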
2) ONLY FOR 2.4 KERNELS:
If your kernel does not already contain device-mapper, patch,
configure and build a new kernel with it.
If there is a patch for your kernel included in this package and you
gave 'configure' appropriate parameters in step 1, you can run
'make apply-patches' from the top directory to apply the patches.
(This also attempts to apply the VFS lock patch, which will fail if
you've already applied it. For 2.4.21 there's a choice of two VFS lock
patches: the experimental one is destined to replace the old-style one.
The VFS lock patch is designed to suspend the filesystem whenever
snapshots are created so that they contain a consistent filesystem.)
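Putting steps 1 and 2 together for a 2.4 kernel, the sequence might
look like this (the kernel source path is only an example):
  ./configure --with-kernel-dir=/usr/src/linux-2.4.26-rc1
  make apply-patches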
Configure, build and install your kernel in the normal way, selecting
'Device mapper support' from the 'Multiple devices driver support' menu.
If you wish to use 'pvmove' you also need to select
'Mirror (RAID-1) support'.
If you are patching by hand, the patches are stored in the
'patches' subdirectory. The name of each patch contains the kernel
version it was generated against and whether it is for the 'fs' or
'ioctl' interface. Current development effort is concentrated
on the 'ioctl' interface. (Use CVS to get the older 'fs' patches if
you want - see note at end.)
patches/common holds the constituent patches.
You may need to use these if some of the patches (e.g. mempool)
have already been applied to your kernel. See patches/common/README.
You should also apply the VFS lock patch (though it is not required if
you're only using ext2).
Running 'make symlinks' from the 'kernel' subdirectory will put symbolic
links into your kernel tree pointing back at the device-mapper source files.
If you do this, you'll probably also need to apply the VFS patch and all
the constituent patches in patches/common except for the devmapper one.
3) Build and install the shared library (libdevmapper.so) that
provides the API.
Run 'make' from the top directory.
Example: make install
The DESTDIR environment variable is supported (e.g. for packaging).
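For example, a packaging-style staged install could look like this
(the staging directory is arbitrary):
  make
  DESTDIR=/tmp/dm-staging make install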
4) You can now use 'dmsetup' to test the API.
Read the dmsetup man page for more information.
Or proceed to install the LVM2 tools.
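A quick sanity check might look like the following; output depends on
your kernel and library versions:
  dmsetup version
  dmsetup targets
  dmsetup ls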
Note if you are upgrading from a very old release
=================================================
/dev/mapper was called /dev/device-mapper prior to 0.96.04.
Consequently scripts/devmap_mknod.sh has been updated,
but this script is now obsolete because its functionality has
been incorporated into the library.
Notes about the alternative device-mapper filesystem interface
==============================================================
The original 2.4 "dmfs" filesystem interface which mapped
device-mapper operations into filesystem operations has been
abandoned. It requires a very old kernel and is missing lots of
features. The userspace code (lib/fs in CVS) was finally
removed from the tree in 1.02.23.

external/gpl2/libdevmapper/dist/INTRO
@@ -0,0 +1,63 @@
An introduction to the device mapper
====================================
The goal of this driver is to support volume management.
The driver enables the definition of new block devices composed of
ranges of sectors of existing devices. This can be used to define
disk partitions - or logical volumes. This light-weight kernel
component can support user-space tools for logical volume management.
The driver maps ranges of sectors for the new logical device onto
'mapping targets' according to a mapping table. Currently the mapping
table must be supplied to the driver through an ioctl interface.
Earlier versions of the driver also had a custom file system interface
(dmfs), but we stopped work on this because of pressure of time.
The mapping table consists of an ordered list of rules of the form:
<start> <length> <target> [<target args> ...]
which map <length> sectors beginning at <start> to a target.
Every sector on the new device must be specified - there must be no
gaps between the rules. The first rule has <start> = 0.
Each subsequent rule starts at the previous <start> + <length>.
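As a sketch (device names and sizes are purely illustrative), the file
'joined.table' below concatenates two existing partitions into a single
logical device covering sectors 0-3071, and is then loaded with dmsetup:
  0    1024 linear /dev/hda6 0
  1024 2048 linear /dev/hdb7 0

  dmsetup create joined joined.table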
When a sector of the new logical device is accessed, the make_request
function looks up the correct target and then passes the request on to
the target to perform the remapping according to its arguments.
The following targets are available:
linear
striped
error
snapshot
mirror
The 'linear' target takes as arguments a target device name (eg
/dev/hda6) and a start sector and maps the range of sectors linearly
to the target.
The 'striped' target is designed to handle striping across physical
volumes. It takes as arguments the number of stripes and the striping
chunk size followed by a list of pairs of device name and sector.
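For example, a single rule striping 2048000 sectors across two devices
with a 128-sector chunk size might read (devices illustrative):
  0 2048000 striped 2 128 /dev/hda6 0 /dev/hdb6 0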
The 'error' target causes any I/O to the mapped sectors to fail. This
is useful for defining gaps in the new logical device.
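A hedged sketch of a table with a deliberate 64-sector hole that
returns I/O errors between two mapped regions (devices illustrative):
  0     10000 linear /dev/hda6 0
  10000 64    error
  10064 10000 linear /dev/hdb7 0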
The 'snapshot' target supports asynchronous snapshots.
See http://people.sistina.com/~thornber/snap_performance.html.
The 'mirror' target is used to implement pvmove.
In normal scenarios the mapping tables will remain small.
A btree structure is used to hold the sector range -> target mapping.
Since we know all the entries in the btree in advance we can make a
very compact tree, omitting pointers to child nodes as child node
locations can be calculated.
Benchmarking with bonnie++ suggests that this is certainly no slower
than current LVM.
Sistina UK
Updated 30/04/2003

@@ -0,0 +1,67 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
kernelvsn = @kernelvsn@
SUBDIRS = include man
ifeq ("@INTL@", "yes")
SUBDIRS += po
endif
ifeq ("@BUILD_DMEVENTD@", "yes")
SUBDIRS += dmeventd
else
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS += dmeventd
endif
endif
SUBDIRS += lib dmsetup
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS += kernel po
endif
include make.tmpl
lib: include
dmsetup: lib
# dmeventd: lib multilog
dmeventd: lib
po: dmsetup dmeventd
ifeq ("@INTL@", "yes")
lib.pofile: include.pofile
dmsetup.pofile: lib.pofile
dmeventd.pofile: lib.pofile
po.pofile: dmsetup.pofile
pofile: po.pofile
endif
.PHONY: apply-patches install_static_lib
apply-patches:
	patch -d $(kerneldir) -p1 -i \
	  `pwd`/patches/linux-$(kernelvsn)-devmapper-$(interface).patch
	patch -d $(kerneldir) -p1 -i \
	  `pwd`/patches/linux-$(kernelvsn)-VFS-lock.patch
install_static_lib: all
	$(MAKE) -C lib install_static

external/gpl2/libdevmapper/dist/README
@@ -0,0 +1,29 @@
This directory tree contains the supporting userspace files
(libdevmapper and dmsetup) you need when working with device-mapper.
The patches subdirectory also includes up-to-date device-mapper kernel
patches for 2.4.26-rc1 and old patches for 2.4.20, 2.4.21 and 2.4.22
onwards. 2.6 kernels already contain the device-mapper core, but (as at
March 2004) you need to apply development patches if you want snapshot
and pvmove functionality.
For more information about device-mapper please read the INTRO file.
Installation instructions are in INSTALL.
The Device-mapper Resource Page has links to all the primary resources
for device-mapper, including additional patches undergoing development
and testing:
http://sources.redhat.com/dm/
Tarballs are available from ftp://sources.redhat.com/pub/dm/
To access the CVS tree use:
cvs -d :pserver:cvs@sources.redhat.com:/cvs/dm login
CVS password: cvs
cvs -d :pserver:cvs@sources.redhat.com:/cvs/dm checkout device-mapper
The mailing list for discussions and bug reports is:
dm-devel@redhat.com
Subscribe from https://www.redhat.com/mailman/listinfo/dm-devel

@@ -0,0 +1 @@
1.02.27-cvs (2008-06-25)

@@ -0,0 +1,346 @@
Version 1.02.27 - 25th June 2008
================================
Align struct memblock in dbg_malloc for sparc.
Add --unquoted and --rows to dmsetup.
Avoid compiler warning about cast in dmsetup.c's OFFSET_OF macro.
Fix inverted no_flush debug message.
Remove --enable-jobs from configure. (Set at runtime instead.)
Bring configure.in and list.h into line with the lvm2 versions.
Version 1.02.26 - 6th June 2008
===============================
Initialise params buffer to empty string in _emit_segment.
Skip add_dev_node when ioctls disabled.
Make dm_hash_iter safe against deletion.
Accept a NULL pointer to dm_free silently.
Add tables_loaded, readonly and suspended columns to reports.
Add --nameprefixes to dmsetup.
Add field name prefix option to reporting functions.
Calculate string size within dm_pool_grow_object.
Version 1.02.25 - 10th April 2008
=================================
Remove redundant if-before-free tests.
Use log_warn for reporting field help text instead of log_print.
Change cluster mirror log type name (s/clustered_/clustered-/)
Version 1.02.24 - 20th December 2007
====================================
Fix deptree to pass new name to _resume_node after a rename.
Suppress other node operations if node is deleted.
Add node operation stack debug messages.
Report error when empty device name passed to readahead functions.
Fix minimum readahead debug message.
Version 1.02.23 - 5th December 2007
===================================
Update dm-ioctl.h after removal of compat code.
Add readahead support to libdevmapper and dmsetup.
Fix double free in a libdevmapper-event error path.
Fix configure --with-dmeventd-path substitution.
Allow a DM_DEV_DIR environment variable to override /dev in dmsetup.
Create a libdevmapper.so.$LIB_VERSION symlink within the build tree.
Avoid static link failure with some SELinux libraries that require libpthread.
Remove obsolete dmfs code from tree and update INSTALL.
Version 1.02.22 - 21st August 2007
==================================
Fix inconsistent licence notices: executables are GPLv2; libraries LGPLv2.1.
Update to use autoconf 2.61, while still supporting 2.57.
Avoid repeated dm_task free on some dm_event_get_registered_device errors.
Introduce log_sys_* macros from LVM2.
Export dm_fclose and dm_create_dir; remove libdm-file.h.
Don't log EROFS mkdir failures in _create_dir_recursive (for LVM2).
Add fclose wrapper dm_fclose that catches write failures (using ferror).
Version 1.02.21 - 13th July 2007
================================
Introduce _LOG_STDERR to send log_warn() messages to stderr not stdout.
Fix dmsetup -o devno string termination. (1.02.20)
Version 1.02.20 - 15th June 2007
================================
Fix default dmsetup report buffering and add --unbuffered.
Add tree-based and dependency fields to dmsetup reports.
Version 1.02.19 - 27th April 2007
=================================
Standardise protective include file #defines.
Add regex functions to library.
Avoid trailing separator in reports when there are hidden sort fields.
Fix segfault in 'dmsetup status' without --showkeys against crypt target.
Deal with some more compiler warnings.
Introduce _add_field() and _is_same_field() to libdm-report.c.
Fix some libdevmapper-event and dmeventd memory leaks.
Remove unnecessary memset() return value checks.
Fix a few leaks in reporting error paths. [1.02.15+]
Version 1.02.18 - 13th February 2007
====================================
Improve dmeventd messaging protocol: drain pipe and tag messages.
Version 1.02.17 - 29th January 2007
===================================
Add recent reporting options to dmsetup man page.
Revise some report field names.
Add dmsetup 'help' command and update usage text.
Use fixed-size fields in report interface and reorder.
Version 1.02.16 - 25th January 2007
===================================
Add some missing close() and fclose() return value checks.
Migrate dmsetup column-based output over to new libdevmapper report framework.
Add descriptions to reporting field definitions.
Add a dso-private variable to dmeventd dso interface.
Add dm_event_handler_[gs]et_timeout functions.
Streamline dm_report_field_* interface.
Add cmdline debug & version options to dmeventd.
Add DM_LIB_VERSION definition to configure.h.
Suppress 'Unrecognised field' error if report field is 'help'.
Add --separator and --sort to dmsetup (unused).
Make alignment flag optional when specifying report fields.
Version 1.02.15 - 17th January 2007
===================================
Add basic reporting functions to libdevmapper.
Fix a malloc error path in dmsetup message.
More libdevmapper-event interface changes and fixes.
Rename dm_saprintf() to dm_asprintf().
Report error if NULL pointer is supplied to dm_strdup_aux().
Reinstate dm_event_get_registered_device.
Version 1.02.14 - 11th January 2007
===================================
Add dm_saprintf().
Use CFLAGS when linking so mixed sparc builds can supply -m64.
Add dm_tree_use_no_flush_suspend().
Lots of dmevent changes including revised interface.
Export dm_basename().
Cope with a trailing space when comparing tables prior to possible reload.
Fix dmeventd to cope if monitored device disappears.
Version 1.02.13 - 28 Nov 2006
=============================
Update dmsetup man page (setgeometry & message).
Fix dmsetup free after getline with debug.
Suppress encryption key in 'dmsetup table' output unless --showkeys supplied.
Version 1.02.12 - 13 Oct 2006
=============================
Avoid deptree attempting to suspend a device that's already suspended.
Version 1.02.11 - 12 Oct 2006
==============================
Add suspend noflush support.
Add basic dmsetup loop support.
Switch dmsetup to use dm_malloc and dm_free.
Version 1.02.10 - 19 Sep 2006
=============================
Add dm_snprintf(), dm_split_words() and dm_split_lvm_name() to libdevmapper.
Reorder mm bounds_check code to reduce window for a dmeventd race.
Version 1.02.09 - 15 Aug 2006
=============================
Add --table argument to dmsetup for a one-line table.
Abort if errors are found during cmdline option processing.
Add lockfs indicator to debug output.
Version 1.02.08 - 17 July 2006
==============================
Append full patch to check-in emails.
Avoid duplicate dmeventd subdir with 'make distclean'.
Update dmsetup man page.
Add --force to dmsetup remove* to load error target.
dmsetup remove_all also performs mknodes.
Don't suppress identical table reloads if permission changes.
Fix corelog segment line.
Suppress some compiler warnings.
Version 1.02.07 - 11 May 2006
=============================
Add DM_CORELOG flag to dm_tree_node_add_mirror_target().
Avoid a dmeventd compiler warning.
Version 1.02.06 - 10 May 2006
=============================
Move DEFS into configure.h.
Fix leaks in error paths found by coverity.
Remove dmsetup line buffer limitation.
Version 1.02.05 - 19 Apr 2006
=============================
Separate install_include target in makefiles.
Separate out DEFS from CFLAGS.
Support pkg-config.
Check for libsepol.
Version 1.02.04 - 14 Apr 2006
=============================
Bring dmsetup man page up-to-date.
Use name-based device refs if kernel doesn't support device number refs.
Fix memory leak (struct dm_ioctl) when struct dm_task is reused.
If _create_and_load_v4 fails part way through, revert the creation.
dmeventd thread/fifo fixes.
Add file & line to dm_strdup_aux().
Add setgeometry.
Version 1.02.03 - 7 Feb 2006
============================
Add exported functions to set uid, gid and mode.
Rename _log to dm_log and export.
Add dm_tree_skip_lockfs.
Fix dm_strdup debug definition.
Fix hash function to avoid using a negative array offset.
Don't inline _find in hash.c and tidy signed/unsigned etc.
Fix libdevmapper.h #endif.
Fix driver version reported by 'dmsetup version'.
Add sync, nosync and block_on_error mirror log parameters.
Add hweight32.
Fix dmeventd build.
Version 1.02.02 - 2 Dec 2005
============================
dmeventd added.
Export dm_task_update_nodes.
Use names instead of numbers in messages when ioctls fail.
Version 1.02.01 - 23 Nov 2005
=============================
Resume snapshot-origins last.
Drop leading zeros from dm_format_dev.
Suppress attempt to reload identical table.
Additional LVM- prefix matching for transitional period.
Version 1.02.00 - 10 Nov 2005
=============================
Added activation functions to library.
Added return macros.
Also suppress error if device doesn't exist with DM_DEVICE_STATUS.
Export dm_set_selinux_context().
Add dm_driver_version().
Added dependency tree functions to library.
Added hash, bitset, pool, dbg_malloc to library.
Added ls --tree to dmsetup.
Added dmsetup --nolockfs support for suspend/reload.
Version 1.01.05 - 26 Sep 2005
=============================
Resync list.h with LVM2.
Remember increased buffer size and use for subsequent calls.
On 'buffer full' condition, double buffer size and repeat ioctl.
Fix termination of getopt_long() option array.
Report 'buffer full' condition with v4 ioctl as well as with v1.
Version 1.01.04 - 2 Aug 2005
============================
Fix dmsetup ls -j and status --target with empty table.
Version 1.01.03 - 13 Jun 2005
=============================
Use matchpathcon mode parameter.
Fix configure script to re-enable selinux.
Version 1.01.02 - 17 May 2005
=============================
Call dm_lib_exit() and dm_lib_release() automatically now.
Add --target <target_type> filter to dmsetup table/status/ls.
Add --exec <command> to dmsetup ls.
Fix dmsetup getopt_long usage.
Version 1.01.01 - 29 Mar 2005
=============================
Update dmsetup man page.
Drop-in devmap_name replacement.
Add option to compile without ioctl for testing.
Fix DM_LIB_VERSION sed.
Version 1.01.00 - 17 Jan 2005
=============================
Add dm_task_no_open_count() to skip getting open_count.
Version 1.00.21 - 7 Jan 2005
============================
Fix /proc/devices parsing.
Version 1.00.20 - 6 Jan 2005
============================
Attempt to fix /dev/mapper/control transparently if it's wrong.
Configuration-time option for setting uid/gid/mode for /dev/mapper nodes.
Update kernel patches for 2.4.27/2.4.28-pre-4 (includes minor fixes).
Add --noheadings columns option for colon-separated dmsetup output.
Support device referencing by uuid or major/minor.
Warn if kernel data didn't fit in buffer.
Fix a printf.
Version 1.00.19 - 3 July 2004
=============================
More autoconf fixes.
Fix a dmsetup newline.
Fix device number handling for 2.6 kernels.
Version 1.00.18 - 20 Jun 2004
=============================
Fix a uuid free in libdm-iface.
Fix a targets string size calc in driver.
Add -c to dmsetup for column-based output.
Add target message-passing ioctl.
Version 1.00.17 - 17 Apr 2004
=============================
configure --with-owner= --with-group= to avoid -o and -g args to 'install'
Fix library selinux linking.
Version 1.00.16 - 16 Apr 2004
=============================
Ignore error setting selinux file context if fs doesn't support it.
Version 1.00.15 - 7 Apr 2004
============================
Fix status overflow check in kernel patches.
Version 1.00.14 - 6 Apr 2004
============================
Fix static selinux build.
Version 1.00.13 - 6 Apr 2004
============================
Add some basic selinux support.
Version 1.00.12 - 6 Apr 2004
============================
Fix dmsetup.static install.
Version 1.00.11 - 5 Apr 2004
============================
configure --enable-static_link does static build in addition to dynamic.
Moved Makefile library targets definition into template.
Version 1.00.10 - 2 Apr 2004
============================
Fix DESTDIR handling.
Static build installs to dmsetup.static.
Basic support for internationalisation.
Minor Makefile tidy-ups/fixes.
Version 1.00.09 - 31 Mar 2004
=============================
Update copyright notices to Red Hat.
Move full mknodes functionality from dmsetup into libdevmapper.
Avoid sscanf %as for uClibc compatibility.
Cope if DM_LIST_VERSIONS is not defined.
Add DM_LIST_VERSIONS functionality to kernel patches.
Generate new kernel patches for 2.4.26-rc1.
Version 1.00.08 - 27 Feb 2004
=============================
Added 'dmsetup targets'.
Added event_nr support to 'dmsetup wait'.
Updated dmsetup man page.
Allow logging function to be reset to use internal one.
Bring log macros in line with LVM2 ones.
Added 'make install_static_lib' which installs libdevmapper.a.
Made configure/makefiles closer to LVM2 versions.
Fixed DESTDIR for make install/install_static_lib.
Updated README/INSTALL to reflect move to sources.redhat.com.
Updated autoconf files to 2003-06-17.

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,507 @@
#!/bin/sh
# install - install a program, script, or datafile
scriptversion=2006-10-14.15
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# `make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.
nl='
'
IFS=" "" $nl"
# set DOITPROG to echo to test this script
# Don't use :- since 4.3BSD and earlier shells don't like it.
doit="${DOITPROG-}"
if test -z "$doit"; then
doit_exec=exec
else
doit_exec=$doit
fi
# Put in absolute file names if you don't have them in your path;
# or use environment vars.
mvprog="${MVPROG-mv}"
cpprog="${CPPROG-cp}"
chmodprog="${CHMODPROG-chmod}"
chownprog="${CHOWNPROG-chown}"
chgrpprog="${CHGRPPROG-chgrp}"
stripprog="${STRIPPROG-strip}"
rmprog="${RMPROG-rm}"
mkdirprog="${MKDIRPROG-mkdir}"
posix_glob=
posix_mkdir=
# Desired mode of installed file.
mode=0755
chmodcmd=$chmodprog
chowncmd=
chgrpcmd=
stripcmd=
rmcmd="$rmprog -f"
mvcmd="$mvprog"
src=
dst=
dir_arg=
dstarg=
no_target_directory=
usage="Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
-c (ignored)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-s $stripprog installed files.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
--help display this help and exit.
--version display version info and exit.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG
"
while test $# -ne 0; do
case $1 in
-c) shift
continue;;
-d) dir_arg=true
shift
continue;;
-g) chgrpcmd="$chgrpprog $2"
shift
shift
continue;;
--help) echo "$usage"; exit $?;;
-m) mode=$2
shift
shift
case $mode in
*' '* | *'	'* | *'
'* | *'*'* | *'?'* | *'['*)
echo "$0: invalid mode: $mode" >&2
exit 1;;
esac
continue;;
-o) chowncmd="$chownprog $2"
shift
shift
continue;;
-s) stripcmd=$stripprog
shift
continue;;
-t) dstarg=$2
shift
shift
continue;;
-T) no_target_directory=true
shift
continue;;
--version) echo "$0 $scriptversion"; exit $?;;
--) shift
break;;
-*) echo "$0: invalid option: $1" >&2
exit 1;;
*) break;;
esac
done
if test $# -ne 0 && test -z "$dir_arg$dstarg"; then
# When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dstarg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dstarg"
shift # fnord
fi
shift # arg
dstarg=$arg
done
fi
if test $# -eq 0; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call `install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
if test -z "$dir_arg"; then
trap '(exit $?); exit' 1 2 13 15
# Set umask so as not to create temps with too-generous modes.
# However, 'strip' requires both read and write access to temps.
case $mode in
# Optimize common cases.
*644) cp_umask=133;;
*755) cp_umask=22;;
*[0-7])
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw='% 200'
fi
cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
*)
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw=,u+rw
fi
cp_umask=$mode$u_plus_rw;;
esac
fi
for src
do
# Protect names starting with `-'.
case $src in
-*) src=./$src ;;
esac
if test -n "$dir_arg"; then
dst=$src
dstdir=$dst
test -d "$dstdir"
dstdir_status=$?
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dstarg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dstarg
# Protect names starting with `-'.
case $dst in
-*) dst=./$dst ;;
esac
# If destination is a directory, append the input filename; won't work
# if double slashes aren't ignored.
if test -d "$dst"; then
if test -n "$no_target_directory"; then
echo "$0: $dstarg: Is a directory" >&2
exit 1
fi
dstdir=$dst
dst=$dstdir/`basename "$src"`
dstdir_status=0
else
# Prefer dirname, but fall back on a substitute if dirname fails.
dstdir=`
(dirname "$dst") 2>/dev/null ||
expr X"$dst" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
X"$dst" : 'X\(//\)[^/]' \| \
X"$dst" : 'X\(//\)$' \| \
X"$dst" : 'X\(/\)' \| . 2>/dev/null ||
echo X"$dst" |
sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
s//\1/
q
}
/^X\(\/\/\)[^/].*/{
s//\1/
q
}
/^X\(\/\/\)$/{
s//\1/
q
}
/^X\(\/\).*/{
s//\1/
q
}
s/.*/./; q'
`
test -d "$dstdir"
dstdir_status=$?
fi
fi
obsolete_mkdir_used=false
if test $dstdir_status != 0; then
case $posix_mkdir in
'')
# Create intermediate dirs using mode 755 as modified by the umask.
# This is like FreeBSD 'install' as of 1997-10-28.
umask=`umask`
case $stripcmd.$umask in
# Optimize common cases.
*[2367][2367]) mkdir_umask=$umask;;
.*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;;
*[0-7])
mkdir_umask=`expr $umask + 22 \
- $umask % 100 % 40 + $umask % 20 \
- $umask % 10 % 4 + $umask % 2
`;;
*) mkdir_umask=$umask,go-w;;
esac
# With -d, create the new directory with the user-specified mode.
# Otherwise, rely on $mkdir_umask.
if test -n "$dir_arg"; then
mkdir_mode=-m$mode
else
mkdir_mode=
fi
posix_mkdir=false
case $umask in
*[123567][0-7][0-7])
# POSIX mkdir -p sets u+wx bits regardless of umask, which
# is incompatible with FreeBSD 'install' when (umask & 300) != 0.
;;
*)
tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
trap 'ret=$?; rmdir "$tmpdir/d" "$tmpdir" 2>/dev/null; exit $ret' 0
if (umask $mkdir_umask &&
exec $mkdirprog $mkdir_mode -p -- "$tmpdir/d") >/dev/null 2>&1
then
if test -z "$dir_arg" || {
# Check for POSIX incompatibilities with -m.
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
# other-writeable bit of parent directory when it shouldn't.
# FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
ls_ld_tmpdir=`ls -ld "$tmpdir"`
case $ls_ld_tmpdir in
d????-?r-*) different_mode=700;;
d????-?--*) different_mode=755;;
*) false;;
esac &&
$mkdirprog -m$different_mode -p -- "$tmpdir" && {
ls_ld_tmpdir_1=`ls -ld "$tmpdir"`
test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
}
}
then posix_mkdir=:
fi
rmdir "$tmpdir/d" "$tmpdir"
else
# Remove any dirs left behind by ancient mkdir implementations.
rmdir ./$mkdir_mode ./-p ./-- 2>/dev/null
fi
trap '' 0;;
esac;;
esac
if
$posix_mkdir && (
umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
)
then :
else
# The umask is ridiculous, or mkdir does not conform to POSIX,
# or it failed possibly due to a race condition. Create the
# directory the slow way, step by step, checking for races as we go.
case $dstdir in
/*) prefix=/ ;;
-*) prefix=./ ;;
*) prefix= ;;
esac
case $posix_glob in
'')
if (set -f) 2>/dev/null; then
posix_glob=true
else
posix_glob=false
fi ;;
esac
oIFS=$IFS
IFS=/
$posix_glob && set -f
set fnord $dstdir
shift
$posix_glob && set +f
IFS=$oIFS
prefixes=
for d
do
test -z "$d" && continue
prefix=$prefix$d
if test -d "$prefix"; then
prefixes=
else
if $posix_mkdir; then
(umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
# Don't fail if two instances are running concurrently.
test -d "$prefix" || exit 1
else
case $prefix in
*\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
*) qprefix=$prefix;;
esac
prefixes="$prefixes '$qprefix'"
fi
fi
prefix=$prefix/
done
if test -n "$prefixes"; then
# Don't fail if two instances are running concurrently.
(umask $mkdir_umask &&
eval "\$doit_exec \$mkdirprog $prefixes") ||
test -d "$dstdir" || exit 1
obsolete_mkdir_used=true
fi
fi
fi
if test -n "$dir_arg"; then
{ test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
{ test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
else
# Make a couple of temp file names in the proper directory.
dsttmp=$dstdir/_inst.$$_
rmtmp=$dstdir/_rm.$$_
# Trap to clean up those temp files at exit.
trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
# Copy the file name to the temp name.
(umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } \
&& { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } \
&& { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } \
&& { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
# Now rename the file to the real destination.
{ $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null \
|| {
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
if test -f "$dst"; then
$doit $rmcmd -f "$dst" 2>/dev/null \
|| { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null \
&& { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; }; }\
|| {
echo "$0: cannot unlink or rename $dst" >&2
(exit 1); exit 1
}
else
:
fi
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dst"
}
} || exit 1
trap '' 0
fi
done
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:

10229
external/gpl2/libdevmapper/dist/configure vendored Executable file

File diff suppressed because it is too large


@@ -0,0 +1,538 @@
###############################################################################
## Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
## Copyright (C) 2004-2008 Red Hat, Inc. All rights reserved.
##
## This file is part of the device-mapper userspace tools.
##
## This copyrighted material is made available to anyone wishing to use,
## modify, copy, or redistribute it subject to the terms and conditions
## of the GNU General Public License v.2.
##
## You should have received a copy of the GNU General Public License
## along with this program; if not, write to the Free Software Foundation,
## Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
################################################################################
AC_PREREQ(2.57)
################################################################################
dnl -- Process this file with autoconf to produce a configure script.
AC_INIT
AC_CONFIG_SRCDIR([lib/libdevmapper.h])
AC_CONFIG_HEADERS(include/configure.h)
################################################################################
dnl -- Setup the directory where autoconf has auxiliary files
AC_CONFIG_AUX_DIR(autoconf)
################################################################################
dnl -- Get system type
AC_CANONICAL_TARGET([])
case "$host_os" in
linux*)
COPTIMISE_FLAG="-O2"
CLDFLAGS="$CLDFLAGS -Wl,--version-script,.export.sym"
CLDWHOLEARCHIVE="-Wl,-whole-archive"
CLDNOWHOLEARCHIVE="-Wl,-no-whole-archive"
LDDEPS="$LDDEPS .export.sym"
LIB_SUFFIX=so
DEVMAPPER=yes
ODIRECT=yes
DM_IOCTLS=yes
SELINUX=yes
REALTIME=yes
CLUSTER=internal
FSADM=no
;;
darwin*)
CFLAGS="$CFLAGS -no-cpp-precomp -fno-common"
COPTIMISE_FLAG="-O2"
CLDFLAGS="$CLDFLAGS"
CLDWHOLEARCHIVE="-all_load"
CLDNOWHOLEARCHIVE=
LIB_SUFFIX=dylib
DEVMAPPER=yes
ODIRECT=no
DM_IOCTLS=no
SELINUX=no
REALTIME=no
CLUSTER=none
FSADM=no
;;
esac
################################################################################
dnl -- Additional library location
usrlibdir='${prefix}/lib'
################################################################################
dnl -- Check for programs.
AC_PROG_AWK
AC_PROG_CC
dnl probably no longer needed in 2008, but...
AC_PROG_GCC_TRADITIONAL
AC_PROG_INSTALL
AC_PROG_LN_S
AC_PROG_MAKE_SET
AC_PROG_RANLIB
AC_PATH_PROG(CFLOW_CMD, cflow)
AC_PATH_PROG(CSCOPE_CMD, cscope)
################################################################################
dnl -- Check for header files.
AC_HEADER_DIRENT
AC_HEADER_STDC
AC_HEADER_SYS_WAIT
AC_HEADER_TIME
AC_CHECK_HEADERS([ctype.h dirent.h errno.h fcntl.h getopt.h inttypes.h limits.h \
stdarg.h stdio.h stdlib.h string.h sys/ioctl.h sys/param.h sys/stat.h \
sys/types.h unistd.h], , [AC_MSG_ERROR(bailing out)])
AC_CHECK_HEADERS(termios.h sys/statvfs.h)
################################################################################
dnl -- Check for typedefs, structures, and compiler characteristics.
AC_C_CONST
AC_C_INLINE
AC_CHECK_MEMBERS([struct stat.st_rdev])
AC_TYPE_OFF_T
AC_TYPE_PID_T
AC_TYPE_SIGNAL
AC_TYPE_SIZE_T
AC_TYPE_MODE_T
AC_CHECK_MEMBERS([struct stat.st_rdev])
AC_STRUCT_TM
################################################################################
dnl -- Check for functions
AC_CHECK_FUNCS([gethostname getpagesize memset mkdir rmdir munmap setlocale \
strcasecmp strchr strdup strncasecmp strerror strrchr strstr strtol strtoul \
uname], , [AC_MSG_ERROR(bailing out)])
AC_FUNC_ALLOCA
AC_FUNC_CLOSEDIR_VOID
AC_FUNC_FORK
AC_FUNC_LSTAT
AC_FUNC_MALLOC
AC_FUNC_MEMCMP
AC_FUNC_MMAP
AC_FUNC_STAT
AC_FUNC_STRTOD
AC_FUNC_VPRINTF
################################################################################
dnl -- Prefix is /usr by default, the exec_prefix default is setup later
AC_PREFIX_DEFAULT(/usr)
################################################################################
dnl -- Setup the ownership of the files
AC_MSG_CHECKING(file owner)
OWNER="root"
AC_ARG_WITH(user,
[ --with-user=USER Set the owner of installed files [[USER=root]] ],
[ OWNER="$withval" ])
AC_MSG_RESULT($OWNER)
if test x$OWNER != x; then
OWNER="-o $OWNER"
fi
################################################################################
dnl -- Setup the group ownership of the files
AC_MSG_CHECKING(group owner)
GROUP="root"
AC_ARG_WITH(group,
[ --with-group=GROUP Set the group owner of installed files [[GROUP=root]] ],
[ GROUP="$withval" ])
AC_MSG_RESULT($GROUP)
if test x$GROUP != x; then
GROUP="-g $GROUP"
fi
################################################################################
dnl -- Setup device node ownership
AC_MSG_CHECKING(device node uid)
AC_ARG_WITH(device-uid,
[ --with-device-uid=UID Set the owner used for new device nodes [[UID=0]] ],
[ DM_DEVICE_UID="$withval" ], [ DM_DEVICE_UID="0" ] )
AC_MSG_RESULT($DM_DEVICE_UID)
################################################################################
dnl -- Setup device group ownership
AC_MSG_CHECKING(device node gid)
AC_ARG_WITH(device-gid,
[ --with-device-gid=UID Set the group used for new device nodes [[GID=0]] ],
[ DM_DEVICE_GID="$withval" ], [ DM_DEVICE_GID="0" ] )
AC_MSG_RESULT($DM_DEVICE_GID)
################################################################################
dnl -- Setup device mode
AC_MSG_CHECKING(device node mode)
AC_ARG_WITH(device-mode,
[ --with-device-mode=MODE Set the mode used for new device nodes [[MODE=0600]] ],
[ DM_DEVICE_MODE="$withval" ], [ DM_DEVICE_MODE="0600" ] )
AC_MSG_RESULT($DM_DEVICE_MODE)
################################################################################
dnl -- Enable debugging
AC_MSG_CHECKING(whether to enable debugging)
AC_ARG_ENABLE(debug, [ --enable-debug Enable debugging],
DEBUG=$enableval, DEBUG=no)
AC_MSG_RESULT($DEBUG)
dnl -- Normally turn off optimisation for debug builds
if test x$DEBUG = xyes; then
COPTIMISE_FLAG=
else
CSCOPE_CMD=
fi
################################################################################
dnl -- Override optimisation
AC_MSG_CHECKING(for C optimisation flag)
AC_ARG_WITH(optimisation,
[ --with-optimisation=OPT C optimisation flag [[OPT=-O2]] ],
[ COPTIMISE_FLAG="$withval" ])
AC_MSG_RESULT($COPTIMISE_FLAG)
################################################################################
dnl -- Compatibility mode
AC_ARG_ENABLE(compat, [ --enable-compat Enable support for old device-mapper versions],
DM_COMPAT=$enableval, DM_COMPAT=no)
################################################################################
dnl -- Disable ioctl
AC_ARG_ENABLE(ioctl, [ --disable-driver Disable calls to device-mapper in the kernel],
DM_IOCTLS=$enableval)
################################################################################
dnl -- Enable dmeventd
AC_ARG_ENABLE(dmeventd, [ --enable-dmeventd Build the new event daemon],
BUILD_DMEVENTD=$enableval, BUILD_DMEVENTD=no)
################################################################################
dnl -- Enable pkg-config
AC_ARG_ENABLE(pkgconfig, [ --enable-pkgconfig Install pkgconfig support],
PKGCONFIG=$enableval, PKGCONFIG=no)
################################################################################
dnl -- Clear default exec_prefix - install into /sbin rather than /usr/sbin
if [[ "x$exec_prefix" = xNONE -a "x$prefix" = xNONE ]];
then exec_prefix="";
fi;
################################################################################
dnl -- getline included in recent libc
AC_CHECK_LIB(c, getline, AC_DEFINE([HAVE_GETLINE], 1,
[Define to 1 if getline is available.]))
################################################################################
dnl -- canonicalize_file_name included in recent libc
AC_CHECK_LIB(c, canonicalize_file_name,
AC_DEFINE([HAVE_CANONICALIZE_FILE_NAME], 1,
[Define to 1 if canonicalize_file_name is available.]))
################################################################################
dnl -- Enables statically-linked tools
AC_MSG_CHECKING(whether to use static linking)
AC_ARG_ENABLE(static_link,
[ --enable-static_link Use this to link the tools to their libraries
statically. Default is dynamic linking],
STATIC_LINK=$enableval, STATIC_LINK=no)
AC_MSG_RESULT($STATIC_LINK)
################################################################################
dnl -- Disable selinux
AC_MSG_CHECKING(whether to enable selinux support)
AC_ARG_ENABLE(selinux, [ --disable-selinux Disable selinux support],
SELINUX=$enableval)
AC_MSG_RESULT($SELINUX)
################################################################################
dnl -- Check for selinux
if test x$SELINUX = xyes; then
AC_CHECK_LIB(sepol, sepol_check_context, HAVE_SEPOL=yes, HAVE_SEPOL=no)
if test x$HAVE_SEPOL = xyes; then
AC_DEFINE([HAVE_SEPOL], 1,
[Define to 1 if sepol_check_context is available.])
LIBS="-lsepol $LIBS"
fi
AC_CHECK_LIB(selinux, is_selinux_enabled, HAVE_SELINUX=yes, HAVE_SELINUX=no)
if test x$HAVE_SELINUX = xyes; then
AC_DEFINE([HAVE_SELINUX], 1, [Define to 1 to include support for selinux.])
LIBS="-lselinux $LIBS"
else
AC_MSG_WARN(Disabling selinux)
fi
# With --enable-static_link and selinux enabled, linking
# fails on at least Debian unstable due to unsatisfied references
# to pthread_mutex_lock and _unlock. See if we need -lpthread.
if test "$STATIC_LINK-$HAVE_SELINUX" = yes-yes; then
lvm_saved_libs=$LIBS
LIBS="$LIBS -static"
AC_SEARCH_LIBS([pthread_mutex_lock], [pthread],
[test "$ac_cv_search_pthread_mutex_lock" = "none required" ||
LIB_PTHREAD=-lpthread])
LIBS=$lvm_saved_libs
fi
fi
################################################################################
dnl -- Check for getopt
AC_CHECK_HEADERS(getopt.h, AC_DEFINE([HAVE_GETOPTLONG], 1, [Define to 1 if getopt_long is available.]))
################################################################################
dnl -- Check for readline (Shamelessly copied from parted 1.4.17)
if test x$READLINE = xyes; then
AC_CHECK_LIB(readline, readline, ,
AC_MSG_ERROR(
GNU Readline could not be found which is required for the
--enable-readline option (which is enabled by default). Either disable readline
support with --disable-readline or download and install readline from:
ftp.gnu.org/gnu/readline
Note: if you are using precompiled packages you will also need the development
package as well (which may be called readline-devel or something similar).
)
)
AC_CHECK_FUNC(rl_completion_matches, AC_DEFINE([HAVE_RL_COMPLETION_MATCHES], 1, [Define to 1 if rl_completion_matches() is available.]))
fi
################################################################################
dnl -- Internationalisation stuff
AC_MSG_CHECKING(whether to enable internationalisation)
AC_ARG_ENABLE(nls, [ --enable-nls Enable Native Language Support],
INTL=$enableval, INTL=no)
AC_MSG_RESULT($INTL)
if test x$INTL = xyes; then
INTL_PACKAGE="device-mapper"
AC_PATH_PROG(MSGFMT, msgfmt)
if [[ "x$MSGFMT" == x ]];
then AC_MSG_ERROR(
msgfmt not found in path $PATH
)
fi;
AC_ARG_WITH(localedir,
[ --with-localedir=DIR Translation files in DIR [[PREFIX/share/locale]] ],
[ LOCALEDIR="$withval" ],
[ LOCALEDIR='${prefix}/share/locale' ])
fi
################################################################################
dnl -- Where the linux src tree is
AC_MSG_CHECKING(for kernel directory)
AC_ARG_WITH(kerneldir,
[ --with-kernel-dir=DIR linux kernel source in DIR []],
[ kerneldir="$withval" ] )
if test "${with_kernel_dir+set}" = set; then
kerneldir="$with_kernel_dir"
fi
if test "${with_kernel-dir+set}" = set; then
kerneldir="$with_kerneldir"
fi
if test "${with_kernel-src+set}" = set; then
kerneldir="$with_kernel-src"
fi
if test "${with_kernel_src+set}" = set; then
kerneldir="$with_kernel_src"
fi
if test "${with_kernel+set}" = set; then
kerneldir="$with_kernel"
fi
AC_MSG_RESULT($kerneldir)
if test "x${kerneldir}" = x; then
missingkernel=yes
else
test -d "${kerneldir}" || { AC_MSG_WARN(kernel dir $kerneldir not found); missingkernel=yes ; }
fi
################################################################################
dnl -- Kernel version string
AC_MSG_CHECKING(for kernel version)
AC_ARG_WITH(kernel-version,
[ --with-kernel-version=VERSION linux kernel version] )
if test "${with_kernel-version+set}" = set; then
kernelvsn="$with_kernel-version"
fi
if test "${with_kernelvsn+set}" = set; then
kernelvsn="$with_kernelvsn"
fi
if test "${with_kernel_version+set}" = set; then
kernelvsn="$with_kernel_version"
fi
if test "${with_kernelversion+set}" = set; then
kernelvsn="$with_kernelversion"
fi
if test "x${kernelvsn}" = x; then
if test "x${missingkernel}" = "x"; then
kernelvsn=`awk -F ' = ' '/^VERSION/ {v=$2} /^PATCH/ {p=$2} /^SUBLEVEL/ {s=$2} /^EXTRAVERSION/ {e=$2} END {printf "%d.%d.%d%s",v,p,s,e}' $kerneldir/Makefile`
else
kernelvsn="UNKNOWN"
fi
fi
AC_MSG_RESULT($kernelvsn)
################################################################################
dnl -- Temporary directory for kernel diffs
AC_ARG_WITH(tmp-dir,
[ --with-tmp-dir=DIR temp dir to make kernel patches [[/tmp/kerndiff]] ],
[ tmpdir="$withval" ],
[ tmpdir=/tmp/kerndiff ])
if test "${with_tmp_dir+set}" = set; then
tmpdir="$with_tmp_dir"
fi
if test "${with_tmpdir+set}" = set; then
tmpdir="$with_tmpdir"
fi
################################################################################
dnl -- which kernel interface to use (ioctl only)
AC_MSG_CHECKING(for kernel interface choice)
AC_ARG_WITH(interface,
[ --with-interface=IFACE Choose kernel interface (ioctl) [[ioctl]] ],
[ interface="$withval" ],
[ interface=ioctl ])
if [[ "x$interface" != xioctl ]];
then
AC_MSG_ERROR(--with-interface=ioctl required. fs no longer supported.)
fi
AC_MSG_RESULT($interface)
DM_LIB_VERSION="\"`cat VERSION 2>/dev/null || echo Unknown`\""
AC_DEFINE_UNQUOTED(DM_LIB_VERSION, $DM_LIB_VERSION, [Library version])
################################################################################
dnl -- dmeventd pidfile and executable path
AH_TEMPLATE(DMEVENTD_PIDFILE, [Path to dmeventd pidfile.])
if test "$BUILD_DMEVENTD" = yes; then
AC_ARG_WITH(dmeventd-pidfile,
[ --with-dmeventd-pidfile=PATH dmeventd pidfile [[/var/run/dmeventd.pid]] ],
[ AC_DEFINE_UNQUOTED(DMEVENTD_PIDFILE,"$withval") ],
[ AC_DEFINE_UNQUOTED(DMEVENTD_PIDFILE,"/var/run/dmeventd.pid") ])
fi
AH_TEMPLATE(DMEVENTD_PATH, [Path to dmeventd binary.])
if test "$BUILD_DMEVENTD" = yes; then
dmeventd_prefix="$exec_prefix"
if test "x$dmeventd_prefix" = "xNONE"; then
dmeventd_prefix="$prefix"
fi
if test "x$dmeventd_prefix" = "xNONE"; then
dmeventd_prefix=""
fi
AC_ARG_WITH(dmeventd-path,
[ --with-dmeventd-path=PATH dmeventd path [[${exec_prefix}/sbin/dmeventd]] ],
[ AC_DEFINE_UNQUOTED(DMEVENTD_PATH,"$withval") ],
[ AC_DEFINE_UNQUOTED(DMEVENTD_PATH,"$dmeventd_prefix/sbin/dmeventd") ])
fi
################################################################################
AC_SUBST(BUILD_DMEVENTD)
AC_SUBST(CFLAGS)
AC_SUBST(CFLOW_CMD)
AC_SUBST(CLDFLAGS)
AC_SUBST(CLDNOWHOLEARCHIVE)
AC_SUBST(CLDWHOLEARCHIVE)
AC_SUBST(CLUSTER)
AC_SUBST(CLVMD)
AC_SUBST(CMDLIB)
AC_SUBST(COPTIMISE_FLAG)
AC_SUBST(CSCOPE_CMD)
AC_SUBST(DEBUG)
AC_SUBST(DEVMAPPER)
AC_SUBST(DMDIR)
AC_SUBST(DM_COMPAT)
AC_SUBST(DM_DEVICE_GID)
AC_SUBST(DM_DEVICE_MODE)
AC_SUBST(DM_DEVICE_UID)
AC_SUBST(DM_IOCTLS)
AC_SUBST(DM_LIB_VERSION)
AC_SUBST(FSADM)
AC_SUBST(GROUP)
AC_SUBST(HAVE_LIBDL)
AC_SUBST(HAVE_REALTIME)
AC_SUBST(HAVE_SELINUX)
AC_SUBST(INTL)
AC_SUBST(INTL_PACKAGE)
AC_SUBST(JOBS)
AC_SUBST(LDDEPS)
AC_SUBST(LIBS)
AC_SUBST(LIB_SUFFIX)
AC_SUBST(LOCALEDIR)
AC_SUBST(LVM1)
AC_SUBST(LVM1_FALLBACK)
AC_SUBST(LVM_CONF_DIR)
AC_SUBST(LVM_VERSION)
AC_SUBST(MIRRORS)
AC_SUBST(MSGFMT)
AC_SUBST(OWNER)
AC_SUBST(PKGCONFIG)
AC_SUBST(POOL)
AC_SUBST(SNAPSHOTS)
AC_SUBST(STATICDIR)
AC_SUBST(STATIC_LINK)
AC_SUBST([LIB_PTHREAD])
AC_SUBST(interface)
AC_SUBST(kerneldir)
AC_SUBST(missingkernel)
AC_SUBST(kernelvsn)
AC_SUBST(tmpdir)
AC_SUBST(usrlibdir)
################################################################################
dnl -- First and last lines should not contain files to generate in order to
dnl -- keep utility scripts running properly
AC_CONFIG_FILES([\
Makefile \
make.tmpl \
include/Makefile \
dmsetup/Makefile \
lib/Makefile \
lib/libdevmapper.pc \
dmeventd/Makefile \
dmeventd/libdevmapper-event.pc \
kernel/Makefile \
man/Makefile \
po/Makefile \
])
AC_OUTPUT
if test "x${kerneldir}" != "x" ; then
if test -d "${kerneldir}"; then
if test ! -f "${kerneldir}/include/linux/dm-ioctl.h"; then
AC_MSG_WARN(Your kernel source in ${kerneldir} needs patching)
if test "x${kernelvsn}" != "xUNKNOWN"; then
AC_MSG_WARN([For supported kernels, try 'make apply-patches' next to do this, or apply the
device-mapper patches by hand.
])
fi
fi
else
AC_MSG_WARN(kernel directory $kerneldir not found)
fi
if test "x${kernelvsn}" = "xUNKNOWN"; then
AC_MSG_WARN([kernel version not detected: 'make apply-patches' won't work.
If your kernel already contains device-mapper it may be OK,
otherwise you'll need to apply the device-mapper patches by hand.
])
fi
fi


@@ -0,0 +1,13 @@
Sure. It's basically code from dm-linear.c and dm-stripe.c (you can
reverse chunks of a given size). The target syntax is "<chunk size>
<device> <first sector>".
I added some lines so that it can be compiled as a module outside of the
kernel (the reference counting is working fine).
dm-reverse.c is attached.
--
Christophe Saout <christophe@saout.de>
Please avoid sending me Word or PowerPoint attachments.
See http://www.fsf.org/philosophy/no-word-attachments.html
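
A mapping that uses this target can be driven through the libdevmapper task interface shipped in this import; the table parameters just follow the "<chunk size> <device> <first sector>" syntax quoted above. The sketch below is illustrative only: the mapping name "reversed", the backing device /dev/sdb1 and the 409600-sector length are made-up values (the length must be a multiple of the chunk size).

/*
 * Illustrative sketch: create a "reverse" mapping via libdevmapper.
 * Name, device path and sizes are placeholders, not taken from the
 * original mail.
 */
#include <stdio.h>
#include <libdevmapper.h>

int main(void)
{
	struct dm_task *dmt;
	int r = 1;

	if (!(dmt = dm_task_create(DM_DEVICE_CREATE)))
		return 1;

	/* Table line: <start> <length> reverse <chunk size> <device> <first sector> */
	if (!dm_task_set_name(dmt, "reversed") ||
	    !dm_task_add_target(dmt, 0, 409600, "reverse", "8 /dev/sdb1 0") ||
	    !dm_task_run(dmt)) {
		fprintf(stderr, "creating the reverse mapping failed\n");
	} else {
		dm_task_update_nodes();	/* create the /dev/mapper/reversed node */
		r = 0;
	}

	dm_task_destroy(dmt);
	return r;
}

The equivalent dmsetup invocation would feed the same table line, "0 409600 reverse 8 /dev/sdb1 0", to 'dmsetup create reversed' on stdin.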


@@ -0,0 +1,173 @@
/*
* Copyright (C) 2001 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include "dm.h"
#include <linux/module.h>
#include <linux/init.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/slab.h>
MODULE_AUTHOR("Christophe Saout <christophe@saout.de>");
MODULE_DESCRIPTION(DM_NAME " target for reverse block mapping");
MODULE_LICENSE("GPL");
struct reverse_c {
uint32_t chunk_shift;
sector_t chunk_mask;
sector_t chunk_last;
struct dm_dev *dev;
sector_t start;
};
/*
* Construct a reverse mapping.
* <chunk size (2^^n)> <dev_path> <offset>
*/
static int reverse_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
struct reverse_c *rc;
uint32_t chunk_size;
sector_t chunk_count;
char *end;
if (argc != 3) {
ti->error = "dm-reverse: Not enough arguments";
return -EINVAL;
}
chunk_size = simple_strtoul(argv[0], &end, 10);
if (*end) {
ti->error = "dm-reverse: Invalid chunk_size";
return -EINVAL;
}
/*
* chunk_size is a power of two
*/
if (!chunk_size || (chunk_size & (chunk_size - 1))) {
ti->error = "dm-reverse: Invalid chunk size";
return -EINVAL;
}
/*
* How do we handle a small last <-> first chunk?
* We simply don't... so
* length has to be a multiple of the chunk size
*/
chunk_count = ti->len;
if (sector_div(chunk_count, chunk_size) > 0) {
ti->error = "dm-reverse: Size must be a multiple of the chunk size";
return -EINVAL;
}
rc = kmalloc(sizeof(*rc), GFP_KERNEL);
if (rc == NULL) {
ti->error = "dm-reverse: Cannot allocate reverse context ";
return -ENOMEM;
}
if (sscanf(argv[2], SECTOR_FORMAT, &rc->start) != 1) {
ti->error = "dm-reverse: Invalid device sector";
goto bad;
}
if (dm_get_device(ti, argv[1], rc->start, ti->len,
dm_table_get_mode(ti->table), &rc->dev)) {
ti->error = "dm-reverse: Device lookup failed";
goto bad;
}
ti->split_io = chunk_size;
rc->chunk_last = chunk_count - 1;
rc->chunk_mask = ((sector_t) chunk_size) - 1;
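/* chunk_shift ends up as log2(chunk_size); chunk_size was verified above to be a power of two. */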
for (rc->chunk_shift = 0; chunk_size; rc->chunk_shift++)
chunk_size >>= 1;
rc->chunk_shift--;
ti->private = rc;
return 0;
bad:
kfree(rc);
return -EINVAL;
}
static void reverse_dtr(struct dm_target *ti)
{
struct reverse_c *rc = (struct reverse_c *) ti->private;
dm_put_device(ti, rc->dev);
kfree(rc);
}
static int reverse_map(struct dm_target *ti, struct bio *bio)
{
struct reverse_c *rc = (struct reverse_c *) ti->private;
sector_t offset = bio->bi_sector - ti->begin;
uint32_t chunk = (uint32_t) (offset >> rc->chunk_shift);
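/* Mirror the chunk index (first <-> last); the offset within the chunk is kept unchanged. */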
chunk = rc->chunk_last - chunk;
bio->bi_bdev = rc->dev->bdev;
bio->bi_sector = rc->start + (chunk << rc->chunk_shift)
+ (offset & rc->chunk_mask);
return 1;
}
static int reverse_status(struct dm_target *ti,
status_type_t type, char *result, unsigned int maxlen)
{
struct reverse_c *rc = (struct reverse_c *) ti->private;
char b[BDEVNAME_SIZE];
switch (type) {
case STATUSTYPE_INFO:
result[0] = '\0';
break;
case STATUSTYPE_TABLE:
snprintf(result, maxlen, SECTOR_FORMAT " %s " SECTOR_FORMAT,
rc->chunk_mask + 1, bdevname(rc->dev->bdev, b), rc->start);
break;
}
return 0;
}
static struct target_type reverse_target = {
.name = "reverse",
.module = THIS_MODULE,
.ctr = reverse_ctr,
.dtr = reverse_dtr,
.map = reverse_map,
.status = reverse_status,
};
int __init dm_reverse_init(void)
{
int r;
r = dm_register_target(&reverse_target);
if (r < 0)
DMWARN("reverse target registration failed");
return r;
}
void dm_reverse_exit(void)
{
if (dm_unregister_target(&reverse_target))
DMWARN("reverse target unregistration failed");
return;
}
module_init(dm_reverse_init)
module_exit(dm_reverse_exit)


@@ -0,0 +1,86 @@
devmapper (0.96.07-1) unstable; urgency=low
* New upstream version. (Closes: #171671)
* Char signedness assumption fixed. (Closes: #163825)
* Remove types.h inclusion fix from 2.4.19 kernel patch; committed upstream.
* debian/copyright fix to appease lintian.
-- Andres Salomon <dilinger@mp3revolution.net> Mon, 9 Dec 2002 02:16:28 -0400
devmapper (0.96.04-2) unstable; urgency=low
* Make the new version of dh-kpatches happy. (Closes: #160927)
* Make header-update makefile rule consistent w/ my other packages, and
update headers for good measure.
-- Andres Salomon <dilinger@mp3revolution.net> Sat, 21 Sep 2002 17:29:07 -0400
devmapper (0.96.04-1) unstable; urgency=low
* New upstream release (Beta5).
* Update kernel headers to 2.4.19.
* Update kpatch to 2.4.19.
-- Andres Salomon <dilinger@mp3revolution.net> Thu, 15 Aug 2002 00:26:20 -0400
devmapper (0.95.07-3) unstable; urgency=low
* Move libdevmapper0 libs to /lib. (Closes: #146237)
* Remove dependency on fileutils, to shut lintian up.
-- Andres Salomon <dilinger@mp3revolution.net> Sun, 12 May 2002 03:20:54 -0500
devmapper (0.95.07-2) unstable; urgency=low
* Fix link error on hppa, due to lack of -fPIC. (Closes: #144792)
* Fix postinst error in libdevmapper0. (Closes: #144889)
* Updated depends (removed patch, bzip2, added modutils, fileutils).
-- Andres Salomon <dilinger@mp3revolution.net> Sun, 28 Apr 2002 14:26:59 -0500
devmapper (0.95.07-1) unstable; urgency=low
* New release (Beta2).
* Remove 2.4.16 and 2.4.17 patches from kpatches.
* Reworked the build system to supply its own headers, instead of
depending upon kernel-source packages. Makes building much faster.
* Added scripts/ directory, and scripts to keep kernel headers up-to-date.
-- Andres Salomon <dilinger@mp3revolution.net> Thu, 25 Apr 2002 01:01:41 -0500
devmapper (0.95.06-1) unstable; urgency=low
* New release.
-- Andres Salomon <dilinger@mp3revolution.net> Wed, 3 Apr 2002 00:02:12 -0500
devmapper (0.95.05-1) unstable; urgency=low
* New release; ext3 support and 2.4.18 patches now included.
* Drop the cvs<date> suffix from version.
-- Andres Salomon <dilinger@mp3revolution.net> Fri, 15 Mar 2002 01:03:25 -0500
devmapper (0.95.03cvs20020306-1) unstable; urgency=low
* New Release.
* Convert from debian native package.
-- Andres Salomon <dilinger@mp3revolution.net> Wed, 6 Mar 2002 00:29:39 -0500
devmapper (0.95.02cvs20020304) unstable; urgency=low
* CVS update.
* Renamed libdevmapper package to libdevmapper0.
* Added postinst script for creating devmapper control device.
-- Andres Salomon <dilinger@mp3revolution.net> Mon, 4 Mar 2002 02:23:48 -0500
devmapper (0.95.02cvs20020218) unstable; urgency=low
* Initial Release.
* device-mapper broken up into libdevmapper1, libdevmapper-dev,
dmsetup, and kernel-patch-device-mapper.
-- Andres Salomon <dilinger@mp3revolution.net> Mon, 18 Feb 2002 15:46:08 -0500


@@ -0,0 +1,64 @@
Source: devmapper
Section: admin
Priority: optional
Maintainer: Andres Salomon <dilinger@mp3revolution.net>
Build-Depends: debhelper (>> 3.0.0), dh-kpatches
Standards-Version: 3.5.2
Package: kernel-patch-device-mapper
Section: devel
Architecture: any
Depends: ${kpatch:Depends}
Suggests: libdevmapper0, kernel-source-2.4.19
Description: The Linux Kernel Device Mapper kernel patch
The Linux Kernel Device Mapper is the LVM (Linux Logical Volume Management)
Team's implementation of a minimalistic kernel-space driver that handles
volume management, while keeping knowledge of the underlying device layout
in kernel space. This makes it useful for not only LVM, but EVMS, software
raid, and other drivers that create "virtual" block devices.
.
This package contains the kernel patch for the device-mapper.
Package: libdevmapper-dev
Section: devel
Architecture: any
Depends: libdevmapper0 (= ${Source-Version}), libc6-dev
Description: The Linux Kernel Device Mapper header files
The Linux Kernel Device Mapper is the LVM (Linux Logical Volume Management)
Team's implementation of a minimalistic kernel-space driver that handles
volume management, while keeping knowledge of the underlying device layout
in kernel space. This makes it useful for not only LVM, but EVMS, software
raid, and other drivers that create "virtual" block devices.
.
This package contains the (user-space) header files for accessing the
device-mapper; it allows usage of the device-mapper through a clean,
consistent interface (as opposed to through kernel ioctls).
Package: libdevmapper0
Section: libs
Architecture: any
Depends: ${shlibs:Depends}, modutils
Provides: libdevmapper
Description: The Linux Kernel Device Mapper userspace library
The Linux Kernel Device Mapper is the LVM (Linux Logical Volume Management)
Team's implementation of a minimalistic kernel-space driver that handles
volume management, while keeping knowledge of the underlying device layout
in kernel space. This makes it useful for not only LVM, but EVMS, software
raid, and other drivers that create "virtual" block devices.
.
This package contains the (user-space) shared library for accessing the
device-mapper; it allows usage of the device-mapper through a clean,
consistent interface (as opposed to through kernel ioctls).
Package: dmsetup
Section: admin
Architecture: any
Depends: ${shlibs:Depends}
Description: The Linux Kernel Device Mapper userspace utility
The Linux Kernel Device Mapper is the LVM (Linux Logical Volume Management)
Team's implementation of a minimalistic kernel-space driver that handles
volume management, while keeping knowledge of the underlying device layout
in kernel space. This makes it useful for not only LVM, but EVMS, software
raid, and other drivers that create "virtual" block devices.
.
This package contains a utility for modifying device mappings.


@@ -0,0 +1,25 @@
This package was debianized by Andres Salomon <dilinger@mp3revolution.net> on
Mon, 18 Feb 2002 15:46:08 -0500.
It was downloaded from http://www.sistina.com/products_lvm.htm
Upstream Author: LVM Development Team
Copyright (c) 2001-2002 LVM Development Team
device-mapper is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
device-mapper is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
On Debian systems, the full text of the GPL can be found in
/usr/share/common-licenses/GPL


@@ -0,0 +1,2 @@
usr/lib
usr/include


@@ -0,0 +1,2 @@
usr/include/*
usr/lib/lib*.a


@@ -0,0 +1,2 @@
usr/sbin
usr/share/man/man8


@@ -0,0 +1,2 @@
INTRO
README


@@ -0,0 +1 @@
usr/sbin/dmsetup


@@ -0,0 +1 @@
debian/tmp/usr/share/man/man8/dmsetup.8


@@ -0,0 +1,63 @@
/*
* Copyright (C) 2001 Sistina Software (UK) Limited.
*
* This file is released under the LGPL.
*/
#ifndef _LINUX_DEVICE_MAPPER_H
#define _LINUX_DEVICE_MAPPER_H
#define DM_DIR "mapper" /* Slashes not supported */
#define DM_MAX_TYPE_NAME 16
#define DM_NAME_LEN 128
#define DM_UUID_LEN 129
#ifdef __KERNEL__
struct dm_table;
struct dm_dev;
typedef unsigned long offset_t;
typedef enum { STATUSTYPE_INFO, STATUSTYPE_TABLE } status_type_t;
/*
* Prototypes for functions for a target
*/
typedef int (*dm_ctr_fn) (struct dm_table *t, offset_t b, offset_t l,
int argc, char **argv, void **context);
typedef void (*dm_dtr_fn) (struct dm_table *t, void *c);
typedef int (*dm_map_fn) (struct buffer_head *bh, int rw, void *context);
typedef int (*dm_err_fn) (struct buffer_head *bh, int rw, void *context);
typedef int (*dm_status_fn) (status_type_t status_type, char *result,
int maxlen, void *context);
void dm_error(const char *message);
/*
* Constructors should call these functions to ensure destination devices
* are opened/closed correctly
*/
int dm_table_get_device(struct dm_table *t, const char *path,
offset_t start, offset_t len,
int mode, struct dm_dev **result);
void dm_table_put_device(struct dm_table *table, struct dm_dev *d);
/*
* Information about a target type
*/
struct target_type {
const char *name;
struct module *module;
dm_ctr_fn ctr;
dm_dtr_fn dtr;
dm_map_fn map;
dm_err_fn err;
dm_status_fn status;
};
int dm_register_target(struct target_type *t);
int dm_unregister_target(struct target_type *t);
#endif /* __KERNEL__ */
#endif /* _LINUX_DEVICE_MAPPER_H */


@@ -0,0 +1,145 @@
/*
* Copyright (C) 2001 Sistina Software (UK) Limited.
*
* This file is released under the LGPL.
*/
#ifndef _LINUX_DM_IOCTL_H
#define _LINUX_DM_IOCTL_H
#include "device-mapper.h"
#include <linux/types.h>
/*
* Implements a traditional ioctl interface to the device mapper.
*/
/*
* All ioctl arguments consist of a single chunk of memory, with
* this structure at the start. If a uuid is specified any
* lookup (eg. for a DM_INFO) will be done on that, *not* the
* name.
*/
struct dm_ioctl {
/*
* The version number is made up of three parts:
* major - no backward or forward compatibility,
* minor - only backwards compatible,
* patch - both backwards and forwards compatible.
*
* All clients of the ioctl interface should fill in the
* version number of the interface that they were
* compiled with.
*
* All recognised ioctl commands (ie. those that don't
* return -ENOTTY) fill out this field, even if the
* command failed.
*/
uint32_t version[3]; /* in/out */
uint32_t data_size; /* total size of data passed in
* including this struct */
uint32_t data_start; /* offset to start of data
* relative to start of this struct */
uint32_t target_count; /* in/out */
uint32_t open_count; /* out */
uint32_t flags; /* in/out */
__kernel_dev_t dev; /* in/out */
char name[DM_NAME_LEN]; /* device name */
char uuid[DM_UUID_LEN]; /* unique identifier for
* the block device */
};
/*
* Used to specify tables. These structures appear after the
* dm_ioctl.
*/
struct dm_target_spec {
int32_t status; /* used when reading from kernel only */
uint64_t sector_start;
uint32_t length;
/*
* Offset in bytes (from the start of this struct) to
* next target_spec.
*/
uint32_t next;
char target_type[DM_MAX_TYPE_NAME];
/*
* Parameter string starts immediately after this object.
* Be careful to add padding after string to ensure correct
* alignment of subsequent dm_target_spec.
*/
};
/*
* Used to retrieve the target dependencies.
*/
struct dm_target_deps {
uint32_t count;
__kernel_dev_t dev[0]; /* out */
};
/*
* If you change this make sure you make the corresponding change
* to dm-ioctl.c:lookup_ioctl()
*/
enum {
/* Top level cmds */
DM_VERSION_CMD = 0,
DM_REMOVE_ALL_CMD,
/* device level cmds */
DM_DEV_CREATE_CMD,
DM_DEV_REMOVE_CMD,
DM_DEV_RELOAD_CMD,
DM_DEV_RENAME_CMD,
DM_DEV_SUSPEND_CMD,
DM_DEV_DEPS_CMD,
DM_DEV_STATUS_CMD,
/* target level cmds */
DM_TARGET_STATUS_CMD,
DM_TARGET_WAIT_CMD
};
#define DM_IOCTL 0xfd
#define DM_VERSION _IOWR(DM_IOCTL, DM_VERSION_CMD, struct dm_ioctl)
#define DM_REMOVE_ALL _IOWR(DM_IOCTL, DM_REMOVE_ALL_CMD, struct dm_ioctl)
#define DM_DEV_CREATE _IOWR(DM_IOCTL, DM_DEV_CREATE_CMD, struct dm_ioctl)
#define DM_DEV_REMOVE _IOWR(DM_IOCTL, DM_DEV_REMOVE_CMD, struct dm_ioctl)
#define DM_DEV_RELOAD _IOWR(DM_IOCTL, DM_DEV_RELOAD_CMD, struct dm_ioctl)
#define DM_DEV_SUSPEND _IOWR(DM_IOCTL, DM_DEV_SUSPEND_CMD, struct dm_ioctl)
#define DM_DEV_RENAME _IOWR(DM_IOCTL, DM_DEV_RENAME_CMD, struct dm_ioctl)
#define DM_DEV_DEPS _IOWR(DM_IOCTL, DM_DEV_DEPS_CMD, struct dm_ioctl)
#define DM_DEV_STATUS _IOWR(DM_IOCTL, DM_DEV_STATUS_CMD, struct dm_ioctl)
#define DM_TARGET_STATUS _IOWR(DM_IOCTL, DM_TARGET_STATUS_CMD, struct dm_ioctl)
#define DM_TARGET_WAIT _IOWR(DM_IOCTL, DM_TARGET_WAIT_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 1
#define DM_VERSION_MINOR 0
#define DM_VERSION_PATCHLEVEL 3
#define DM_VERSION_EXTRA "-ioctl (2002-08-14)"
/* Status bits */
#define DM_READONLY_FLAG 0x00000001
#define DM_SUSPEND_FLAG 0x00000002
#define DM_EXISTS_FLAG 0x00000004
#define DM_PERSISTENT_DEV_FLAG 0x00000008
/*
* Flag passed into ioctl STATUS command to get table information
* rather than current status.
*/
#define DM_STATUS_TABLE_FLAG 0x00000010
#endif /* _LINUX_DM_IOCTL_H */
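
The comments above describe the calling convention: every ioctl argument is one chunk of memory with struct dm_ioctl at the front and any further data (target specs plus their parameter strings) starting at data_start. A minimal, hypothetical userspace sketch of that layout follows; the device name, buffer size and include path are illustrative, and a DM_DEV_RELOAD call would additionally append padded struct dm_target_spec entries at data_start.

/*
 * Hypothetical sketch of the single-chunk ioctl argument layout:
 * create an (empty) device called "example-dev".  Values are
 * illustrative only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include "dm-ioctl.h"		/* this header; include path is illustrative */

int main(void)
{
	char buf[16 * 1024];			/* one chunk for the whole argument */
	struct dm_ioctl *dmi = (struct dm_ioctl *) buf;
	int fd, r;

	memset(buf, 0, sizeof(buf));
	dmi->version[0] = DM_VERSION_MAJOR;	/* version we were compiled against */
	dmi->version[1] = DM_VERSION_MINOR;
	dmi->version[2] = DM_VERSION_PATCHLEVEL;
	dmi->data_size = sizeof(buf);		/* total size, including this struct */
	dmi->data_start = sizeof(*dmi);		/* target specs would start here */
	dmi->target_count = 0;			/* no table needed just to create */
	strncpy(dmi->name, "example-dev", DM_NAME_LEN - 1);

	if ((fd = open("/dev/mapper/control", O_RDWR)) < 0)
		return 1;
	r = ioctl(fd, DM_DEV_CREATE, dmi);
	if (r < 0)
		perror("DM_DEV_CREATE");
	close(fd);
	return r < 0;
}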

File diff suppressed because it is too large


@@ -0,0 +1,881 @@
/*
* linux/include/linux/jbd.h
*
* Written by Stephen C. Tweedie <sct@redhat.com>
*
* Copyright 1998-2000 Red Hat, Inc --- All Rights Reserved
*
* This file is part of the Linux kernel and is made available under
* the terms of the GNU General Public License, version 2, or at your
* option, any later version, incorporated herein by reference.
*
* Definitions for transaction data structures for the buffer cache
* filesystem journaling support.
*/
#ifndef _LINUX_JBD_H
#define _LINUX_JBD_H
#if defined(CONFIG_JBD) || defined(CONFIG_JBD_MODULE) || !defined(__KERNEL__)
/* Allow this file to be included directly into e2fsprogs */
#ifndef __KERNEL__
#include "jfs_compat.h"
#define JFS_DEBUG
#define jfs_debug jbd_debug
#else
#include <linux/journal-head.h>
#include <linux/stddef.h>
#include <asm/semaphore.h>
#endif
#define journal_oom_retry 1
#ifdef CONFIG_JBD_DEBUG
/*
* Define JBD_EXPENSIVE_CHECKING to enable more expensive internal
* consistency checks. By default we don't do this unless
* CONFIG_JBD_DEBUG is on.
*/
#define JBD_EXPENSIVE_CHECKING
extern int journal_enable_debug;
#define jbd_debug(n, f, a...) \
do { \
if ((n) <= journal_enable_debug) { \
printk (KERN_DEBUG "(%s, %d): %s: ", \
__FILE__, __LINE__, __FUNCTION__); \
printk (f, ## a); \
} \
} while (0)
#else
#define jbd_debug(f, a...) /**/
#endif
extern void * __jbd_kmalloc (char *where, size_t size, int flags, int retry);
#define jbd_kmalloc(size, flags) \
__jbd_kmalloc(__FUNCTION__, (size), (flags), journal_oom_retry)
#define jbd_rep_kmalloc(size, flags) \
__jbd_kmalloc(__FUNCTION__, (size), (flags), 1)
#define JFS_MIN_JOURNAL_BLOCKS 1024
#ifdef __KERNEL__
typedef struct handle_s handle_t; /* Atomic operation type */
typedef struct journal_s journal_t; /* Journal control structure */
#endif
/*
* Internal structures used by the logging mechanism:
*/
#define JFS_MAGIC_NUMBER 0xc03b3998U /* The first 4 bytes of /dev/random! */
/*
* On-disk structures
*/
/*
* Descriptor block types:
*/
#define JFS_DESCRIPTOR_BLOCK 1
#define JFS_COMMIT_BLOCK 2
#define JFS_SUPERBLOCK_V1 3
#define JFS_SUPERBLOCK_V2 4
#define JFS_REVOKE_BLOCK 5
/*
* Standard header for all descriptor blocks:
*/
typedef struct journal_header_s
{
__u32 h_magic;
__u32 h_blocktype;
__u32 h_sequence;
} journal_header_t;
/*
* The block tag: used to describe a single buffer in the journal
*/
typedef struct journal_block_tag_s
{
__u32 t_blocknr; /* The on-disk block number */
__u32 t_flags; /* See below */
} journal_block_tag_t;
/*
* The revoke descriptor: used on disk to describe a series of blocks to
* be revoked from the log
*/
typedef struct journal_revoke_header_s
{
journal_header_t r_header;
int r_count; /* Count of bytes used in the block */
} journal_revoke_header_t;
/* Definitions for the journal tag flags word: */
#define JFS_FLAG_ESCAPE 1 /* on-disk block is escaped */
#define JFS_FLAG_SAME_UUID 2 /* block has same uuid as previous */
#define JFS_FLAG_DELETED 4 /* block deleted by this transaction */
#define JFS_FLAG_LAST_TAG 8 /* last tag in this descriptor block */
/*
* The journal superblock. All fields are in big-endian byte order.
*/
typedef struct journal_superblock_s
{
/* 0x0000 */
journal_header_t s_header;
/* 0x000C */
/* Static information describing the journal */
__u32 s_blocksize; /* journal device blocksize */
__u32 s_maxlen; /* total blocks in journal file */
__u32 s_first; /* first block of log information */
/* 0x0018 */
/* Dynamic information describing the current state of the log */
__u32 s_sequence; /* first commit ID expected in log */
__u32 s_start; /* blocknr of start of log */
/* 0x0020 */
/* Error value, as set by journal_abort(). */
__s32 s_errno;
/* 0x0024 */
/* Remaining fields are only valid in a version-2 superblock */
__u32 s_feature_compat; /* compatible feature set */
__u32 s_feature_incompat; /* incompatible feature set */
__u32 s_feature_ro_compat; /* readonly-compatible feature set */
/* 0x0030 */
__u8 s_uuid[16]; /* 128-bit uuid for journal */
/* 0x0040 */
__u32 s_nr_users; /* Nr of filesystems sharing log */
__u32 s_dynsuper; /* Blocknr of dynamic superblock copy*/
/* 0x0048 */
__u32 s_max_transaction; /* Limit of journal blocks per trans.*/
__u32 s_max_trans_data; /* Limit of data blocks per trans. */
/* 0x0050 */
__u32 s_padding[44];
/* 0x0100 */
__u8 s_users[16*48]; /* ids of all fs'es sharing the log */
/* 0x0400 */
} journal_superblock_t;
#define JFS_HAS_COMPAT_FEATURE(j,mask) \
((j)->j_format_version >= 2 && \
((j)->j_superblock->s_feature_compat & cpu_to_be32((mask))))
#define JFS_HAS_RO_COMPAT_FEATURE(j,mask) \
((j)->j_format_version >= 2 && \
((j)->j_superblock->s_feature_ro_compat & cpu_to_be32((mask))))
#define JFS_HAS_INCOMPAT_FEATURE(j,mask) \
((j)->j_format_version >= 2 && \
((j)->j_superblock->s_feature_incompat & cpu_to_be32((mask))))
#define JFS_FEATURE_INCOMPAT_REVOKE 0x00000001
/* Features known to this kernel version: */
#define JFS_KNOWN_COMPAT_FEATURES 0
#define JFS_KNOWN_ROCOMPAT_FEATURES 0
#define JFS_KNOWN_INCOMPAT_FEATURES JFS_FEATURE_INCOMPAT_REVOKE
#ifdef __KERNEL__
#include <linux/fs.h>
#include <linux/sched.h>
#define JBD_ASSERTIONS
#ifdef JBD_ASSERTIONS
#define J_ASSERT(assert) \
do { \
if (!(assert)) { \
printk (KERN_EMERG \
"Assertion failure in %s() at %s:%d: \"%s\"\n", \
__FUNCTION__, __FILE__, __LINE__, # assert); \
BUG(); \
} \
} while (0)
#if defined(CONFIG_BUFFER_DEBUG)
void buffer_assertion_failure(struct buffer_head *bh);
#define J_ASSERT_BH(bh, expr) \
do { \
if (!(expr)) \
buffer_assertion_failure(bh); \
J_ASSERT(expr); \
} while (0)
#define J_ASSERT_JH(jh, expr) J_ASSERT_BH(jh2bh(jh), expr)
#else
#define J_ASSERT_BH(bh, expr) J_ASSERT(expr)
#define J_ASSERT_JH(jh, expr) J_ASSERT(expr)
#endif
#else
#define J_ASSERT(assert) do { } while (0)
#endif /* JBD_ASSERTIONS */
enum jbd_state_bits {
BH_JWrite
= BH_PrivateStart, /* 1 if being written to log (@@@ DEBUGGING) */
BH_Freed, /* 1 if buffer has been freed (truncated) */
BH_Revoked, /* 1 if buffer has been revoked from the log */
BH_RevokeValid, /* 1 if buffer revoked flag is valid */
BH_JBDDirty, /* 1 if buffer is dirty but journaled */
};
/* Return true if the buffer is one which JBD is managing */
static inline int buffer_jbd(struct buffer_head *bh)
{
return __buffer_state(bh, JBD);
}
static inline struct buffer_head *jh2bh(struct journal_head *jh)
{
return jh->b_bh;
}
static inline struct journal_head *bh2jh(struct buffer_head *bh)
{
return bh->b_journal_head;
}
struct jbd_revoke_table_s;
/* The handle_t type represents a single atomic update being performed
* by some process. All filesystem modifications made by the process go
* through this handle. Recursive operations (such as quota operations)
* are gathered into a single update.
*
* The buffer credits field is used to account for journaled buffers
* being modified by the running process. To ensure that there is
* enough log space for all outstanding operations, we need to limit the
* number of outstanding buffers possible at any time. When the
* operation completes, any buffer credits not used are credited back to
* the transaction, so that at all times we know how many buffers the
* outstanding updates on a transaction might possibly touch. */
struct handle_s
{
/* Which compound transaction is this update a part of? */
transaction_t * h_transaction;
/* Number of remaining buffers we are allowed to dirty: */
int h_buffer_credits;
/* Reference count on this handle */
int h_ref;
/* Field for caller's use to track errors through large fs
operations */
int h_err;
/* Flags */
unsigned int h_sync: 1; /* sync-on-close */
unsigned int h_jdata: 1; /* force data journaling */
unsigned int h_aborted: 1; /* fatal error on handle */
};
/* The transaction_t type is the guts of the journaling mechanism. It
* tracks a compound transaction through its various states:
*
* RUNNING: accepting new updates
* LOCKED: Updates still running but we don't accept new ones
* RUNDOWN: Updates are tidying up but have finished requesting
* new buffers to modify (state not used for now)
* FLUSH: All updates complete, but we are still writing to disk
* COMMIT: All data on disk, writing commit record
* FINISHED: We still have to keep the transaction for checkpointing.
*
* The transaction keeps track of all of the buffers modified by a
* running transaction, and all of the buffers committed but not yet
* flushed to home for finished transactions.
*/
struct transaction_s
{
/* Pointer to the journal for this transaction. */
journal_t * t_journal;
/* Sequence number for this transaction */
tid_t t_tid;
/* Transaction's current state */
enum {
T_RUNNING,
T_LOCKED,
T_RUNDOWN,
T_FLUSH,
T_COMMIT,
T_FINISHED
} t_state;
/* Where in the log does this transaction's commit start? */
unsigned long t_log_start;
/* Doubly-linked circular list of all inodes owned by this
transaction */ /* AKPM: unused */
struct inode * t_ilist;
/* Number of buffers on the t_buffers list */
int t_nr_buffers;
/* Doubly-linked circular list of all buffers reserved but not
yet modified by this transaction */
struct journal_head * t_reserved_list;
/* Doubly-linked circular list of all metadata buffers owned by this
transaction */
struct journal_head * t_buffers;
/*
* Doubly-linked circular list of all data buffers still to be
* flushed before this transaction can be committed.
* Protected by journal_datalist_lock.
*/
struct journal_head * t_sync_datalist;
/*
* Doubly-linked circular list of all writepage data buffers
* still to be written before this transaction can be committed.
* Protected by journal_datalist_lock.
*/
struct journal_head * t_async_datalist;
/* Doubly-linked circular list of all forget buffers (superseded
buffers which we can un-checkpoint once this transaction
commits) */
struct journal_head * t_forget;
/*
* Doubly-linked circular list of all buffers still to be
* flushed before this transaction can be checkpointed.
*/
/* Protected by journal_datalist_lock */
struct journal_head * t_checkpoint_list;
/* Doubly-linked circular list of temporary buffers currently
undergoing IO in the log */
struct journal_head * t_iobuf_list;
/* Doubly-linked circular list of metadata buffers being
shadowed by log IO. The IO buffers on the iobuf list and the
shadow buffers on this list match each other one for one at
all times. */
struct journal_head * t_shadow_list;
/* Doubly-linked circular list of control buffers being written
to the log. */
struct journal_head * t_log_list;
/* Number of outstanding updates running on this transaction */
int t_updates;
/* Number of buffers reserved for use by all handles in this
 * transaction but not yet modified. */
int t_outstanding_credits;
/*
* Forward and backward links for the circular list of all
* transactions awaiting checkpoint.
*/
/* Protected by journal_datalist_lock */
transaction_t *t_cpnext, *t_cpprev;
/* When will the transaction expire (become due for commit), in
* jiffies ? */
unsigned long t_expires;
/* How many handles used this transaction? */
int t_handle_count;
};
/* The journal_t maintains all of the journaling state information for a
* single filesystem. It is linked to from the fs superblock structure.
*
* We use the journal_t to keep track of all outstanding transaction
* activity on the filesystem, and to manage the state of the log
* writing process. */
struct journal_s
{
/* General journaling state flags */
unsigned long j_flags;
/* Is there an outstanding uncleared error on the journal (from
* a prior abort)? */
int j_errno;
/* The superblock buffer */
struct buffer_head * j_sb_buffer;
journal_superblock_t * j_superblock;
/* Version of the superblock format */
int j_format_version;
/* Number of processes waiting to create a barrier lock */
int j_barrier_count;
/* The barrier lock itself */
struct semaphore j_barrier;
/* Transactions: The current running transaction... */
transaction_t * j_running_transaction;
/* ... the transaction we are pushing to disk ... */
transaction_t * j_committing_transaction;
/* ... and a linked circular list of all transactions waiting
* for checkpointing. */
/* Protected by journal_datalist_lock */
transaction_t * j_checkpoint_transactions;
/* Wait queue for waiting for a locked transaction to start
committing, or for a barrier lock to be released */
wait_queue_head_t j_wait_transaction_locked;
/* Wait queue for waiting for checkpointing to complete */
wait_queue_head_t j_wait_logspace;
/* Wait queue for waiting for commit to complete */
wait_queue_head_t j_wait_done_commit;
/* Wait queue to trigger checkpointing */
wait_queue_head_t j_wait_checkpoint;
/* Wait queue to trigger commit */
wait_queue_head_t j_wait_commit;
/* Wait queue to wait for updates to complete */
wait_queue_head_t j_wait_updates;
/* Semaphore for locking against concurrent checkpoints */
struct semaphore j_checkpoint_sem;
/* The main journal lock, used by lock_journal() */
struct semaphore j_sem;
/* Journal head: identifies the first unused block in the journal. */
unsigned long j_head;
/* Journal tail: identifies the oldest still-used block in the
* journal. */
unsigned long j_tail;
/* Journal free: how many free blocks are there in the journal? */
unsigned long j_free;
/* Journal start and end: the block numbers of the first usable
* block and one beyond the last usable block in the journal. */
unsigned long j_first, j_last;
/* Device, blocksize and starting block offset for the location
* where we store the journal. */
kdev_t j_dev;
int j_blocksize;
unsigned int j_blk_offset;
/* Device which holds the client fs. For internal journal this
* will be equal to j_dev. */
kdev_t j_fs_dev;
/* Total maximum capacity of the journal region on disk. */
unsigned int j_maxlen;
/* Optional inode where we store the journal. If present, all
* journal block numbers are mapped into this inode via
* bmap(). */
struct inode * j_inode;
/* Sequence number of the oldest transaction in the log */
tid_t j_tail_sequence;
/* Sequence number of the next transaction to grant */
tid_t j_transaction_sequence;
/* Sequence number of the most recently committed transaction */
tid_t j_commit_sequence;
/* Sequence number of the most recent transaction wanting commit */
tid_t j_commit_request;
/* Journal uuid: identifies the object (filesystem, LVM volume
* etc) backed by this journal. This will eventually be
* replaced by an array of uuids, allowing us to index multiple
* devices within a single journal and to perform atomic updates
* across them. */
__u8 j_uuid[16];
/* Pointer to the current commit thread for this journal */
struct task_struct * j_task;
/* Maximum number of metadata buffers to allow in a single
* compound commit transaction */
int j_max_transaction_buffers;
/* What is the maximum transaction lifetime before we begin a
* commit? */
unsigned long j_commit_interval;
/* The timer used to wakeup the commit thread: */
struct timer_list * j_commit_timer;
int j_commit_timer_active;
/* Link all journals together - system-wide */
struct list_head j_all_journals;
/* The revoke table: maintains the list of revoked blocks in the
current transaction. */
struct jbd_revoke_table_s *j_revoke;
};
/*
* Journal flag definitions
*/
#define JFS_UNMOUNT 0x001 /* Journal thread is being destroyed */
#define JFS_ABORT 0x002 /* Journaling has been aborted for errors. */
#define JFS_ACK_ERR 0x004 /* The errno in the sb has been acked */
#define JFS_FLUSHED 0x008 /* The journal superblock has been flushed */
#define JFS_LOADED 0x010 /* The journal superblock has been loaded */
/*
* Function declarations for the journaling transaction and buffer
* management
*/
/* Filing buffers */
extern void __journal_unfile_buffer(struct journal_head *);
extern void journal_unfile_buffer(struct journal_head *);
extern void __journal_refile_buffer(struct journal_head *);
extern void journal_refile_buffer(struct journal_head *);
extern void __journal_file_buffer(struct journal_head *, transaction_t *, int);
extern void __journal_free_buffer(struct journal_head *bh);
extern void journal_file_buffer(struct journal_head *, transaction_t *, int);
extern void __journal_clean_data_list(transaction_t *transaction);
/* Log buffer allocation */
extern struct journal_head * journal_get_descriptor_buffer(journal_t *);
int journal_next_log_block(journal_t *, unsigned long *);
/* Commit management */
void journal_end_buffer_io_sync(struct buffer_head *bh, int uptodate);
extern void journal_commit_transaction(journal_t *);
/* Checkpoint list management */
int __journal_clean_checkpoint_list(journal_t *journal);
extern void journal_remove_checkpoint(struct journal_head *);
extern void __journal_remove_checkpoint(struct journal_head *);
extern void journal_insert_checkpoint(struct journal_head *, transaction_t *);
extern void __journal_insert_checkpoint(struct journal_head *,transaction_t *);
/* Buffer IO */
extern int
journal_write_metadata_buffer(transaction_t *transaction,
struct journal_head *jh_in,
struct journal_head **jh_out,
int blocknr);
/* Transaction locking */
extern void __wait_on_journal (journal_t *);
/*
* Journal locking.
*
* We need to lock the journal during transaction state changes so that
* nobody ever tries to take a handle on the running transaction while
* we are in the middle of moving it to the commit phase.
*
* Note that the locking is completely interrupt unsafe. We never touch
* journal structures from interrupts.
*
* In 2.2, the BKL was required for lock_journal. This is no longer
* the case.
*/
static inline void lock_journal(journal_t *journal)
{
down(&journal->j_sem);
}
/* This returns zero if we acquired the semaphore */
static inline int try_lock_journal(journal_t * journal)
{
return down_trylock(&journal->j_sem);
}
static inline void unlock_journal(journal_t * journal)
{
up(&journal->j_sem);
}
static inline handle_t *journal_current_handle(void)
{
return current->journal_info;
}
/* The journaling code user interface:
*
* Create and destroy handles
* Register buffer modifications against the current transaction.
*/
extern handle_t *journal_start(journal_t *, int nblocks);
extern handle_t *journal_try_start(journal_t *, int nblocks);
extern int journal_restart (handle_t *, int nblocks);
extern int journal_extend (handle_t *, int nblocks);
extern int journal_get_write_access (handle_t *, struct buffer_head *);
extern int journal_get_create_access (handle_t *, struct buffer_head *);
extern int journal_get_undo_access (handle_t *, struct buffer_head *);
extern int journal_dirty_data (handle_t *,
struct buffer_head *, int async);
extern int journal_dirty_metadata (handle_t *, struct buffer_head *);
extern void journal_release_buffer (handle_t *, struct buffer_head *);
extern void journal_forget (handle_t *, struct buffer_head *);
extern void journal_sync_buffer (struct buffer_head *);
extern int journal_flushpage(journal_t *, struct page *, unsigned long);
extern int journal_try_to_free_buffers(journal_t *, struct page *, int);
extern int journal_stop(handle_t *);
extern int journal_flush (journal_t *);
extern void journal_lock_updates (journal_t *);
extern void journal_unlock_updates (journal_t *);
extern journal_t * journal_init_dev(kdev_t dev, kdev_t fs_dev,
int start, int len, int bsize);
extern journal_t * journal_init_inode (struct inode *);
extern int journal_update_format (journal_t *);
extern int journal_check_used_features
(journal_t *, unsigned long, unsigned long, unsigned long);
extern int journal_check_available_features
(journal_t *, unsigned long, unsigned long, unsigned long);
extern int journal_set_features
(journal_t *, unsigned long, unsigned long, unsigned long);
extern int journal_create (journal_t *);
extern int journal_load (journal_t *journal);
extern void journal_destroy (journal_t *);
extern int journal_recover (journal_t *journal);
extern int journal_wipe (journal_t *, int);
extern int journal_skip_recovery (journal_t *);
extern void journal_update_superblock (journal_t *, int);
extern void __journal_abort_hard (journal_t *);
extern void __journal_abort_soft (journal_t *, int);
extern void journal_abort (journal_t *, int);
extern int journal_errno (journal_t *);
extern void journal_ack_err (journal_t *);
extern int journal_clear_err (journal_t *);
extern int journal_bmap(journal_t *, unsigned long, unsigned long *);
extern int journal_force_commit(journal_t *);
/*
* journal_head management
*/
extern struct journal_head
*journal_add_journal_head(struct buffer_head *bh);
extern void journal_remove_journal_head(struct buffer_head *bh);
extern void __journal_remove_journal_head(struct buffer_head *bh);
extern void journal_unlock_journal_head(struct journal_head *jh);
/* Primary revoke support */
#define JOURNAL_REVOKE_DEFAULT_HASH 256
extern int journal_init_revoke(journal_t *, int);
extern void journal_destroy_revoke_caches(void);
extern int journal_init_revoke_caches(void);
extern void journal_destroy_revoke(journal_t *);
extern int journal_revoke (handle_t *,
unsigned long, struct buffer_head *);
extern int journal_cancel_revoke(handle_t *, struct journal_head *);
extern void journal_write_revoke_records(journal_t *, transaction_t *);
/* Recovery revoke support */
extern int journal_set_revoke(journal_t *, unsigned long, tid_t);
extern int journal_test_revoke(journal_t *, unsigned long, tid_t);
extern void journal_clear_revoke(journal_t *);
extern void journal_brelse_array(struct buffer_head *b[], int n);
/* The log thread user interface:
*
* Request space in the current transaction, and force transaction commit
* transitions on demand.
*/
extern int log_space_left (journal_t *); /* Called with journal locked */
extern tid_t log_start_commit (journal_t *, transaction_t *);
extern void log_wait_commit (journal_t *, tid_t);
extern int log_do_checkpoint (journal_t *, int);
extern void log_wait_for_space(journal_t *, int nblocks);
extern void __journal_drop_transaction(journal_t *, transaction_t *);
extern int cleanup_journal_tail(journal_t *);
/* Reduce journal memory usage by flushing */
extern void shrink_journal_memory(void);
/* Debugging code only: */
#define jbd_ENOSYS() \
do { \
printk (KERN_ERR "JBD unimplemented function " __FUNCTION__); \
current->state = TASK_UNINTERRUPTIBLE; \
schedule(); \
} while (1)
/*
* is_journal_abort
*
* Simple test wrapper function to test the JFS_ABORT state flag. This
* bit, when set, indicates that we have had a fatal error somewhere,
* either inside the journaling layer or indicated to us by the client
 * (eg. ext3), and that we should not commit any further
* transactions.
*/
static inline int is_journal_aborted(journal_t *journal)
{
return journal->j_flags & JFS_ABORT;
}
static inline int is_handle_aborted(handle_t *handle)
{
if (handle->h_aborted)
return 1;
return is_journal_aborted(handle->h_transaction->t_journal);
}
static inline void journal_abort_handle(handle_t *handle)
{
handle->h_aborted = 1;
}
/* Not all architectures define BUG() */
#ifndef BUG
#define BUG() do { \
printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
* ((char *) 0) = 0; \
} while (0)
#endif /* BUG */
#endif /* __KERNEL__ */
/* Comparison functions for transaction IDs: perform comparisons using
* modulo arithmetic so that they work over sequence number wraps. */
static inline int tid_gt(tid_t x, tid_t y)
{
int difference = (x - y);
return (difference > 0);
}
static inline int tid_geq(tid_t x, tid_t y)
{
int difference = (x - y);
return (difference >= 0);
}
extern int journal_blocks_per_page(struct inode *inode);
/*
* Definitions which augment the buffer_head layer
*/
/* journaling buffer types */
#define BJ_None 0 /* Not journaled */
#define BJ_SyncData 1 /* Normal data: flush before commit */
#define BJ_AsyncData 2 /* writepage data: wait on it before commit */
#define BJ_Metadata 3 /* Normal journaled metadata */
#define BJ_Forget 4 /* Buffer superseded by this transaction */
#define BJ_IO 5 /* Buffer is for temporary IO use */
#define BJ_Shadow 6 /* Buffer contents being shadowed to the log */
#define BJ_LogCtl 7 /* Buffer contains log descriptors */
#define BJ_Reserved 8 /* Buffer is reserved for access by journal */
#define BJ_Types 9
#ifdef __KERNEL__
extern spinlock_t jh_splice_lock;
/*
* Once `expr1' has been found true, take jh_splice_lock
* and then reevaluate everything.
*/
#define SPLICE_LOCK(expr1, expr2) \
({ \
int ret = (expr1); \
if (ret) { \
spin_lock(&jh_splice_lock); \
ret = (expr1) && (expr2); \
spin_unlock(&jh_splice_lock); \
} \
ret; \
})
/*
* A number of buffer state predicates. They test for
* buffer_jbd() because they are used in core kernel code.
*
* These will be racy on SMP unless we're *sure* that the
* buffer won't be detached from the journalling system
* in parallel.
*/
/* Return true if the buffer is on journal list `list' */
static inline int buffer_jlist_eq(struct buffer_head *bh, int list)
{
return SPLICE_LOCK(buffer_jbd(bh), bh2jh(bh)->b_jlist == list);
}
/* Return true if this buffer is dirty wrt the journal */
static inline int buffer_jdirty(struct buffer_head *bh)
{
return buffer_jbd(bh) && __buffer_state(bh, JBDDirty);
}
/* Return true if it's a data buffer which journalling is managing */
static inline int buffer_jbd_data(struct buffer_head *bh)
{
return SPLICE_LOCK(buffer_jbd(bh),
bh2jh(bh)->b_jlist == BJ_SyncData ||
bh2jh(bh)->b_jlist == BJ_AsyncData);
}
#ifdef CONFIG_SMP
#define assert_spin_locked(lock) J_ASSERT(spin_is_locked(lock))
#else
#define assert_spin_locked(lock) do {} while(0)
#endif
#define buffer_trace_init(bh) do {} while (0)
#define print_buffer_fields(bh) do {} while (0)
#define print_buffer_trace(bh) do {} while (0)
#define BUFFER_TRACE(bh, info) do {} while (0)
#define BUFFER_TRACE2(bh, bh2, info) do {} while (0)
#define JBUFFER_TRACE(jh, info) do {} while (0)
#endif /* __KERNEL__ */
#endif /* CONFIG_JBD || CONFIG_JBD_MODULE || !__KERNEL__ */
/*
* Compatibility no-ops which allow the kernel to compile without CONFIG_JBD
* go here.
*/
#if defined(__KERNEL__) && !(defined(CONFIG_JBD) || defined(CONFIG_JBD_MODULE))
#define J_ASSERT(expr) do {} while (0)
#define J_ASSERT_BH(bh, expr) do {} while (0)
#define buffer_jbd(bh) 0
#define buffer_jlist_eq(bh, val) 0
#define journal_buffer_journal_lru(bh) 0
#endif /* defined(__KERNEL__) && !defined(CONFIG_JBD) */
#endif /* _LINUX_JBD_H */
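As a rough illustration of the transaction interface declared above, the following sketch (not part of jbd.h) shows how an ext3-style caller would wrap a single metadata update in a handle. It assumes a kernel context and the 2.4-era convention that journal_start() returns an ERR_PTR() value on failure.

static int update_one_block(journal_t *journal, struct buffer_head *bh)
{
	handle_t *handle;
	int err;

	/* Reserve one buffer credit for this atomic update. */
	handle = journal_start(journal, 1);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	/* Declare our intent to modify the buffer before touching it. */
	err = journal_get_write_access(handle, bh);
	if (!err) {
		/* ... modify bh->b_data here ... */
		err = journal_dirty_metadata(handle, bh);
	}

	/* Release unused credits; may trigger a commit if h_sync is set. */
	journal_stop(handle);
	return err;
}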

View File

@ -0,0 +1,41 @@
/*
* memory buffer pool support
*/
#ifndef _LINUX_MEMPOOL_H
#define _LINUX_MEMPOOL_H
#include <linux/list.h>
#include <linux/wait.h>
struct mempool_s;
typedef struct mempool_s mempool_t;
typedef void * (mempool_alloc_t)(int gfp_mask, void *pool_data);
typedef void (mempool_free_t)(void *element, void *pool_data);
struct mempool_s {
spinlock_t lock;
int min_nr, curr_nr;
struct list_head elements;
void *pool_data;
mempool_alloc_t *alloc;
mempool_free_t *free;
wait_queue_head_t wait;
};
extern mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
mempool_free_t *free_fn, void *pool_data);
extern void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
extern void mempool_destroy(mempool_t *pool);
extern void * mempool_alloc(mempool_t *pool, int gfp_mask);
extern void mempool_free(void *element, mempool_t *pool);
/*
* A mempool_alloc_t and mempool_free_t that get the memory from
* a slab that is passed in through pool_data.
*/
void *mempool_alloc_slab(int gfp_mask, void *pool_data);
void mempool_free_slab(void *element, void *pool_data);
#endif /* _LINUX_MEMPOOL_H */
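A short usage sketch (not part of mempool.h): a pool of at least four objects backed by a slab cache via the mempool_alloc_slab/mempool_free_slab helpers declared above. The kmem_cache_t argument and the my_* names are assumptions made for illustration.

static mempool_t *my_pool;

static int my_pool_init(kmem_cache_t *my_cache)
{
	/* Keep at least 4 elements reserved so allocation can make
	 * progress even under memory pressure. */
	my_pool = mempool_create(4, mempool_alloc_slab, mempool_free_slab,
				 my_cache);
	return my_pool ? 0 : -ENOMEM;
}

static void *my_object_get(void)
{
	/* May sleep; falls back to the reserved elements when the slab
	 * allocator fails. */
	return mempool_alloc(my_pool, GFP_KERNEL);
}

static void my_object_put(void *obj)
{
	mempool_free(obj, my_pool);
}

static void my_pool_exit(void)
{
	mempool_destroy(my_pool);
}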

View File

@ -0,0 +1,65 @@
#ifndef __LINUX_VMALLOC_H
#define __LINUX_VMALLOC_H
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <asm/pgtable.h>
/* bits in vm_struct->flags */
#define VM_IOREMAP 0x00000001 /* ioremap() and friends */
#define VM_ALLOC 0x00000002 /* vmalloc() */
struct vm_struct {
unsigned long flags;
void * addr;
unsigned long size;
struct vm_struct * next;
};
extern struct vm_struct * get_vm_area (unsigned long size, unsigned long flags);
extern void vfree(void * addr);
extern void * __vmalloc (unsigned long size, int gfp_mask, pgprot_t prot);
extern long vread(char *buf, char *addr, unsigned long count);
extern void vmfree_area_pages(unsigned long address, unsigned long size);
extern int vmalloc_area_pages(unsigned long address, unsigned long size,
int gfp_mask, pgprot_t prot);
extern void *vcalloc(unsigned long nmemb, unsigned long elem_size);
/*
* Allocate any pages
*/
static inline void * vmalloc (unsigned long size)
{
return __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);
}
/*
* Allocate ISA addressable pages for broke crap
*/
static inline void * vmalloc_dma (unsigned long size)
{
return __vmalloc(size, GFP_KERNEL|GFP_DMA, PAGE_KERNEL);
}
/*
* vmalloc 32bit PA addressable pages - eg for PCI 32bit devices
*/
static inline void * vmalloc_32(unsigned long size)
{
return __vmalloc(size, GFP_KERNEL, PAGE_KERNEL);
}
/*
* vmlist_lock is a read-write spinlock that protects vmlist
* Used in mm/vmalloc.c (get_vm_area() and vfree()) and fs/proc/kcore.c.
*/
extern rwlock_t vmlist_lock;
extern struct vm_struct * vmlist;
#endif
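A minimal sketch (not part of vmalloc.h) of the usual allocate/release pattern using the declarations above; the names are made up for illustration.

static void *big_buf;

static int big_buf_init(unsigned long size)
{
	/* Virtually contiguous, possibly from highmem (see vmalloc() above). */
	big_buf = vmalloc(size);
	return big_buf ? 0 : -ENOMEM;
}

static void big_buf_exit(void)
{
	if (big_buf) {
		vfree(big_buf);
		big_buf = NULL;
	}
}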

View File

@ -0,0 +1,3 @@
INTRO
README
VERSION

View File

@ -0,0 +1,7 @@
Patch-name: Linux device-mapper
Patch-id: device-mapper
Path-strip-level: 1
Architecture: all
Patch-file: patches/linux-2.4.19-devmapper-ioctl.patch
Kernel-version: 2.4.19

View File

@ -0,0 +1,2 @@
usr/include
usr/lib

View File

@ -0,0 +1,2 @@
README
INTRO

View File

@ -0,0 +1,2 @@
usr/include/libdevmapper.h
usr/lib/libdevmapper.so

View File

@ -0,0 +1 @@
lib

View File

@ -0,0 +1,4 @@
INTRO
README
VERSION
scripts/devmap_mknod.sh

View File

@ -0,0 +1 @@
lib/libdevmapper.so*

View File

@ -0,0 +1,18 @@
#! /bin/sh
#
# libdevmapper0 postinst script
# Andres Salomon <dilinger@mp3revolution.net>
# Note that it's perfectly acceptable for this stuff to fail, so long
# as individual packages using device-mapper (ie, lvm2) don't
# automatically assume this succeeded.
modprobe dm-mod >/dev/null 2>&1
# Create necessary files in /dev for device-mapper
if test -e /usr/share/doc/libdevmapper0/devmap_mknod.sh; then
sh /usr/share/doc/libdevmapper0/devmap_mknod.sh
fi
#DEBHELPER#

152
external/gpl2/libdevmapper/dist/debian/rules vendored Executable file
View File

@ -0,0 +1,152 @@
#!/usr/bin/make -f
# Sample debian/rules that uses debhelper.
# GNU copyright 1997 to 1999 by Joey Hess.
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
# This is the debhelper compatibility version to use.
export DH_COMPAT=3
# These are used for cross-compiling and for saving the configure script
# from having to guess our platform (since we know it already)
DEB_HOST_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE)
DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE)
ifneq (,$(findstring debug,$(DEB_BUILD_OPTIONS)))
CFLAGS += -g
endif
ifeq (,$(findstring nostrip,$(DEB_BUILD_OPTIONS)))
INSTALL_PROGRAM += -s
endif
# shared library versions, option 1
version=2.0.5
major=2
# option 2, assuming the library is created as src/.libs/libfoo.so.2.0.5 or so
#version=`ls src/.libs/lib*.so.* | \
# awk '{if (match($$0,/[0-9]+\.[0-9]+\.[0-9]+$$/)) print substr($$0,RSTART)}'`
#major=`ls src/.libs/lib*.so.* | \
# awk '{if (match($$0,/\.so\.[0-9]+$$/)) print substr($$0,RSTART+4)}'`
# Note: header-update isn't part of the build system; it's run manually.
KERNEL=/usr/src/kernel-source-2.4.19.tar.bz2
PATCH=./patches/linux-2.4.19-devmapper-ioctl.patch
header-update:
@test -f $(KERNEL) || { \
echo "Error: $(KERNEL) doesn't exist!" 1>&2 && \
exit 1; \
}
@test -f $(PATCH) || { \
echo "Error: $(PATCH) doesn't exist!" 1>&2 && \
exit 1; \
}
chmod +x ./debian/scripts/*
if test `echo $(PATCH) | grep 'gz$$'`; then \
zcat $(PATCH) > debian/patch.diff; \
elif test `echo $(PATCH) | grep 'bz2$$'`; then \
bzcat $(PATCH) > debian/patch.diff; \
else \
cp $(PATCH) debian/patch.diff; \
fi
tar jxvf $(KERNEL) -C debian
rm -rf debian/include/*
cd debian && ./scripts/strippatch.pl ./patch.diff | \
./scripts/includes.pl ./kernel-source-* | patch -p1
rm -rf debian/kernel-source-* debian/patch.diff
configure: configure-stamp
configure-stamp:
dh_testdir
# Add here commands to configure the package.
./configure --host=$(DEB_HOST_GNU_TYPE) --build=$(DEB_BUILD_GNU_TYPE) \
--prefix=/usr --mandir=\$${prefix}/share/man \
--infodir=\$${prefix}/share/info \
--libdir=$(CURDIR)/debian/tmp/lib \
--with-kernel-dir=$(CURDIR)/debian \
--with-kernel-version=$(KERNEL)
touch configure-stamp
build: build-stamp
build-stamp: configure-stamp
dh_testdir
# Build
$(MAKE)
touch build-stamp
clean:
dh_testdir
dh_testroot
rm -f build-stamp configure-stamp
# Add here commands to clean up after the build process.
-$(MAKE) distclean
-test -r /usr/share/misc/config.sub && \
cp -f /usr/share/misc/config.sub config.sub
-test -r /usr/share/misc/config.guess && \
cp -f /usr/share/misc/config.guess config.guess
dh_clean
install: build
dh_testdir
dh_testroot
dh_clean -k
dh_installdirs
# Add here commands to install the package into debian/tmp
$(MAKE) install prefix=$(CURDIR)/debian/tmp/usr
# libdevmapper-dev should have its .so in /usr/lib.
rm -f $(CURDIR)/debian/tmp/lib/libdevmapper.so
install -d $(CURDIR)/debian/tmp/usr/lib
ln -s /lib/libdevmapper.so.0.96 \
$(CURDIR)/debian/tmp/usr/lib/libdevmapper.so
ln -s libdevmapper.so.0.96 $(CURDIR)/debian/tmp/lib/libdevmapper.so.0
# Build architecture-independent files here.
binary-indep: build install
# We have nothing to do by default.
# Build architecture-dependent files here.
binary-arch: build install
dh_testdir
dh_testroot
dh_movefiles
# dh_installdebconf
dh_installdocs
dh_installexamples
dh_installmenu
# dh_installlogrotate
# dh_installemacsen
# dh_installpam
# dh_installmime
# dh_installinit
dh_installcron
dh_installman
dh_installinfo
# dh_undocumented
dh_installchangelogs
dh_installkpatches
dh_link
dh_strip
dh_compress
dh_fixperms
dh_makeshlibs -V
dh_installdeb
# dh_perl
dh_shlibdeps
dh_gencontrol
dh_md5sums
dh_builddeb
binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install configure

View File

@ -0,0 +1,35 @@
#!/usr/bin/perl -w
# Given a diff and a source tree, make local copies of all modified files
# in the source tree. The patch is read from stdin, and spit back out
# to stdout.
use strict;
die "Usage: $0 <source tree>\n" unless (@ARGV == 1);
foreach (<STDIN>) {
if (/^diff\s+/) {
my $file = (split /\s+/)[-1];
my @temp = split /\/+/, $file;
shift @temp;
my $x = pop @temp;
my $dir = join '/', @temp;
push @temp, $x;
$file = join '/', @temp;
if (-e "$ARGV[0]/$file") {
system ("mkdir -p $dir") == 0 ||
die "Error: `mkdir -p $dir` failed.\n";
system ("cp $ARGV[0]/$file $dir") == 0 ||
die "Error: `cp $ARGV[0]$file $dir` failed.\n";
}
else {
print STDERR "Warning: cannot find $ARGV[0]/$file.\n"
}
}
print;
}

View File

@ -0,0 +1,27 @@
#!/usr/bin/perl -w
# Strip out non-header files from a diff.
use strict;
die "Usage: $0 <patchfile>\n" unless (@ARGV == 1);
my $ctx = '';
open (F, $ARGV[0]) || die "Error: can't open $ARGV[0]: $!\n";
while (<F>) {
if (/^diff\s+/) {
if ($ctx) {
print $ctx;
$ctx = '';
}
if (/\/include\/.*\.h\s*$/) {
$ctx = $_;
}
}
elsif ($ctx) {
$ctx .= $_;
}
}
close (F);
print $ctx if ($ctx);	# flush the final header-file hunk, if any

View File

@ -0,0 +1,19 @@
dm_event_handler_create
dm_event_handler_destroy
dm_event_handler_set_dso
dm_event_handler_set_dev_name
dm_event_handler_set_uuid
dm_event_handler_set_major
dm_event_handler_set_minor
dm_event_handler_set_event_mask
dm_event_handler_get_dso
dm_event_handler_get_devname
dm_event_handler_get_uuid
dm_event_handler_get_major
dm_event_handler_get_minor
dm_event_handler_get_event_mask
dm_event_register_handler
dm_event_unregister_handler
dm_event_get_registered_device
dm_event_handler_set_timeout
dm_event_handler_get_timeout

View File

@ -0,0 +1,83 @@
#
# Copyright (C) 2005-2007 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
SOURCES = libdevmapper-event.c
LIB_STATIC = libdevmapper-event.a
ifeq ("@LIB_SUFFIX@","dylib")
LIB_SHARED = libdevmapper-event.dylib
else
LIB_SHARED = libdevmapper-event.so
endif
TARGETS = dmeventd
CLEAN_TARGETS = dmeventd.o
include ../make.tmpl
LDFLAGS += -ldl -ldevmapper -lpthread
CLDFLAGS += -ldl -ldevmapper -lpthread
dmeventd: $(LIB_SHARED) dmeventd.o
$(CC) -o $@ dmeventd.o $(CFLAGS) $(LDFLAGS) \
-L. -ldevmapper-event $(LIBS) -rdynamic
.PHONY: install_dynamic install_static install_include \
install_pkgconfig install_dmeventd
INSTALL_TYPE = install_dynamic
ifeq ("@STATIC_LINK@", "yes")
INSTALL_TYPE += install_static
endif
ifeq ("@PKGCONFIG@", "yes")
INSTALL_TYPE += install_pkgconfig
endif
install: $(INSTALL_TYPE) install_include install_dmeventd
install_include:
$(INSTALL) -D $(OWNER) $(GROUP) -m 444 libdevmapper-event.h \
$(includedir)/libdevmapper-event.h
install_dynamic: libdevmapper-event.$(LIB_SUFFIX)
$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< \
$(libdir)/libdevmapper-event.$(LIB_SUFFIX).$(LIB_VERSION)
$(LN_S) -f libdevmapper-event.$(LIB_SUFFIX).$(LIB_VERSION) \
$(libdir)/libdevmapper-event.$(LIB_SUFFIX)
install_dmeventd: dmeventd
$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< $(sbindir)/$<
install_pkgconfig:
$(INSTALL) -D $(OWNER) $(GROUP) -m 444 libdevmapper-event.pc \
$(usrlibdir)/pkgconfig/devmapper-event.pc
install_static: libdevmapper-event.a
$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< \
$(libdir)/libdevmapper-event.a.$(LIB_VERSION)
$(LN_S) -f libdevmapper-event.a.$(LIB_VERSION) $(libdir)/libdevmapper-event.a
.PHONY: distclean_lib distclean
distclean_lib:
$(RM) libdevmapper-event.pc
distclean: distclean_lib

File diff suppressed because it is too large

View File

@ -0,0 +1,66 @@
/*
* Copyright (C) 2005-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef __DMEVENTD_DOT_H__
#define __DMEVENTD_DOT_H__
/* FIXME This stuff must be configurable. */
#define DM_EVENT_DAEMON "/sbin/dmeventd"
#define DM_EVENT_LOCKFILE "/var/lock/dmeventd"
#define DM_EVENT_FIFO_CLIENT "/var/run/dmeventd-client"
#define DM_EVENT_FIFO_SERVER "/var/run/dmeventd-server"
#define DM_EVENT_PIDFILE "/var/run/dmeventd.pid"
#define DM_EVENT_DEFAULT_TIMEOUT 10
/* Commands for the daemon passed in the message below. */
enum dm_event_command {
DM_EVENT_CMD_ACTIVE = 1,
DM_EVENT_CMD_REGISTER_FOR_EVENT,
DM_EVENT_CMD_UNREGISTER_FOR_EVENT,
DM_EVENT_CMD_GET_REGISTERED_DEVICE,
DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE,
DM_EVENT_CMD_SET_TIMEOUT,
DM_EVENT_CMD_GET_TIMEOUT,
DM_EVENT_CMD_HELLO,
};
/* Message passed between client and daemon. */
struct dm_event_daemon_message {
uint32_t cmd;
uint32_t size;
char *data;
};
/* FIXME Is this meant to be exported? I can't see where the
interface uses it. */
/* Fifos for client/daemon communication. */
struct dm_event_fifos {
int client;
int server;
const char *client_path;
const char *server_path;
};
/* EXIT_SUCCESS 0 -- stdlib.h */
/* EXIT_FAILURE 1 -- stdlib.h */
#define EXIT_LOCKFILE_INUSE 2
#define EXIT_DESC_CLOSE_FAILURE 3
#define EXIT_DESC_OPEN_FAILURE 4
#define EXIT_OPEN_PID_FAILURE 5
#define EXIT_FIFO_FAILURE 6
#define EXIT_CHDIR_FAILURE 7
#endif /* __DMEVENTD_DOT_H__ */
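A quick sketch of how this message travels over the FIFOs, mirroring what _daemon_write() in libdevmapper-event.c does: the cmd and size fields are sent as two network-byte-order 32-bit words, followed by size bytes of ASCII payload. Only the framing itself is taken from the sources; the helper name below is made up.

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Pack a dm_event_daemon_message into "buf" exactly as the client does:
 * htonl(cmd), htonl(size), then the raw payload bytes. */
static size_t frame_message(const struct dm_event_daemon_message *msg, char *buf)
{
	uint32_t cmd = htonl(msg->cmd);
	uint32_t size = htonl(msg->size);

	memcpy(buf, &cmd, sizeof(cmd));
	memcpy(buf + sizeof(cmd), &size, sizeof(size));
	memcpy(buf + 2 * sizeof(uint32_t), msg->data, msg->size);

	return 2 * sizeof(uint32_t) + msg->size;
}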

View File

@ -0,0 +1,809 @@
/*
* Copyright (C) 2005-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "lib.h"
#include "libdevmapper-event.h"
//#include "libmultilog.h"
#include "dmeventd.h"
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <sys/wait.h>
#include <arpa/inet.h> /* for htonl, ntohl */
static int _sequence_nr = 0;
struct dm_event_handler {
char *dso;
char *dev_name;
char *uuid;
int major;
int minor;
uint32_t timeout;
enum dm_event_mask mask;
};
static void _dm_event_handler_clear_dev_info(struct dm_event_handler *dmevh)
{
if (dmevh->dev_name)
dm_free(dmevh->dev_name);
if (dmevh->uuid)
dm_free(dmevh->uuid);
dmevh->dev_name = dmevh->uuid = NULL;
dmevh->major = dmevh->minor = 0;
}
struct dm_event_handler *dm_event_handler_create(void)
{
struct dm_event_handler *dmevh = NULL;
if (!(dmevh = dm_malloc(sizeof(*dmevh))))
return NULL;
dmevh->dso = dmevh->dev_name = dmevh->uuid = NULL;
dmevh->major = dmevh->minor = 0;
dmevh->mask = 0;
dmevh->timeout = 0;
return dmevh;
}
void dm_event_handler_destroy(struct dm_event_handler *dmevh)
{
_dm_event_handler_clear_dev_info(dmevh);
if (dmevh->dso)
dm_free(dmevh->dso);
dm_free(dmevh);
}
int dm_event_handler_set_dso(struct dm_event_handler *dmevh, const char *path)
{
if (!path) /* noop */
return 0;
if (dmevh->dso)
dm_free(dmevh->dso);
dmevh->dso = dm_strdup(path);
if (!dmevh->dso)
return -ENOMEM;
return 0;
}
int dm_event_handler_set_dev_name(struct dm_event_handler *dmevh, const char *dev_name)
{
if (!dev_name)
return 0;
_dm_event_handler_clear_dev_info(dmevh);
dmevh->dev_name = dm_strdup(dev_name);
if (!dmevh->dev_name)
return -ENOMEM;
return 0;
}
int dm_event_handler_set_uuid(struct dm_event_handler *dmevh, const char *uuid)
{
if (!uuid)
return 0;
_dm_event_handler_clear_dev_info(dmevh);
dmevh->uuid = dm_strdup(uuid);
if (!dmevh->uuid)
return -ENOMEM;
return 0;
}
void dm_event_handler_set_major(struct dm_event_handler *dmevh, int major)
{
int minor = dmevh->minor;
_dm_event_handler_clear_dev_info(dmevh);
dmevh->major = major;
dmevh->minor = minor;
}
void dm_event_handler_set_minor(struct dm_event_handler *dmevh, int minor)
{
int major = dmevh->major;
_dm_event_handler_clear_dev_info(dmevh);
dmevh->major = major;
dmevh->minor = minor;
}
void dm_event_handler_set_event_mask(struct dm_event_handler *dmevh,
enum dm_event_mask evmask)
{
dmevh->mask = evmask;
}
void dm_event_handler_set_timeout(struct dm_event_handler *dmevh, int timeout)
{
dmevh->timeout = timeout;
}
const char *dm_event_handler_get_dso(const struct dm_event_handler *dmevh)
{
return dmevh->dso;
}
const char *dm_event_handler_get_dev_name(const struct dm_event_handler *dmevh)
{
return dmevh->dev_name;
}
const char *dm_event_handler_get_uuid(const struct dm_event_handler *dmevh)
{
return dmevh->uuid;
}
int dm_event_handler_get_major(const struct dm_event_handler *dmevh)
{
return dmevh->major;
}
int dm_event_handler_get_minor(const struct dm_event_handler *dmevh)
{
return dmevh->minor;
}
int dm_event_handler_get_timeout(const struct dm_event_handler *dmevh)
{
return dmevh->timeout;
}
enum dm_event_mask dm_event_handler_get_event_mask(const struct dm_event_handler *dmevh)
{
return dmevh->mask;
}
static int _check_message_id(struct dm_event_daemon_message *msg)
{
int pid, seq_nr;
if ((sscanf(msg->data, "%d:%d", &pid, &seq_nr) != 2) ||
(pid != getpid()) || (seq_nr != _sequence_nr)) {
log_error("Ignoring out-of-sequence reply from dmeventd. "
"Expected %d:%d but received %s", getpid(),
_sequence_nr, msg->data);
return 0;
}
return 1;
}
/*
* daemon_read
* @fifos
* @msg
*
* Read message from daemon.
*
* Returns: 0 on failure, 1 on success
*/
static int _daemon_read(struct dm_event_fifos *fifos,
struct dm_event_daemon_message *msg)
{
unsigned bytes = 0;
int ret, i;
fd_set fds;
struct timeval tval = { 0, 0 };
size_t size = 2 * sizeof(uint32_t); /* status + size */
char *buf = alloca(size);
int header = 1;
while (bytes < size) {
for (i = 0, ret = 0; (i < 20) && (ret < 1); i++) {
/* Watch daemon read FIFO for input. */
FD_ZERO(&fds);
FD_SET(fifos->server, &fds);
tval.tv_sec = 1;
ret = select(fifos->server + 1, &fds, NULL, NULL,
&tval);
if (ret < 0 && errno != EINTR) {
log_error("Unable to read from event server");
return 0;
}
}
if (ret < 1) {
log_error("Unable to read from event server.");
return 0;
}
ret = read(fifos->server, buf + bytes, size - bytes);
if (ret < 0) {
if ((errno == EINTR) || (errno == EAGAIN))
continue;
else {
log_error("Unable to read from event server.");
return 0;
}
}
bytes += ret;
if (bytes == 2 * sizeof(uint32_t) && header) {
msg->cmd = ntohl(*((uint32_t *)buf));
msg->size = ntohl(*((uint32_t *)buf + 1));
buf = msg->data = dm_malloc(msg->size);
size = msg->size;
bytes = 0;
header = 0;
}
}
if (bytes != size) {
if (msg->data)
dm_free(msg->data);
msg->data = NULL;
}
return bytes == size;
}
/* Write message to daemon. */
static int _daemon_write(struct dm_event_fifos *fifos,
struct dm_event_daemon_message *msg)
{
unsigned bytes = 0;
int ret = 0;
fd_set fds;
size_t size = 2 * sizeof(uint32_t) + msg->size;
char *buf = alloca(size);
char drainbuf[128];
struct timeval tval = { 0, 0 };
*((uint32_t *)buf) = htonl(msg->cmd);
*((uint32_t *)buf + 1) = htonl(msg->size);
memcpy(buf + 2 * sizeof(uint32_t), msg->data, msg->size);
/* drain the answer fifo */
while (1) {
FD_ZERO(&fds);
FD_SET(fifos->server, &fds);
tval.tv_usec = 100;
ret = select(fifos->server + 1, &fds, NULL, NULL, &tval);
if ((ret < 0) && (errno != EINTR)) {
log_error("Unable to talk to event daemon");
return 0;
}
if (ret == 0)
break;
read(fifos->server, drainbuf, 127);
}
while (bytes < size) {
do {
/* Watch daemon write FIFO to be ready for output. */
FD_ZERO(&fds);
FD_SET(fifos->client, &fds);
ret = select(fifos->client + 1, NULL, &fds, NULL, NULL);
if ((ret < 0) && (errno != EINTR)) {
log_error("Unable to talk to event daemon");
return 0;
}
} while (ret < 1);
ret = write(fifos->client, ((char *) buf) + bytes,
size - bytes);
if (ret < 0) {
if ((errno == EINTR) || (errno == EAGAIN))
continue;
else {
log_error("Unable to talk to event daemon");
return 0;
}
}
bytes += ret;
}
return bytes == size;
}
static int _daemon_talk(struct dm_event_fifos *fifos,
struct dm_event_daemon_message *msg, int cmd,
const char *dso_name, const char *dev_name,
enum dm_event_mask evmask, uint32_t timeout)
{
const char *dso = dso_name ? dso_name : "";
const char *dev = dev_name ? dev_name : "";
const char *fmt = "%d:%d %s %s %u %" PRIu32;
int msg_size;
memset(msg, 0, sizeof(*msg));
/*
* Set command and pack the arguments
* into ASCII message string.
*/
msg->cmd = cmd;
if (cmd == DM_EVENT_CMD_HELLO)
fmt = "%d:%d HELLO";
if ((msg_size = dm_asprintf(&(msg->data), fmt, getpid(), _sequence_nr,
dso, dev, evmask, timeout)) < 0) {
log_error("_daemon_talk: message allocation failed");
return -ENOMEM;
}
msg->size = msg_size;
/*
* Write command and message to and
* read status return code from daemon.
*/
if (!_daemon_write(fifos, msg)) {
stack;
dm_free(msg->data);
msg->data = 0;
return -EIO;
}
do {
if (msg->data)
dm_free(msg->data);
msg->data = 0;
if (!_daemon_read(fifos, msg)) {
stack;
return -EIO;
}
} while (!_check_message_id(msg));
_sequence_nr++;
return (int32_t) msg->cmd;
}
/*
* start_daemon
*
* This function forks off a process (dmeventd) that will handle
 * the events. I am currently test-opening one of the fifos to
* ensure that the daemon is running and listening... I thought
* this would be less expensive than fork/exec'ing every time.
* Perhaps there is an even quicker/better way (no, checking the
* lock file is _not_ a better way).
*
* Returns: 1 on success, 0 otherwise
*/
static int _start_daemon(struct dm_event_fifos *fifos)
{
int pid, ret = 0;
int status;
struct stat statbuf;
if (stat(fifos->client_path, &statbuf))
goto start_server;
if (!S_ISFIFO(statbuf.st_mode)) {
log_error("%s is not a fifo.", fifos->client_path);
return 0;
}
/* Anyone listening? If not, errno will be ENXIO */
fifos->client = open(fifos->client_path, O_WRONLY | O_NONBLOCK);
if (fifos->client >= 0) {
/* server is running and listening */
close(fifos->client);
return 1;
} else if (errno != ENXIO) {
/* problem */
log_error("%s: Can't open client fifo %s: %s",
__func__, fifos->client_path, strerror(errno));
stack;
return 0;
}
start_server:
/* server is not running */
pid = fork();
if (pid < 0)
log_error("Unable to fork.");
else if (!pid) {
execvp(DMEVENTD_PATH, NULL);
exit(EXIT_FAILURE);
} else {
if (waitpid(pid, &status, 0) < 0)
log_error("Unable to start dmeventd: %s",
strerror(errno));
else if (WEXITSTATUS(status))
log_error("Unable to start dmeventd.");
else
ret = 1;
}
return ret;
}
/* Initialize client. */
static int _init_client(struct dm_event_fifos *fifos)
{
/* FIXME? Is fifo the most suitable method? Why not share
comms/daemon code with something else e.g. multipath? */
/* init fifos */
memset(fifos, 0, sizeof(*fifos));
fifos->client_path = DM_EVENT_FIFO_CLIENT;
fifos->server_path = DM_EVENT_FIFO_SERVER;
if (!_start_daemon(fifos)) {
stack;
return 0;
}
/* Open the fifo used to read from the daemon. */
if ((fifos->server = open(fifos->server_path, O_RDWR)) < 0) {
log_error("%s: open server fifo %s",
__func__, fifos->server_path);
stack;
return 0;
}
/* Lock out anyone else trying to do communication with the daemon. */
if (flock(fifos->server, LOCK_EX) < 0) {
log_error("%s: flock %s", __func__, fifos->server_path);
close(fifos->server);
return 0;
}
/* if ((fifos->client = open(fifos->client_path, O_WRONLY | O_NONBLOCK)) < 0) {*/
if ((fifos->client = open(fifos->client_path, O_RDWR | O_NONBLOCK)) < 0) {
log_error("%s: Can't open client fifo %s: %s",
__func__, fifos->client_path, strerror(errno));
close(fifos->server);
stack;
return 0;
}
return 1;
}
static void _dtr_client(struct dm_event_fifos *fifos)
{
if (flock(fifos->server, LOCK_UN))
log_error("flock unlock %s", fifos->server_path);
close(fifos->client);
close(fifos->server);
}
/* Get uuid of a device */
static struct dm_task *_get_device_info(const struct dm_event_handler *dmevh)
{
struct dm_task *dmt;
struct dm_info info;
if (!(dmt = dm_task_create(DM_DEVICE_INFO))) {
log_error("_get_device_info: dm_task creation for info failed");
return NULL;
}
if (dmevh->uuid)
dm_task_set_uuid(dmt, dmevh->uuid);
else if (dmevh->dev_name)
dm_task_set_name(dmt, dmevh->dev_name);
else if (dmevh->major && dmevh->minor) {
dm_task_set_major(dmt, dmevh->major);
dm_task_set_minor(dmt, dmevh->minor);
}
/* FIXME Add name or uuid or devno to messages */
if (!dm_task_run(dmt)) {
log_error("_get_device_info: dm_task_run() failed");
goto failed;
}
if (!dm_task_get_info(dmt, &info)) {
log_error("_get_device_info: failed to get info for device");
goto failed;
}
if (!info.exists) {
log_error("_get_device_info: device not found");
goto failed;
}
return dmt;
failed:
dm_task_destroy(dmt);
return NULL;
}
/* Handle the event (de)registration call and return negative error codes. */
static int _do_event(int cmd, struct dm_event_daemon_message *msg,
const char *dso_name, const char *dev_name,
enum dm_event_mask evmask, uint32_t timeout)
{
int ret;
struct dm_event_fifos fifos;
if (!_init_client(&fifos)) {
stack;
return -ESRCH;
}
ret = _daemon_talk(&fifos, msg, DM_EVENT_CMD_HELLO, 0, 0, 0, 0);
if (msg->data)
dm_free(msg->data);
msg->data = 0;
if (!ret)
ret = _daemon_talk(&fifos, msg, cmd, dso_name, dev_name, evmask, timeout);
/* what is the opposite of init? */
_dtr_client(&fifos);
return ret;
}
/* External library interface. */
int dm_event_register_handler(const struct dm_event_handler *dmevh)
{
int ret = 1, err;
const char *uuid;
struct dm_task *dmt;
struct dm_event_daemon_message msg = { 0, 0, NULL };
if (!(dmt = _get_device_info(dmevh))) {
stack;
return 0;
}
uuid = dm_task_get_uuid(dmt);
if ((err = _do_event(DM_EVENT_CMD_REGISTER_FOR_EVENT, &msg,
dmevh->dso, uuid, dmevh->mask, dmevh->timeout)) < 0) {
log_error("%s: event registration failed: %s",
dm_task_get_name(dmt),
msg.data ? msg.data : strerror(-err));
ret = 0;
}
if (msg.data)
dm_free(msg.data);
dm_task_destroy(dmt);
return ret;
}
int dm_event_unregister_handler(const struct dm_event_handler *dmevh)
{
int ret = 1, err;
const char *uuid;
struct dm_task *dmt;
struct dm_event_daemon_message msg = { 0, 0, NULL };
if (!(dmt = _get_device_info(dmevh))) {
stack;
return 0;
}
uuid = dm_task_get_uuid(dmt);
if ((err = _do_event(DM_EVENT_CMD_UNREGISTER_FOR_EVENT, &msg,
dmevh->dso, uuid, dmevh->mask, dmevh->timeout)) < 0) {
log_error("%s: event deregistration failed: %s",
dm_task_get_name(dmt),
msg.data ? msg.data : strerror(-err));
ret = 0;
}
if (msg.data)
dm_free(msg.data);
dm_task_destroy(dmt);
return ret;
}
/* Fetch a string off src and duplicate it into *dest. */
/* FIXME: move to separate module to share with the daemon. */
static char *_fetch_string(char **src, const int delimiter)
{
char *p, *ret;
if ((p = strchr(*src, delimiter)))
*p = 0;
if ((ret = dm_strdup(*src)))
*src += strlen(ret) + 1;
if (p)
*p = delimiter;
return ret;
}
/* Parse a device message from the daemon. */
static int _parse_message(struct dm_event_daemon_message *msg, char **dso_name,
char **uuid, enum dm_event_mask *evmask)
{
char *id = NULL;
char *p = msg->data;
if ((id = _fetch_string(&p, ' ')) &&
(*dso_name = _fetch_string(&p, ' ')) &&
(*uuid = _fetch_string(&p, ' '))) {
*evmask = atoi(p);
dm_free(id);
return 0;
}
if (id)
dm_free(id);
return -ENOMEM;
}
/*
* Returns 0 if handler found; error (-ENOMEM, -ENOENT) otherwise.
*/
int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
{
int ret = 0;
const char *uuid = NULL;
char *reply_dso = NULL, *reply_uuid = NULL;
enum dm_event_mask reply_mask = 0;
struct dm_task *dmt = NULL;
struct dm_event_daemon_message msg = { 0, 0, NULL };
if (!(dmt = _get_device_info(dmevh))) {
stack;
return 0;
}
uuid = dm_task_get_uuid(dmt);
if (!(ret = _do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE,
&msg, dmevh->dso, uuid, dmevh->mask, 0))) {
/* FIXME this will probably horribly break if we get an
   ill-formatted reply */
ret = _parse_message(&msg, &reply_dso, &reply_uuid, &reply_mask);
} else {
ret = -ENOENT;
goto fail;
}
dm_task_destroy(dmt);
dmt = NULL;
if (msg.data) {
dm_free(msg.data);
msg.data = NULL;
}
_dm_event_handler_clear_dev_info(dmevh);
dmevh->uuid = dm_strdup(reply_uuid);
if (!dmevh->uuid) {
ret = -ENOMEM;
goto fail;
}
if (!(dmt = _get_device_info(dmevh))) {
ret = -ENXIO; /* dmeventd probably gave us bogus uuid back */
goto fail;
}
dm_event_handler_set_dso(dmevh, reply_dso);
dm_event_handler_set_event_mask(dmevh, reply_mask);
if (reply_dso) {
dm_free(reply_dso);
reply_dso = NULL;
}
if (reply_uuid) {
dm_free(reply_uuid);
reply_uuid = NULL;
}
dmevh->dev_name = dm_strdup(dm_task_get_name(dmt));
if (!dmevh->dev_name) {
ret = -ENOMEM;
goto fail;
}
struct dm_info info;
if (!dm_task_get_info(dmt, &info)) {
ret = -1;
goto fail;
}
dmevh->major = info.major;
dmevh->minor = info.minor;
dm_task_destroy(dmt);
return ret;
fail:
if (msg.data)
dm_free(msg.data);
if (reply_dso)
dm_free(reply_dso);
if (reply_uuid)
dm_free(reply_uuid);
_dm_event_handler_clear_dev_info(dmevh);
if (dmt)
dm_task_destroy(dmt);
return ret;
}
#if 0 /* left out for now */
static char *_skip_string(char *src, const int delimiter)
{
src = strchr(src, delimiter);
if (src && *(src + 1))
return src + 1;
return NULL;
}
int dm_event_set_timeout(const char *device_path, uint32_t timeout)
{
struct dm_event_daemon_message msg = { 0, 0, NULL };
if (!device_exists(device_path))
return -ENODEV;
return _do_event(DM_EVENT_CMD_SET_TIMEOUT, &msg,
NULL, device_path, 0, timeout);
}
int dm_event_get_timeout(const char *device_path, uint32_t *timeout)
{
int ret;
struct dm_event_daemon_message msg = { 0, 0, NULL };
if (!device_exists(device_path))
return -ENODEV;
if (!(ret = _do_event(DM_EVENT_CMD_GET_TIMEOUT, &msg, NULL, device_path,
0, 0))) {
char *p = _skip_string(msg.data, ' ');
if (!p) {
log_error("malformed reply from dmeventd '%s'\n",
msg.data);
return -EIO;
}
*timeout = atoi(p);
}
if (msg.data)
dm_free(msg.data);
return ret;
}
#endif
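For reference, a hedged sketch of what the ASCII payload built by _daemon_talk() looks like for a registration request; the pid, sequence number, DSO name and UUID below are invented values and the helper is not part of the original file.

#include <inttypes.h>
#include <stdio.h>

static int build_example_payload(char *buf, size_t len)
{
	int pid = 1234, seq = 7;
	unsigned evmask = 0x00FF00;	/* DM_EVENT_ERROR_MASK */
	uint32_t timeout = 10;

	/* Same format as _daemon_talk(): "pid:seq dso device evmask timeout".
	 * The daemon's reply echoes "pid:seq" so _check_message_id() can
	 * match it against the request. */
	return snprintf(buf, len, "%d:%d %s %s %u %" PRIu32,
			pid, seq, "libdevmapper-event-noop.so",
			"LVM-0123456789abcdef", evmask, timeout);
}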

View File

@ -0,0 +1,106 @@
/*
* Copyright (C) 2005-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
/*
* Note that this file is released only as part of a technology preview
* and its contents may change in future updates in ways that do not
* preserve compatibility.
*/
#ifndef LIB_DMEVENT_H
#define LIB_DMEVENT_H
#include <stdint.h>
/*
* Event library interface.
*/
enum dm_event_mask {
DM_EVENT_SETTINGS_MASK = 0x0000FF,
DM_EVENT_SINGLE = 0x000001, /* Report multiple errors just once. */
DM_EVENT_MULTI = 0x000002, /* Report all of them. */
DM_EVENT_ERROR_MASK = 0x00FF00,
DM_EVENT_SECTOR_ERROR = 0x000100, /* Failure on a particular sector. */
DM_EVENT_DEVICE_ERROR = 0x000200, /* Device failure. */
DM_EVENT_PATH_ERROR = 0x000400, /* Failure on an io path. */
DM_EVENT_ADAPTOR_ERROR = 0x000800, /* Failure of a host adaptor. */
DM_EVENT_STATUS_MASK = 0xFF0000,
DM_EVENT_SYNC_STATUS = 0x010000, /* Mirror synchronization completed/failed. */
DM_EVENT_TIMEOUT = 0x020000, /* Timeout has occurred. */
DM_EVENT_REGISTRATION_PENDING = 0x1000000, /* Monitor thread is setting-up/shutting-down */
};
#define DM_EVENT_ALL_ERRORS DM_EVENT_ERROR_MASK
struct dm_event_handler;
struct dm_event_handler *dm_event_handler_create(void);
void dm_event_handler_destroy(struct dm_event_handler *dmevh);
/*
* Path of shared library to handle events.
*
 * The dso, device_name and uuid strings are all duplicated, so you do not
 * need to keep the pointers valid after the call succeeds. These setters
 * may return -ENOMEM though.
*/
int dm_event_handler_set_dso(struct dm_event_handler *dmevh, const char *path);
/*
* Identify the device to monitor by exactly one of device_name, uuid or
* device number. String arguments are duplicated, see above.
*/
int dm_event_handler_set_dev_name(struct dm_event_handler *dmevh, const char *device_name);
int dm_event_handler_set_uuid(struct dm_event_handler *dmevh, const char *uuid);
void dm_event_handler_set_major(struct dm_event_handler *dmevh, int major);
void dm_event_handler_set_minor(struct dm_event_handler *dmevh, int minor);
void dm_event_handler_set_timeout(struct dm_event_handler *dmevh, int timeout);
/*
* Specify mask for events to monitor.
*/
void dm_event_handler_set_event_mask(struct dm_event_handler *dmevh,
enum dm_event_mask evmask);
const char *dm_event_handler_get_dso(const struct dm_event_handler *dmevh);
const char *dm_event_handler_get_dev_name(const struct dm_event_handler *dmevh);
const char *dm_event_handler_get_uuid(const struct dm_event_handler *dmevh);
int dm_event_handler_get_major(const struct dm_event_handler *dmevh);
int dm_event_handler_get_minor(const struct dm_event_handler *dmevh);
int dm_event_handler_get_timeout(const struct dm_event_handler *dmevh);
enum dm_event_mask dm_event_handler_get_event_mask(const struct dm_event_handler *dmevh);
/* FIXME Review interface (what about this next thing?) */
int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next);
/*
* Initiate monitoring using dmeventd.
*/
int dm_event_register_handler(const struct dm_event_handler *dmevh);
int dm_event_unregister_handler(const struct dm_event_handler *dmevh);
/* Prototypes for DSO interface, see dmeventd.c, struct dso_data for
detailed descriptions. */
void process_event(struct dm_task *dmt, enum dm_event_mask evmask, void **user);
int register_device(const char *device_name, const char *uuid, int major, int minor, void **user);
int unregister_device(const char *device_name, const char *uuid, int major,
int minor, void **user);
#endif
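/*
 * Illustrative usage sketch (added here, not part of the original header):
 * how a client might use the handler API declared above to start monitoring
 * a device for sync-status and timeout events.  The DSO name is a made-up
 * example, and the success/failure convention of dm_event_register_handler()
 * is defined by the library sources rather than by this header.
 */
#include <errno.h>
#include <stdio.h>

static int monitor_device_example(const char *device_name)
{
	struct dm_event_handler *dmevh;
	int ret;

	if (!(dmevh = dm_event_handler_create()))
		return -ENOMEM;

	/* The strings passed in are duplicated by the library (see above). */
	if (dm_event_handler_set_dso(dmevh, "libdevmapper-event-noop.so") < 0 ||
	    dm_event_handler_set_dev_name(dmevh, device_name) < 0) {
		dm_event_handler_destroy(dmevh);
		return -ENOMEM;
	}

	dm_event_handler_set_timeout(dmevh, 60);
	dm_event_handler_set_event_mask(dmevh,
					DM_EVENT_SYNC_STATUS | DM_EVENT_TIMEOUT);

	ret = dm_event_register_handler(dmevh);
	fprintf(stderr, "dm_event_register_handler() returned %d\n", ret);

	dm_event_handler_destroy(dmevh);
	return ret;
}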

View File

@ -0,0 +1,12 @@
prefix=@prefix@
exec_prefix=@exec_prefix@
libdir=@libdir@
includedir=@includedir@
Name: devmapper-event
Description: device-mapper event library
Version: @DM_LIB_VERSION@
Requires: devmapper
Cflags: -I${includedir}
Libs: -L${libdir} -ldevmapper-event
Libs.private: -lpthread -ldl

View File

@ -0,0 +1,3 @@
process_event
register_device
unregister_device

View File

@ -0,0 +1,51 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
TARGETS = dmevent dmeventd
INSTALL_TYPE = install_dynamic
SOURCES = noop.c
CLEAN_TARGETS = dmevent.o dmeventd.o
ifeq ("@LIB_SUFFIX@","dylib")
LIB_SHARED = libdmeventdnoop.dylib
else
LIB_SHARED = libdmeventdnoop.so
endif
LDFLAGS += -ldl -ldevmapper -lmultilog
include ../make.tmpl
libdmeventdnoop.so: noop.o
dmevent: dmevent.o $(interfacedir)/libdevmapper.$(LIB_SUFFIX) $(top_srcdir)/lib/event/libdmevent.$(LIB_SUFFIX)
$(CC) -o $@ dmevent.o $(LDFLAGS) \
-L$(interfacedir) -L$(DESTDIR)/lib -L$(top_srcdir)/lib/event -L$(top_srcdir)/multilog $(LIBS)
dmeventd: dmeventd.o $(interfacedir)/libdevmapper.$(LIB_SUFFIX) $(top_srcdir)/lib/event/libdmevent.$(LIB_SUFFIX)
$(CC) -o $@ dmeventd.o $(LDFLAGS) \
-L$(interfacedir) -L$(DESTDIR)/lib -L$(top_srcdir)/lib/event -L$(top_srcdir)/multilog -lpthread -ldmevent $(LIBS)
install: $(INSTALL_TYPE)
.PHONY: install_dynamic
install_dynamic: dmeventd
$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) dmeventd $(sbindir)/dmeventd

View File

@ -0,0 +1,238 @@
/*
* Copyright (C) 2005 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "libdevmapper.h"
#include "libdm-event.h"
#include "libmultilog.h"
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <dlfcn.h>
static enum event_type events = ALL_ERRORS; /* All until we can distinguish. */
static char default_dso_name[] = "noop"; /* default DSO is noop */
static int default_reg = 1; /* default action is register */
static uint32_t timeout;
struct event_ops {
int (*dm_register_for_event)(char *dso_name, char *device,
enum event_type event_types);
int (*dm_unregister_for_event)(char *dso_name, char *device,
enum event_type event_types);
int (*dm_get_registered_device)(char **dso_name, char **device,
enum event_type *event_types, int next);
int (*dm_set_event_timeout)(char *device, uint32_t time);
int (*dm_get_event_timeout)(char *device, uint32_t *time);
};
/* Display help. */
static void print_usage(char *name)
{
char *cmd = strrchr(name, '/');
cmd = cmd ? cmd + 1 : name;
printf("Usage::\n"
"%s [options] <device>\n"
"\n"
"Options:\n"
" -d <dso> Specify the DSO to use.\n"
" -h Print this usage.\n"
" -l List registered devices.\n"
" -r Register for event (default).\n"
" -t <timeout> (un)register for timeout event.\n"
" -u Unregister for event.\n"
"\n", cmd);
}
/* Parse command line arguments. */
static int parse_argv(int argc, char **argv, char **dso_name_arg,
char **device_arg, int *reg, int *list)
{
int c;
const char *options = "d:hlrt:u";
while ((c = getopt(argc, argv, options)) != -1) {
switch (c) {
case 'd':
*dso_name_arg = optarg;
break;
case 'h':
print_usage(argv[0]);
exit(EXIT_SUCCESS);
case 'l':
*list = 1;
break;
case 'r':
*reg = 1;
break;
case 't':
events = TIMEOUT;
if (sscanf(optarg, "%"SCNu32, &timeout) != 1){
fprintf(stderr, "invalid timeout '%s'\n",
optarg);
timeout = 0;
}
break;
case 'u':
*reg = 0;
break;
default:
fprintf(stderr, "Unknown option '%c'.\n"
"Try '-h' for help.\n", c);
return 0;
}
}
if (optind >= argc) {
if (!*list) {
fprintf(stderr, "You need to specify a device.\n");
return 0;
}
} else
*device_arg = argv[optind];
return 1;
}
static int lookup_symbol(void *dl, void **symbol, const char *name)
{
if ((*symbol = dlsym(dl, name)))
return 1;
fprintf(stderr, "error looking up %s symbol: %s\n", name, dlerror());
return 0;
}
static int lookup_symbols(void *dl, struct event_ops *e)
{
return lookup_symbol(dl, (void *) &e->dm_register_for_event,
"dm_register_for_event") &&
lookup_symbol(dl, (void *) &e->dm_unregister_for_event,
"dm_unregister_for_event") &&
lookup_symbol(dl, (void *) &e->dm_get_registered_device,
"dm_get_registered_device") &&
lookup_symbol(dl, (void *) &e->dm_set_event_timeout,
"dm_set_event_timeout") &&
lookup_symbol(dl, (void *) &e->dm_get_event_timeout,
"dm_get_event_timeout");
}
int main(int argc, char **argv)
{
void *dl;
struct event_ops e;
int list = 0, next = 0, ret = EXIT_FAILURE, reg = default_reg;
char *device, *device_arg = NULL, *dso_name, *dso_name_arg = NULL;
if (!parse_argv(argc, argv, &dso_name_arg, &device_arg, &reg, &list))
exit(EXIT_FAILURE);
if (device_arg) {
if (!(device = strdup(device_arg)))
exit(EXIT_FAILURE);
} else
device = NULL;
if (dso_name_arg) {
if (!(dso_name = strdup(dso_name_arg)))
exit(EXIT_FAILURE);
} else {
if (!(dso_name = strdup(default_dso_name)))
exit(EXIT_FAILURE);
}
/* FIXME: use -v/-q options to set this */
multilog_add_type(standard, NULL);
multilog_init_verbose(standard, _LOG_DEBUG);
if (!(dl = dlopen("libdmevent.so", RTLD_NOW))){
fprintf(stderr, "Cannot dlopen libdmevent.so: %s\n", dlerror());
goto out;
}
if (!(lookup_symbols(dl, &e)))
goto out;
if (list) {
while (1) {
if ((ret = e.dm_get_registered_device(&dso_name,
&device,
&events, next)))
break;
printf("%s %s 0x%x", dso_name, device, events);
if (events & TIMEOUT){
if ((ret = e.dm_get_event_timeout(device,
&timeout))) {
ret = EXIT_FAILURE;
goto out;
}
printf(" %"PRIu32"\n", timeout);
} else
printf("\n");
if (device_arg)
break;
next = 1;
}
ret = (ret && device_arg) ? EXIT_FAILURE : EXIT_SUCCESS;
goto out;
}
if ((ret = reg ? e.dm_register_for_event(dso_name, device, events) :
e.dm_unregister_for_event(dso_name, device, events))) {
fprintf(stderr, "Failed to %sregister %s: %s\n",
reg ? "": "un", device, strerror(-ret));
ret = EXIT_FAILURE;
} else {
if (reg && (events & TIMEOUT) &&
((ret = e.dm_set_event_timeout(device, timeout)))){
fprintf(stderr, "Failed to set timeout for %s: %s\n",
device, strerror(-ret));
ret = EXIT_FAILURE;
} else {
printf("%s %sregistered successfully.\n",
device, reg ? "" : "un");
ret = EXIT_SUCCESS;
}
}
out:
multilog_del_type(standard);
free(device);
free(dso_name);
exit(ret);
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

File diff suppressed because it is too large

View File

@ -0,0 +1,12 @@
#!/bin/sh
#
# Create test devices for dmeventd
#
trap "rm -f /tmp/tmp.$$" 0 1 2 3 15
echo "0 1024 zero" > /tmp/tmp.$$
dmsetup create test /tmp/tmp.$$
dmsetup create test1 /tmp/tmp.$$
kill -15 $$

View File

@ -0,0 +1,39 @@
/*
* Copyright (C) 2005 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "libdm-event.h"
#include "libmultilog.h"
void process_event(char *device, enum event_type event)
{
log_err("[%s] %s(%d) - Device: %s, Event %d\n",
__FILE__, __func__, __LINE__, device, event);
}
int register_device(char *device)
{
log_err("[%s] %s(%d) - Device: %s\n",
__FILE__, __func__, __LINE__, device);
return 1;
}
int unregister_device(char *device)
{
log_err("[%s] %s(%d) - Device: %s\n",
__FILE__, __func__, __LINE__, device);
return 1;
}

View File

@ -0,0 +1 @@
noop.o noop.d noop.pot: ../make.tmpl ../VERSION Makefile ../include/.symlinks_created noop.c ../include/libdm-event.h ../include/libmultilog.h

View File

@ -0,0 +1,51 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
TARGETS = dmsetup
INSTALL_TYPE = install_dynamic
LIB_PTHREAD = @LIB_PTHREAD@
ifeq ("@STATIC_LINK@", "yes")
TARGETS += dmsetup.static
INSTALL_TYPE += install_static
endif
SOURCES = dmsetup.c
CLEAN_TARGETS = dmsetup dmsetup.static
include ../make.tmpl
dmsetup: $(OBJECTS) $(interfacedir)/libdevmapper.$(LIB_SUFFIX)
$(CC) -o $@ $(OBJECTS) $(CFLAGS) $(LDFLAGS) \
-L$(interfacedir) -L$(DESTDIR)/lib -ldevmapper $(LIBS)
dmsetup.static: $(OBJECTS) $(interfacedir)/libdevmapper.a
$(CC) -o $@ $(OBJECTS) $(CFLAGS) $(LDFLAGS) -static \
-L$(interfacedir) -L$(DESTDIR)/lib -ldevmapper $(LIBS) \
$(LIB_PTHREAD)
install: $(INSTALL_TYPE)
.PHONY: install_dynamic install_static
install_dynamic: dmsetup
$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< $(sbindir)/$<
install_static: dmsetup.static
$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< $(sbindir)/$<

File diff suppressed because it is too large

View File

@ -0,0 +1,4 @@
../lib/libdevmapper.h
../dmeventd/libdevmapper-event.h
../multilog/libmultilog.h
../po/pogen.h

View File

@ -0,0 +1,44 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
SHELL = /bin/sh
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
LN_S = @LN_S@
.PHONY: clean distclean all install pofile
all: .symlinks_created
.symlinks_created: .symlinks Makefile
find . -maxdepth 2 -type l -exec $(RM) \{\} \;
for i in `cat .symlinks`; do $(LN_S) $$i ; done
touch $@
ifeq ("@missingkernel@", "yes")
$(LN_S) ../../kernel/ioctl/dm-ioctl.h linux
endif
distclean:
find . -maxdepth 2 -type l -exec $(RM) \{\} \;
$(RM) Makefile .include_symlinks .symlinks_created configure.h
pofile: all
clean:
install:

View File

@ -0,0 +1,282 @@
/* include/configure.h.in. Generated from configure.in by autoheader. */
/* Define to 1 if the `closedir' function returns void instead of `int'. */
#undef CLOSEDIR_VOID
/* Define to one of `_getb67', `GETB67', `getb67' for Cray-2 and Cray-YMP
systems. This function is required for `alloca.c' support on those systems.
*/
#undef CRAY_STACKSEG_END
/* Define to 1 if using `alloca.c'. */
#undef C_ALLOCA
/* Path to dmeventd binary. */
#undef DMEVENTD_PATH
/* Path to dmeventd pidfile. */
#undef DMEVENTD_PIDFILE
/* Library version */
#undef DM_LIB_VERSION
/* Define to 1 if you have `alloca', as a function or macro. */
#undef HAVE_ALLOCA
/* Define to 1 if you have <alloca.h> and it should be used (not on Ultrix).
*/
#undef HAVE_ALLOCA_H
/* Define to 1 if canonicalize_file_name is available. */
#undef HAVE_CANONICALIZE_FILE_NAME
/* Define to 1 if you have the <ctype.h> header file. */
#undef HAVE_CTYPE_H
/* Define to 1 if you have the <dirent.h> header file. */
#undef HAVE_DIRENT_H
/* Define to 1 if you don't have `vprintf' but do have `_doprnt.' */
#undef HAVE_DOPRNT
/* Define to 1 if you have the <errno.h> header file. */
#undef HAVE_ERRNO_H
/* Define to 1 if you have the <fcntl.h> header file. */
#undef HAVE_FCNTL_H
/* Define to 1 if you have the `fork' function. */
#undef HAVE_FORK
/* Define to 1 if you have the `gethostname' function. */
#undef HAVE_GETHOSTNAME
/* Define to 1 if getline is available. */
#undef HAVE_GETLINE
/* Define to 1 if getopt_long is available. */
#undef HAVE_GETOPTLONG
/* Define to 1 if you have the <getopt.h> header file. */
#undef HAVE_GETOPT_H
/* Define to 1 if you have the `getpagesize' function. */
#undef HAVE_GETPAGESIZE
/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H
/* Define to 1 if you have the `readline' library (-lreadline). */
#undef HAVE_LIBREADLINE
/* Define to 1 if you have the <limits.h> header file. */
#undef HAVE_LIMITS_H
/* Define to 1 if `lstat' has the bug that it succeeds when given the
zero-length file name argument. */
#undef HAVE_LSTAT_EMPTY_STRING_BUG
/* Define to 1 if your system has a GNU libc compatible `malloc' function, and
to 0 otherwise. */
#undef HAVE_MALLOC
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the `memset' function. */
#undef HAVE_MEMSET
/* Define to 1 if you have the `mkdir' function. */
#undef HAVE_MKDIR
/* Define to 1 if you have a working `mmap' system call. */
#undef HAVE_MMAP
/* Define to 1 if you have the `munmap' function. */
#undef HAVE_MUNMAP
/* Define to 1 if you have the <ndir.h> header file, and it defines `DIR'. */
#undef HAVE_NDIR_H
/* Define to 1 if rl_completion_matches() is available. */
#undef HAVE_RL_COMPLETION_MATCHES
/* Define to 1 if you have the `rmdir' function. */
#undef HAVE_RMDIR
/* Define to 1 to include support for selinux. */
#undef HAVE_SELINUX
/* Define to 1 if sepol_check_context is available. */
#undef HAVE_SEPOL
/* Define to 1 if you have the `setlocale' function. */
#undef HAVE_SETLOCALE
/* Define to 1 if `stat' has the bug that it succeeds when given the
zero-length file name argument. */
#undef HAVE_STAT_EMPTY_STRING_BUG
/* Define to 1 if you have the <stdarg.h> header file. */
#undef HAVE_STDARG_H
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdio.h> header file. */
#undef HAVE_STDIO_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define to 1 if you have the `strcasecmp' function. */
#undef HAVE_STRCASECMP
/* Define to 1 if you have the `strchr' function. */
#undef HAVE_STRCHR
/* Define to 1 if you have the `strdup' function. */
#undef HAVE_STRDUP
/* Define to 1 if you have the `strerror' function. */
#undef HAVE_STRERROR
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the `strncasecmp' function. */
#undef HAVE_STRNCASECMP
/* Define to 1 if you have the `strrchr' function. */
#undef HAVE_STRRCHR
/* Define to 1 if you have the `strstr' function. */
#undef HAVE_STRSTR
/* Define to 1 if you have the `strtol' function. */
#undef HAVE_STRTOL
/* Define to 1 if you have the `strtoul' function. */
#undef HAVE_STRTOUL
/* Define to 1 if `st_rdev' is member of `struct stat'. */
#undef HAVE_STRUCT_STAT_ST_RDEV
/* Define to 1 if you have the <sys/dir.h> header file, and it defines `DIR'.
*/
#undef HAVE_SYS_DIR_H
/* Define to 1 if you have the <sys/ioctl.h> header file. */
#undef HAVE_SYS_IOCTL_H
/* Define to 1 if you have the <sys/ndir.h> header file, and it defines `DIR'.
*/
#undef HAVE_SYS_NDIR_H
/* Define to 1 if you have the <sys/param.h> header file. */
#undef HAVE_SYS_PARAM_H
/* Define to 1 if you have the <sys/statvfs.h> header file. */
#undef HAVE_SYS_STATVFS_H
/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H
/* Define to 1 if you have <sys/wait.h> that is POSIX.1 compatible. */
#undef HAVE_SYS_WAIT_H
/* Define to 1 if you have the <termios.h> header file. */
#undef HAVE_TERMIOS_H
/* Define to 1 if you have the `uname' function. */
#undef HAVE_UNAME
/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H
/* Define to 1 if you have the `vfork' function. */
#undef HAVE_VFORK
/* Define to 1 if you have the <vfork.h> header file. */
#undef HAVE_VFORK_H
/* Define to 1 if you have the `vprintf' function. */
#undef HAVE_VPRINTF
/* Define to 1 if `fork' works. */
#undef HAVE_WORKING_FORK
/* Define to 1 if `vfork' works. */
#undef HAVE_WORKING_VFORK
/* Define to 1 if `lstat' dereferences a symlink specified with a trailing
slash. */
#undef LSTAT_FOLLOWS_SLASHED_SYMLINK
/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT
/* Define to the full name of this package. */
#undef PACKAGE_NAME
/* Define to the full name and version of this package. */
#undef PACKAGE_STRING
/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define as the return type of signal handlers (`int' or `void'). */
#undef RETSIGTYPE
/* If using the C implementation of alloca, define if you know the
direction of stack growth for your system; otherwise it will be
automatically deduced at runtime.
STACK_DIRECTION > 0 => grows toward higher addresses
STACK_DIRECTION < 0 => grows toward lower addresses
STACK_DIRECTION = 0 => direction of growth unknown */
#undef STACK_DIRECTION
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
#undef TIME_WITH_SYS_TIME
/* Define to 1 if your <sys/time.h> declares `struct tm'. */
#undef TM_IN_SYS_TIME
/* Define to empty if `const' does not conform to ANSI C. */
#undef const
/* Define to `__inline__' or `__inline' if that's what the C compiler
calls it, or to nothing if 'inline' is not supported under any name. */
#ifndef __cplusplus
#undef inline
#endif
/* Define to rpl_malloc if the replacement function should be used. */
#undef malloc
/* Define to `int' if <sys/types.h> does not define. */
#undef mode_t
/* Define to `long int' if <sys/types.h> does not define. */
#undef off_t
/* Define to `int' if <sys/types.h> does not define. */
#undef pid_t
/* Define to `unsigned int' if <sys/types.h> does not define. */
#undef size_t
/* Define as `fork' if `vfork' does not work. */
#undef vfork

View File

@ -0,0 +1,26 @@
/*
* Copyright (C) 2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef _DM_INTL_H
#define _DM_INTL_H
#ifdef INTL_PACKAGE
# include <libintl.h>
# define _(String) dgettext(INTL_PACKAGE, (String))
#else
# define _(String) (String)
#endif
#endif

View File

@ -0,0 +1,22 @@
/*
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef _DM_KDEV_H
#define _DM_KDEV_H
#define MAJOR(dev) ((dev & 0xfff00) >> 8)
#define MINOR(dev) ((dev & 0xff) | ((dev >> 12) & 0xfff00))
#define MKDEV(ma,mi) ((mi & 0xff) | (ma << 8) | ((mi & ~0xff) << 12))
#endif
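/*
 * Quick self-check sketch (added here, not part of the original header),
 * assuming the expanded 32-bit dev_t encoding implied by the macros above:
 * MKDEV(253, 3) yields 0xfd03, and MAJOR()/MINOR() recover 253 and 3.
 */
#include <assert.h>

static inline void dm_kdev_selftest(void)
{
	unsigned int dev = MKDEV(253, 3);

	assert(dev == 0xfd03);
	assert(MAJOR(dev) == 253);
	assert(MINOR(dev) == 3);
}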

View File

@ -0,0 +1,42 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
/*
* This file must be included first by every library source file.
*/
#ifndef _DM_LIB_H
#define _DM_LIB_H
#define _REENTRANT
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <configure.h>
#include "log.h"
#include "intl.h"
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
/* Define some portable printing types */
#define PRIsize_t "zu"
#endif

View File

@ -0,0 +1,268 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2008 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef _DM_LIST_H
#define _DM_LIST_H
#include <assert.h>
#include <stdio.h>
/*
* A list consists of a list head plus elements.
* Each element has 'next' and 'previous' pointers.
* The list head's pointers point to the first and the last element.
*/
struct list {
struct list *n, *p;
};
/*
* Initialise a list before use.
* The list head's next and previous pointers point back to itself.
*/
#define LIST_INIT(name) struct list name = { &(name), &(name) }
static inline void list_init(struct list *head)
{
head->n = head->p = head;
}
/*
* Insert an element before 'head'.
* If 'head' is the list head, this adds an element to the end of the list.
*/
static inline void list_add(struct list *head, struct list *elem)
{
assert(head->n);
elem->n = head;
elem->p = head->p;
head->p->n = elem;
head->p = elem;
}
/*
* Insert an element after 'head'.
* If 'head' is the list head, this adds an element to the front of the list.
*/
static inline void list_add_h(struct list *head, struct list *elem)
{
assert(head->n);
elem->n = head->n;
elem->p = head;
head->n->p = elem;
head->n = elem;
}
/*
* Delete an element from its list.
* Note that this doesn't change the element itself - it may still be safe
* to follow its pointers.
*/
static inline void list_del(struct list *elem)
{
elem->n->p = elem->p;
elem->p->n = elem->n;
}
/*
* Remove an element from existing list and insert before 'head'.
*/
static inline void list_move(struct list *head, struct list *elem)
{
list_del(elem);
list_add(head, elem);
}
/*
* Is the list empty?
*/
static inline int list_empty(const struct list *head)
{
return head->n == head;
}
/*
* Is this the first element of the list?
*/
static inline int list_start(const struct list *head, const struct list *elem)
{
return elem->p == head;
}
/*
* Is this the last element of the list?
*/
static inline int list_end(const struct list *head, const struct list *elem)
{
return elem->n == head;
}
/*
* Return first element of the list or NULL if empty
*/
static inline struct list *list_first(const struct list *head)
{
return (list_empty(head) ? NULL : head->n);
}
/*
* Return last element of the list or NULL if empty
*/
static inline struct list *list_last(const struct list *head)
{
return (list_empty(head) ? NULL : head->p);
}
/*
* Return the previous element of the list, or NULL if we've reached the start.
*/
static inline struct list *list_prev(const struct list *head, const struct list *elem)
{
return (list_start(head, elem) ? NULL : elem->p);
}
/*
* Return the next element of the list, or NULL if we've reached the end.
*/
static inline struct list *list_next(const struct list *head, const struct list *elem)
{
return (list_end(head, elem) ? NULL : elem->n);
}
/*
* Given the address v of an instance of 'struct list' called 'head'
* contained in a structure of type t, return the containing structure.
*/
#define list_struct_base(v, t, head) \
((t *)((uintptr_t)(v) - (uintptr_t)&((t *) 0)->head))
/*
* Given the address v of an instance of 'struct list list' contained in
* a structure of type t, return the containing structure.
*/
#define list_item(v, t) list_struct_base((v), t, list)
/*
* Given the address v of one known element e in a known structure of type t,
* return another element f.
*/
#define struct_field(v, t, e, f) \
(((t *)((uintptr_t)(v) - (uintptr_t)&((t *) 0)->e))->f)
/*
* Given the address v of a known element e in a known structure of type t,
* return the list head 'list'
*/
#define list_head(v, t, e) struct_field(v, t, e, list)
/*
* Set v to each element of a list in turn.
*/
#define list_iterate(v, head) \
for (v = (head)->n; v != head; v = v->n)
/*
* Set v to each element in a list in turn, starting from the element
* in front of 'start'.
* You can use this to 'unwind' a list_iterate and back out actions on
* already-processed elements.
* If 'start' is 'head' it walks the list backwards.
*/
#define list_uniterate(v, head, start) \
for (v = (start)->p; v != head; v = v->p)
/*
* A safe way to walk a list and delete and free some elements along
* the way.
* t must be defined as a temporary variable of the same type as v.
*/
#define list_iterate_safe(v, t, head) \
for (v = (head)->n, t = v->n; v != head; v = t, t = v->n)
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The 'struct list' variable within the containing structure is 'field'.
*/
#define list_iterate_items_gen(v, head, field) \
for (v = list_struct_base((head)->n, typeof(*v), field); \
&v->field != (head); \
v = list_struct_base(v->field.n, typeof(*v), field))
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct list list' within the containing structure.
*/
#define list_iterate_items(v, head) list_iterate_items_gen(v, (head), list)
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The 'struct list' variable within the containing structure is 'field'.
* t must be defined as a temporary variable of the same type as v.
*/
#define list_iterate_items_gen_safe(v, t, head, field) \
for (v = list_struct_base((head)->n, typeof(*v), field), \
t = list_struct_base(v->field.n, typeof(*v), field); \
&v->field != (head); \
v = t, t = list_struct_base(v->field.n, typeof(*v), field))
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct list list' within the containing structure.
* t must be defined as a temporary variable of the same type as v.
*/
#define list_iterate_items_safe(v, t, head) \
list_iterate_items_gen_safe(v, t, (head), list)
/*
* Walk a list backwards, setting 'v' in turn to the containing structure
* of each item.
* The containing structure should be the same type as 'v'.
* The 'struct list' variable within the containing structure is 'field'.
*/
#define list_iterate_back_items_gen(v, head, field) \
for (v = list_struct_base((head)->p, typeof(*v), field); \
&v->field != (head); \
v = list_struct_base(v->field.p, typeof(*v), field))
/*
* Walk a list backwards, setting 'v' in turn to the containing structure
* of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct list list' within the containing structure.
*/
#define list_iterate_back_items(v, head) list_iterate_back_items_gen(v, (head), list)
/*
* Return the number of elements in a list by walking it.
*/
static inline unsigned int list_size(const struct list *head)
{
unsigned int s = 0;
const struct list *v;
list_iterate(v, head)
s++;
return s;
}
#endif
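/*
 * Illustrative sketch (added here, not part of the original header): a
 * caller embeds 'struct list list' in its own structure, links items onto
 * a LIST_INIT'd head and walks them with list_iterate_items().  The
 * 'struct dev_entry' type and the names used are made up for the example.
 */
struct dev_entry {
	struct list list;
	const char *name;
};

static inline unsigned int count_entries_example(void)
{
	LIST_INIT(devs);
	struct dev_entry a = { .name = "test" };
	struct dev_entry b = { .name = "test1" };
	struct dev_entry *de;
	unsigned int n = 0;

	list_add(&devs, &a.list);	/* appends at the tail */
	list_add(&devs, &b.list);

	list_iterate_items(de, &devs)	/* visits a, then b */
		n++;

	return n;			/* 2 - the same answer as list_size(&devs) */
}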

View File

@ -0,0 +1,56 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef _DM_LOG_H
#define _DM_LOG_H
#include "libdevmapper.h"
#define _LOG_STDERR 128 /* force things to go to stderr, even if loglevel
would make them go to stdout */
#define _LOG_DEBUG 7
#define _LOG_INFO 6
#define _LOG_NOTICE 5
#define _LOG_WARN 4
#define _LOG_ERR 3
#define _LOG_FATAL 2
extern dm_log_fn dm_log;
#define plog(l, x...) dm_log(l, __FILE__, __LINE__, ## x)
#define log_error(x...) plog(_LOG_ERR, x)
#define log_print(x...) plog(_LOG_WARN, x)
#define log_warn(x...) plog(_LOG_WARN | _LOG_STDERR, x)
#define log_verbose(x...) plog(_LOG_NOTICE, x)
#define log_very_verbose(x...) plog(_LOG_INFO, x)
#define log_debug(x...) plog(_LOG_DEBUG, x)
/* System call equivalents */
#define log_sys_error(x, y) \
log_error("%s: %s failed: %s", y, x, strerror(errno))
#define log_sys_very_verbose(x, y) \
log_info("%s: %s failed: %s", y, x, strerror(errno))
#define log_sys_debug(x, y) \
log_debug("%s: %s failed: %s", y, x, strerror(errno))
#define stack log_debug("<backtrace>") /* Backtrace on error */
#define return_0 do { stack; return 0; } while (0)
#define return_NULL do { stack; return NULL; } while (0)
#define goto_out do { stack; goto out; } while (0)
#define goto_bad do { stack; goto bad; } while (0)
#endif
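/*
 * Illustrative sketch (added here, not part of the original header): typical
 * use of the logging and backtrace helpers above inside a library function.
 * _load_table_example() is a made-up name; <stdio.h>, <string.h> and
 * <errno.h> are assumed to be included as usual.
 */
static int _load_table_example(const char *path)
{
	FILE *fp;

	if (!(fp = fopen(path, "r"))) {
		log_sys_error("fopen", path);
		return_0;	/* logs "<backtrace>" and returns 0 */
	}

	log_very_verbose("Parsing table description in %s", path);
	fclose(fp);

	return 1;
}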

View File

@ -0,0 +1,126 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
SHELL = /bin/sh
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
interface = @interface@
kerneldir = @kerneldir@
kernelvsn = @kernelvsn@
tmpdir=@tmpdir@
LN_S = @LN_S@
FS=dmfs-error.c dmfs-lv.c dmfs-root.c dmfs-status.c \
dmfs-super.c dmfs-suspend.c dmfs-table.c dmfs.h
COMMON=dm-linear.c dm-stripe.c \
dm-snapshot.c kcopyd.c \
dm-table.c dm-target.c dm.c dm.h dm-snapshot.h \
dm-exception-store.c kcopyd.h \
dm-io.c dm-io.h dm-log.c dm-log.h dm-daemon.c dm-daemon.h dm-raid1.c
IOCTL=dm-ioctl.c
common_hdrs=common/device-mapper.h
ioctl_hdr=ioctl/dm-ioctl.h
KERNELTAR=/usr/src/linux-$(kernelvsn).tar
fs=$(patsubst %,fs/%,$(FS))
common=$(patsubst %,common/%,$(COMMON))
ioctl=$(patsubst %,ioctl/%,$(IOCTL))
.PHONY: install clean distclean all symlinks patches patches-clean
all: symlinks
symlinks:
for i in $(common) $(fs) $(ioctl); do \
if [ -L $(kerneldir)/drivers/md/`basename $$i` ] ; \
then $(RM) $(kerneldir)/drivers/md/`basename $$i`; \
fi; \
done
for i in $(common) $($(interface)) ; do \
$(LN_S) `pwd`/$$i $(kerneldir)/drivers/md ; \
done
for i in $(common_hdrs) $(ioctl_hdr) ; do \
if [ -L $(kerneldir)/include/linux/`basename $$i` ] ; \
then $(RM) \
$(kerneldir)/include/linux/`basename $$i`; \
fi; \
done
for i in $(common_hdrs) ; do \
$(LN_S) `pwd`/$$i $(kerneldir)/include/linux ; \
done
if [ "$(interface)" == "ioctl" ] ; then \
$(LN_S) `pwd`/$(ioctl_hdr) $(kerneldir)/include/linux ; \
fi
patches:
if [ ! -e $(KERNELTAR) ] ; then \
echo "Can't find kernel source tarball $(KERNELTAR)" ; \
exit 1; \
fi
if [ ! -e $(tmpdir) ] ; then mkdir $(tmpdir); fi
if [ ! -d $(tmpdir) ] ; then \
echo "Working directory $(tmpdir) missing" ; \
exit 1; \
fi
( \
cd $(tmpdir); \
tar xf $(KERNELTAR) ; \
cp -al linux linux-$(kernelvsn) ; \
)
for i in $(common) $($(interface)) ; do \
$(LN_S) `pwd`/$$i $(tmpdir)/linux/drivers/md ; \
done
for i in $(common_hdrs) ; do \
$(LN_S) `pwd`/$$i $(tmpdir)/linux/include/linux ; \
done
if [ "$(interface)" == "ioctl" ] ; then \
$(LN_S) `pwd`/$(ioctl_hdr) $(tmpdir)/linux/include/linux ; \
fi
for i in ../patches/common/linux-$(kernelvsn)* \
../patches/$(interface)/linux-$(kernelvsn)* ; do \
patch -d $(tmpdir)/linux -p1 -i `pwd`/$$i ; \
done
( \
cd $(tmpdir); \
diff -ruN linux-$(kernelvsn) linux ; \
exit 0; \
) > ../patches/linux-$(kernelvsn)-devmapper-$(interface).patch
$(RM) -r $(tmpdir)
patches-clean:
$(RM) -r $(tmpdir)
$(RM) ../patches/linux-$(kernelvsn)-devmapper-$(interface).patch
install:
clean:
distclean:
$(RM) Makefile

View File

@ -0,0 +1,105 @@
/*
* Copyright (C) 2001 Sistina Software (UK) Limited.
*
* This file is released under the LGPL.
*/
#ifndef _LINUX_DEVICE_MAPPER_H
#define _LINUX_DEVICE_MAPPER_H
typedef unsigned long sector_t;
struct dm_target;
struct dm_table;
struct dm_dev;
typedef enum { STATUSTYPE_INFO, STATUSTYPE_TABLE } status_type_t;
union map_info {
void *ptr;
unsigned long long ll;
};
/*
* In the constructor the target parameter will already have the
* table, type, begin and len fields filled in.
*/
typedef int (*dm_ctr_fn) (struct dm_target * target, unsigned int argc,
char **argv);
/*
* The destructor doesn't need to free the dm_target, just
* anything hidden in ti->private.
*/
typedef void (*dm_dtr_fn) (struct dm_target * ti);
/*
* The map function must return:
* < 0: error
* = 0: The target will handle the io by resubmitting it later
* > 0: simple remap complete
*/
typedef int (*dm_map_fn) (struct dm_target * ti, struct buffer_head * bh,
int rw, union map_info *map_context);
/*
* Returns:
* < 0 : error (currently ignored)
* 0 : ended successfully
* 1 : for some reason the io has still not completed (eg,
* multipath target might want to requeue a failed io).
*/
typedef int (*dm_endio_fn) (struct dm_target * ti,
struct buffer_head * bh, int rw, int error,
union map_info *map_context);
typedef void (*dm_suspend_fn) (struct dm_target *ti);
typedef void (*dm_resume_fn) (struct dm_target *ti);
typedef int (*dm_status_fn) (struct dm_target * ti, status_type_t status_type,
char *result, unsigned int maxlen);
void dm_error(const char *message);
/*
* Constructors should call these functions to ensure destination devices
* are opened/closed correctly.
* FIXME: too many arguments.
*/
int dm_get_device(struct dm_target *ti, const char *path, sector_t start,
sector_t len, int mode, struct dm_dev **result);
void dm_put_device(struct dm_target *ti, struct dm_dev *d);
/*
* Information about a target type
*/
struct target_type {
const char *name;
struct module *module;
unsigned version[3];
dm_ctr_fn ctr;
dm_dtr_fn dtr;
dm_map_fn map;
dm_endio_fn end_io;
dm_suspend_fn suspend;
dm_resume_fn resume;
dm_status_fn status;
};
struct dm_target {
struct dm_table *table;
struct target_type *type;
/* target limits */
sector_t begin;
sector_t len;
/* target specific data */
void *private;
/* Used to provide an error string from the ctr */
char *error;
};
int dm_register_target(struct target_type *t);
int dm_unregister_target(struct target_type *t);
#endif /* _LINUX_DEVICE_MAPPER_H */
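/*
 * Illustrative sketch (added here, not part of the original header): a
 * minimal target type that fails every io, showing how the hooks above fit
 * together.  The "example-error" name is made up and the usual module
 * boilerplate (includes, init/exit functions, THIS_MODULE) is omitted.
 */
static int example_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	return 0;		/* nothing to set up; ti->private stays NULL */
}

static void example_dtr(struct dm_target *ti)
{
}

static int example_map(struct dm_target *ti, struct buffer_head *bh,
		       int rw, union map_info *map_context)
{
	return -EIO;		/* < 0 means error, see dm_map_fn above */
}

static struct target_type example_target = {
	.name    = "example-error",
	.version = {1, 0, 0},
	.ctr     = example_ctr,
	.dtr     = example_dtr,
	.map     = example_map,
};

/*
 * dm_register_target(&example_target) would normally be called from the
 * module's init function and dm_unregister_target() from its exit function.
 */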

View File

@ -0,0 +1,113 @@
/*
* Copyright (C) 2003 Sistina Software
*
* This file is released under the LGPL.
*/
#include "dm.h"
#include "dm-daemon.h"
#include <linux/module.h>
#include <linux/sched.h>
static int daemon(void *arg)
{
struct dm_daemon *dd = (struct dm_daemon *) arg;
DECLARE_WAITQUEUE(wq, current);
daemonize();
reparent_to_init();
/* block all signals */
spin_lock_irq(&current->sigmask_lock);
sigfillset(&current->blocked);
flush_signals(current);
spin_unlock_irq(&current->sigmask_lock);
strcpy(current->comm, dd->name);
atomic_set(&dd->please_die, 0);
add_wait_queue(&dd->job_queue, &wq);
down(&dd->run_lock);
up(&dd->start_lock);
/*
* dd->fn() could do anything, very likely it will
* suspend. So we can't set the state to
* TASK_INTERRUPTIBLE before calling it. In order to
* prevent a race with a waking thread we do this little
* dance with the dd->woken variable.
*/
while (1) {
do {
set_current_state(TASK_RUNNING);
if (atomic_read(&dd->please_die))
goto out;
atomic_set(&dd->woken, 0);
dd->fn();
yield();
set_current_state(TASK_INTERRUPTIBLE);
} while (atomic_read(&dd->woken));
schedule();
}
out:
remove_wait_queue(&dd->job_queue, &wq);
up(&dd->run_lock);
return 0;
}
int dm_daemon_start(struct dm_daemon *dd, const char *name, void (*fn)(void))
{
pid_t pid = 0;
/*
* Initialise the dm_daemon.
*/
dd->fn = fn;
strncpy(dd->name, name, sizeof(dd->name) - 1);
sema_init(&dd->start_lock, 1);
sema_init(&dd->run_lock, 1);
init_waitqueue_head(&dd->job_queue);
/*
* Start the new thread.
*/
down(&dd->start_lock);
pid = kernel_thread(daemon, dd, 0);
if (pid <= 0) {
DMERR("Failed to start %s thread", name);
return -EAGAIN;
}
/*
* wait for the daemon to up this mutex.
*/
down(&dd->start_lock);
up(&dd->start_lock);
return 0;
}
void dm_daemon_stop(struct dm_daemon *dd)
{
atomic_set(&dd->please_die, 1);
dm_daemon_wake(dd);
down(&dd->run_lock);
up(&dd->run_lock);
}
void dm_daemon_wake(struct dm_daemon *dd)
{
atomic_set(&dd->woken, 1);
wake_up_interruptible(&dd->job_queue);
}
EXPORT_SYMBOL(dm_daemon_start);
EXPORT_SYMBOL(dm_daemon_stop);
EXPORT_SYMBOL(dm_daemon_wake);

View File

@ -0,0 +1,29 @@
/*
* Copyright (C) 2003 Sistina Software
*
* This file is released under the LGPL.
*/
#ifndef DM_DAEMON_H
#define DM_DAEMON_H
#include <asm/atomic.h>
#include <asm/semaphore.h>
struct dm_daemon {
void (*fn)(void);
char name[16];
atomic_t please_die;
struct semaphore start_lock;
struct semaphore run_lock;
atomic_t woken;
wait_queue_head_t job_queue;
};
int dm_daemon_start(struct dm_daemon *dd, const char *name, void (*fn)(void));
void dm_daemon_stop(struct dm_daemon *dd);
void dm_daemon_wake(struct dm_daemon *dd);
int dm_daemon_running(struct dm_daemon *dd);
#endif
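/*
 * Illustrative sketch (added here, not part of the original header): a
 * subsystem keeps a static struct dm_daemon, starts it once and wakes it
 * whenever new work is queued.  example_do_work() and the "kexampled"
 * thread name are made up for the example.
 */
static struct dm_daemon _example_daemon;

static void example_do_work(void)
{
	/* drain whatever queue of work this daemon services */
}

static int example_init(void)
{
	return dm_daemon_start(&_example_daemon, "kexampled", example_do_work);
}

static void example_queue_work(void)
{
	/* ...add an item to the queue, then poke the thread... */
	dm_daemon_wake(&_example_daemon);
}

static void example_exit(void)
{
	dm_daemon_stop(&_example_daemon);
}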

View File

@ -0,0 +1,673 @@
/*
* dm-snapshot.c
*
* Copyright (C) 2001-2002 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include "dm-snapshot.h"
#include "dm-io.h"
#include "kcopyd.h"
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
/*-----------------------------------------------------------------
* Persistent snapshots, by persistent we mean that the snapshot
* will survive a reboot.
*---------------------------------------------------------------*/
/*
* We need to store a record of which parts of the origin have
* been copied to the snapshot device. The snapshot code
* requires that we copy exception chunks to chunk aligned areas
* of the COW store. It makes sense therefore, to store the
* metadata in chunk size blocks.
*
* There is no backward or forward compatibility implemented,
* snapshots with different disk versions than the kernel will
* not be usable. It is expected that "lvcreate" will blank out
* the start of a fresh COW device before calling the snapshot
* constructor.
*
* The first chunk of the COW device just contains the header.
* After this there is a chunk filled with exception metadata,
* followed by as many exception chunks as can fit in the
* metadata areas.
*
* All on disk structures are in little-endian format. The end
* of the exceptions info is indicated by an exception with a
* new_chunk of 0, which is invalid since it would point to the
* header chunk.
*/
/*
* Magic for persistent snapshots: "SnAp" - Feeble isn't it.
*/
#define SNAP_MAGIC 0x70416e53
/*
* The on-disk version of the metadata.
*/
#define SNAPSHOT_DISK_VERSION 1
struct disk_header {
uint32_t magic;
/*
* Is this snapshot valid?  There is no way of recovering
* an invalid snapshot.
*/
uint32_t valid;
/*
* Simple, incrementing version.  No backward
* compatibility.
*/
uint32_t version;
/* In sectors */
uint32_t chunk_size;
};
struct disk_exception {
uint64_t old_chunk;
uint64_t new_chunk;
};
struct commit_callback {
void (*callback)(void *, int success);
void *context;
};
/*
* The top level structure for a persistent exception store.
*/
struct pstore {
struct dm_snapshot *snap; /* up pointer to my snapshot */
int version;
int valid;
uint32_t chunk_size;
uint32_t exceptions_per_area;
/*
* Now that we have an asynchronous kcopyd there is no
* need for large chunk sizes, so it won't hurt to have a
* whole chunk's worth of metadata in memory at once.
*/
void *area;
/*
* Used to keep track of which metadata area the data in
* 'chunk' refers to.
*/
uint32_t current_area;
/*
* The next free chunk for an exception.
*/
uint32_t next_free;
/*
* The index of next free exception in the current
* metadata area.
*/
uint32_t current_committed;
atomic_t pending_count;
uint32_t callback_count;
struct commit_callback *callbacks;
};
static inline unsigned int sectors_to_pages(unsigned int sectors)
{
return sectors / (PAGE_SIZE / SECTOR_SIZE);
}
static int alloc_area(struct pstore *ps)
{
int r = -ENOMEM;
size_t i, len, nr_pages;
struct page *page, *last = NULL;
len = ps->chunk_size << SECTOR_SHIFT;
/*
* Allocate the chunk_size block of memory that will hold
* a single metadata area.
*/
ps->area = vmalloc(len);
if (!ps->area)
return r;
nr_pages = sectors_to_pages(ps->chunk_size);
/*
* We lock the pages for ps->area into memory since
* they'll be doing a lot of io. We also chain them
* together ready for dm-io.
*/
for (i = 0; i < nr_pages; i++) {
page = vmalloc_to_page(ps->area + (i * PAGE_SIZE));
LockPage(page);
if (last)
last->list.next = &page->list;
last = page;
}
return 0;
}
static void free_area(struct pstore *ps)
{
size_t i, nr_pages;
struct page *page;
nr_pages = sectors_to_pages(ps->chunk_size);
for (i = 0; i < nr_pages; i++) {
page = vmalloc_to_page(ps->area + (i * PAGE_SIZE));
page->list.next = NULL;
UnlockPage(page);
}
vfree(ps->area);
}
/*
* Read or write a chunk aligned and sized block of data from a device.
*/
static int chunk_io(struct pstore *ps, uint32_t chunk, int rw)
{
struct io_region where;
unsigned int bits;
where.dev = ps->snap->cow->dev;
where.sector = ps->chunk_size * chunk;
where.count = ps->chunk_size;
return dm_io_sync(1, &where, rw, vmalloc_to_page(ps->area), 0, &bits);
}
/*
* Read or write a metadata area, remembering to skip the first
* chunk which holds the header.
*/
static int area_io(struct pstore *ps, uint32_t area, int rw)
{
int r;
uint32_t chunk;
/* convert a metadata area index to a chunk index */
chunk = 1 + ((ps->exceptions_per_area + 1) * area);
r = chunk_io(ps, chunk, rw);
if (r)
return r;
ps->current_area = area;
return 0;
}
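/*
 * Worked example (added for illustration, not part of the original file):
 * with 512-byte sectors and a 16-sector (8KiB) chunk, one metadata area
 * holds 8192 / sizeof(struct disk_exception) = 8192 / 16 = 512 exceptions.
 * Chunk 0 is the header, area 0's metadata lives in chunk 1 followed by 512
 * data chunks, so area 1 starts at chunk 1 + (512 + 1) * 1 = 514, area 2 at
 * chunk 1027, and so on - exactly the formula used by area_io() above.
 */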
static int zero_area(struct pstore *ps, uint32_t area)
{
memset(ps->area, 0, ps->chunk_size << SECTOR_SHIFT);
return area_io(ps, area, WRITE);
}
static int read_header(struct pstore *ps, int *new_snapshot)
{
int r;
struct disk_header *dh;
r = chunk_io(ps, 0, READ);
if (r)
return r;
dh = (struct disk_header *) ps->area;
if (le32_to_cpu(dh->magic) == 0) {
*new_snapshot = 1;
} else if (le32_to_cpu(dh->magic) == SNAP_MAGIC) {
*new_snapshot = 0;
ps->valid = le32_to_cpu(dh->valid);
ps->version = le32_to_cpu(dh->version);
ps->chunk_size = le32_to_cpu(dh->chunk_size);
} else {
DMWARN("Invalid/corrupt snapshot");
r = -ENXIO;
}
return r;
}
static int write_header(struct pstore *ps)
{
struct disk_header *dh;
memset(ps->area, 0, ps->chunk_size << SECTOR_SHIFT);
dh = (struct disk_header *) ps->area;
dh->magic = cpu_to_le32(SNAP_MAGIC);
dh->valid = cpu_to_le32(ps->valid);
dh->version = cpu_to_le32(ps->version);
dh->chunk_size = cpu_to_le32(ps->chunk_size);
return chunk_io(ps, 0, WRITE);
}
/*
* Access functions for the disk exceptions, these do the endian conversions.
*/
static struct disk_exception *get_exception(struct pstore *ps, uint32_t index)
{
if (index >= ps->exceptions_per_area)
return NULL;
return ((struct disk_exception *) ps->area) + index;
}
static int read_exception(struct pstore *ps,
uint32_t index, struct disk_exception *result)
{
struct disk_exception *e;
e = get_exception(ps, index);
if (!e)
return -EINVAL;
/* copy it */
result->old_chunk = le64_to_cpu(e->old_chunk);
result->new_chunk = le64_to_cpu(e->new_chunk);
return 0;
}
static int write_exception(struct pstore *ps,
uint32_t index, struct disk_exception *de)
{
struct disk_exception *e;
e = get_exception(ps, index);
if (!e)
return -EINVAL;
/* copy it */
e->old_chunk = cpu_to_le64(de->old_chunk);
e->new_chunk = cpu_to_le64(de->new_chunk);
return 0;
}
/*
* Registers the exceptions that are present in the current area.
* 'full' is filled in to indicate if the area has been
* filled.
*/
static int insert_exceptions(struct pstore *ps, int *full)
{
int r;
unsigned int i;
struct disk_exception de;
/* presume the area is full */
*full = 1;
for (i = 0; i < ps->exceptions_per_area; i++) {
r = read_exception(ps, i, &de);
if (r)
return r;
/*
* If the new_chunk is pointing at the start of
* the COW device, where the first metadata area
* is, we know that we've hit the end of the
* exceptions. Therefore the area is not full.
*/
if (de.new_chunk == 0LL) {
ps->current_committed = i;
*full = 0;
break;
}
/*
* Keep track of the start of the free chunks.
*/
if (ps->next_free <= de.new_chunk)
ps->next_free = de.new_chunk + 1;
/*
* Otherwise we add the exception to the snapshot.
*/
r = dm_add_exception(ps->snap, de.old_chunk, de.new_chunk);
if (r)
return r;
}
return 0;
}
static int read_exceptions(struct pstore *ps)
{
uint32_t area;
int r, full = 1;
/*
* Keep reading chunks and inserting exceptions until
* we find a partially full area.
*/
for (area = 0; full; area++) {
r = area_io(ps, area, READ);
if (r)
return r;
r = insert_exceptions(ps, &full);
if (r)
return r;
}
return 0;
}
static inline struct pstore *get_info(struct exception_store *store)
{
return (struct pstore *) store->context;
}
static void persistent_fraction_full(struct exception_store *store,
sector_t *numerator, sector_t *denominator)
{
*numerator = get_info(store)->next_free * store->snap->chunk_size;
*denominator = get_dev_size(store->snap->cow->dev);
}
static void persistent_destroy(struct exception_store *store)
{
struct pstore *ps = get_info(store);
dm_io_put(sectors_to_pages(ps->chunk_size));
vfree(ps->callbacks);
free_area(ps);
kfree(ps);
}
static int persistent_read_metadata(struct exception_store *store)
{
int r, new_snapshot;
struct pstore *ps = get_info(store);
/*
* Read the snapshot header.
*/
r = read_header(ps, &new_snapshot);
if (r)
return r;
/*
* Do we need to setup a new snapshot ?
*/
if (new_snapshot) {
r = write_header(ps);
if (r) {
DMWARN("write_header failed");
return r;
}
r = zero_area(ps, 0);
if (r) {
DMWARN("zero_area(0) failed");
return r;
}
} else {
/*
* Sanity checks.
*/
if (!ps->valid) {
DMWARN("snapshot is marked invalid");
return -EINVAL;
}
if (ps->version != SNAPSHOT_DISK_VERSION) {
DMWARN("unable to handle snapshot disk version %d",
ps->version);
return -EINVAL;
}
/*
* Read the metadata.
*/
r = read_exceptions(ps);
if (r)
return r;
}
return 0;
}
static int persistent_prepare(struct exception_store *store,
struct exception *e)
{
struct pstore *ps = get_info(store);
uint32_t stride;
sector_t size = get_dev_size(store->snap->cow->dev);
/* Is there enough room ? */
if (size < ((ps->next_free + 1) * store->snap->chunk_size))
return -ENOSPC;
e->new_chunk = ps->next_free;
/*
* Move onto the next free pending, making sure to take
* into account the location of the metadata chunks.
*/
stride = (ps->exceptions_per_area + 1);
if ((++ps->next_free % stride) == 1)
ps->next_free++;
atomic_inc(&ps->pending_count);
return 0;
}
static void persistent_commit(struct exception_store *store,
struct exception *e,
void (*callback) (void *, int success),
void *callback_context)
{
int r;
unsigned int i;
struct pstore *ps = get_info(store);
struct disk_exception de;
struct commit_callback *cb;
de.old_chunk = e->old_chunk;
de.new_chunk = e->new_chunk;
write_exception(ps, ps->current_committed++, &de);
/*
* Add the callback to the back of the array. This code
* is the only place where the callback array is
* manipulated, and we know that it will never be called
* multiple times concurrently.
*/
cb = ps->callbacks + ps->callback_count++;
cb->callback = callback;
cb->context = callback_context;
/*
* If there are no more exceptions in flight, or we have
* filled this metadata area we commit the exceptions to
* disk.
*/
if (atomic_dec_and_test(&ps->pending_count) ||
(ps->current_committed == ps->exceptions_per_area)) {
r = area_io(ps, ps->current_area, WRITE);
if (r)
ps->valid = 0;
for (i = 0; i < ps->callback_count; i++) {
cb = ps->callbacks + i;
cb->callback(cb->context, r == 0 ? 1 : 0);
}
ps->callback_count = 0;
}
/*
* Have we completely filled the current area ?
*/
if (ps->current_committed == ps->exceptions_per_area) {
ps->current_committed = 0;
r = zero_area(ps, ps->current_area + 1);
if (r)
ps->valid = 0;
}
}
static void persistent_drop(struct exception_store *store)
{
struct pstore *ps = get_info(store);
ps->valid = 0;
if (write_header(ps))
DMWARN("write header failed");
}
int dm_create_persistent(struct exception_store *store, uint32_t chunk_size)
{
int r;
struct pstore *ps;
r = dm_io_get(sectors_to_pages(chunk_size));
if (r)
return r;
/* allocate the pstore */
ps = kmalloc(sizeof(*ps), GFP_KERNEL);
if (!ps) {
r = -ENOMEM;
goto bad;
}
ps->snap = store->snap;
ps->valid = 1;
ps->version = SNAPSHOT_DISK_VERSION;
ps->chunk_size = chunk_size;
ps->exceptions_per_area = (chunk_size << SECTOR_SHIFT) /
sizeof(struct disk_exception);
ps->next_free = 2; /* skipping the header and first area */
ps->current_committed = 0;
r = alloc_area(ps);
if (r)
goto bad;
/*
* Allocate space for all the callbacks.
*/
ps->callback_count = 0;
atomic_set(&ps->pending_count, 0);
ps->callbacks = vcalloc(ps->exceptions_per_area,
sizeof(*ps->callbacks));
if (!ps->callbacks) {
r = -ENOMEM;
goto bad;
}
store->destroy = persistent_destroy;
store->read_metadata = persistent_read_metadata;
store->prepare_exception = persistent_prepare;
store->commit_exception = persistent_commit;
store->drop_snapshot = persistent_drop;
store->fraction_full = persistent_fraction_full;
store->context = ps;
return 0;
bad:
dm_io_put(sectors_to_pages(chunk_size));
if (ps) {
if (ps->callbacks)
vfree(ps->callbacks);
kfree(ps);
}
return r;
}
/*-----------------------------------------------------------------
* Implementation of the store for non-persistent snapshots.
*---------------------------------------------------------------*/
struct transient_c {
sector_t next_free;
};
void transient_destroy(struct exception_store *store)
{
kfree(store->context);
}
int transient_read_metadata(struct exception_store *store)
{
return 0;
}
int transient_prepare(struct exception_store *store, struct exception *e)
{
struct transient_c *tc = (struct transient_c *) store->context;
sector_t size = get_dev_size(store->snap->cow->dev);
if (size < (tc->next_free + store->snap->chunk_size))
return -1;
e->new_chunk = sector_to_chunk(store->snap, tc->next_free);
tc->next_free += store->snap->chunk_size;
return 0;
}
void transient_commit(struct exception_store *store,
struct exception *e,
void (*callback) (void *, int success),
void *callback_context)
{
/* Just succeed */
callback(callback_context, 1);
}
static void transient_fraction_full(struct exception_store *store,
sector_t *numerator, sector_t *denominator)
{
*numerator = ((struct transient_c *) store->context)->next_free;
*denominator = get_dev_size(store->snap->cow->dev);
}
int dm_create_transient(struct exception_store *store,
struct dm_snapshot *s, int blocksize)
{
struct transient_c *tc;
memset(store, 0, sizeof(*store));
store->destroy = transient_destroy;
store->read_metadata = transient_read_metadata;
store->prepare_exception = transient_prepare;
store->commit_exception = transient_commit;
store->fraction_full = transient_fraction_full;
store->snap = s;
tc = kmalloc(sizeof(struct transient_c), GFP_KERNEL);
if (!tc)
return -ENOMEM;
tc->next_free = 0;
store->context = tc;
return 0;
}

View File

@ -0,0 +1,361 @@
/*
* Copyright (C) 2003 Sistina Software
*
* This file is released under the GPL.
*/
#include "dm-io.h"
#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/bitops.h>
/* FIXME: can we shrink this ? */
struct io_context {
int rw;
unsigned int error;
atomic_t count;
struct task_struct *sleeper;
io_notify_fn callback;
void *context;
};
/*
* We maintain a pool of buffer heads for dispatching the io.
*/
static unsigned int _num_bhs;
static mempool_t *_buffer_pool;
/*
* io contexts are only dynamically allocated for asynchronous
* io. Since async io is likely to be the majority of io we'll
* have the same number of io contexts as buffer heads ! (FIXME:
* must reduce this).
*/
mempool_t *_io_pool;
static void *alloc_bh(int gfp_mask, void *pool_data)
{
struct buffer_head *bh;
bh = kmem_cache_alloc(bh_cachep, gfp_mask);
if (bh) {
bh->b_reqnext = NULL;
init_waitqueue_head(&bh->b_wait);
INIT_LIST_HEAD(&bh->b_inode_buffers);
}
return bh;
}
static void *alloc_io(int gfp_mask, void *pool_data)
{
return kmalloc(sizeof(struct io_context), gfp_mask);
}
static void free_io(void *element, void *pool_data)
{
kfree(element);
}
static unsigned int pages_to_buffers(unsigned int pages)
{
return 4 * pages; /* too many ? */
}
static int resize_pool(unsigned int new_bhs)
{
int r = 0;
if (_buffer_pool) {
if (new_bhs == 0) {
/* free off the pools */
mempool_destroy(_buffer_pool);
mempool_destroy(_io_pool);
_buffer_pool = _io_pool = NULL;
} else {
/* resize the pools */
r = mempool_resize(_buffer_pool, new_bhs, GFP_KERNEL);
if (!r)
r = mempool_resize(_io_pool,
new_bhs, GFP_KERNEL);
}
} else {
/* create new pools */
_buffer_pool = mempool_create(new_bhs, alloc_bh,
mempool_free_slab, bh_cachep);
if (!_buffer_pool)
r = -ENOMEM;
_io_pool = mempool_create(new_bhs, alloc_io, free_io, NULL);
if (!_io_pool) {
mempool_destroy(_buffer_pool);
_buffer_pool = NULL;
r = -ENOMEM;
}
}
if (!r)
_num_bhs = new_bhs;
return r;
}
int dm_io_get(unsigned int num_pages)
{
return resize_pool(_num_bhs + pages_to_buffers(num_pages));
}
void dm_io_put(unsigned int num_pages)
{
resize_pool(_num_bhs - pages_to_buffers(num_pages));
}
/*-----------------------------------------------------------------
* We need to keep track of which region a buffer is doing io
* for. In order to save a memory allocation we store this in an
* unused field of the buffer head, and provide these access
* functions.
*
* FIXME: add compile time check that an unsigned int can fit
* into a pointer.
*
*---------------------------------------------------------------*/
static inline void bh_set_region(struct buffer_head *bh, unsigned int region)
{
bh->b_journal_head = (void *) region;
}
static inline int bh_get_region(struct buffer_head *bh)
{
return (unsigned int) bh->b_journal_head;
}
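/*
 * Added sketch (not part of the original source), addressing the FIXME
 * above: a compile-time assertion that an unsigned int fits in a
 * pointer, using the negative-array-size trick that predates
 * BUILD_BUG_ON().
 */
typedef char __bh_region_fits_in_pointer
	[sizeof(unsigned int) <= sizeof(void *) ? 1 : -1];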
/*-----------------------------------------------------------------
* We need an io object to keep track of the number of bhs that
* have been dispatched for a particular io.
*---------------------------------------------------------------*/
static void dec_count(struct io_context *io, unsigned int region, int error)
{
if (error)
set_bit(region, &io->error);
if (atomic_dec_and_test(&io->count)) {
if (io->sleeper)
wake_up_process(io->sleeper);
else {
int r = io->error;
io_notify_fn fn = io->callback;
void *context = io->context;
mempool_free(io, _io_pool);
fn(r, context);
}
}
}
static void endio(struct buffer_head *bh, int uptodate)
{
struct io_context *io = (struct io_context *) bh->b_private;
if (!uptodate && io->rw != WRITE) {
/*
* We need to zero this region, otherwise people
* like kcopyd may write the arbitrary contents
* of the page.
*/
memset(bh->b_data, 0, bh->b_size);
}
dec_count((struct io_context *) bh->b_private,
bh_get_region(bh), !uptodate);
mempool_free(bh, _buffer_pool);
}
/*
* Primitives for alignment calculations.
*/
int fls(unsigned n)
{
return generic_fls32(n);
}
static inline int log2_floor(unsigned n)
{
return ffs(n) - 1;
}
static inline int log2_align(unsigned n)
{
return fls(n) - 1;
}
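/*
 * Note added for clarity: despite the names, log2_floor() returns the
 * index of the lowest set bit of n (its power-of-two alignment, e.g.
 * log2_floor(12) == 2), while log2_align() returns the index of the
 * highest set bit, i.e. floor(log2(n)) (e.g. log2_align(12) == 3).
 * do_page() below relies on exactly that behaviour.
 */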
/*
* Returns the next block for io.
*/
static int do_page(kdev_t dev, sector_t *block, sector_t end_block,
unsigned int block_size,
struct page *p, unsigned int offset,
unsigned int region, struct io_context *io)
{
struct buffer_head *bh;
sector_t b = *block;
sector_t blocks_per_page = PAGE_SIZE / block_size;
unsigned int this_size; /* holds the size of the current io */
sector_t len;
if (!blocks_per_page) {
DMERR("dm-io: PAGE_SIZE (%lu) < block_size (%u) unsupported",
PAGE_SIZE, block_size);
return 0;
}
while ((offset < PAGE_SIZE) && (b != end_block)) {
bh = mempool_alloc(_buffer_pool, GFP_NOIO);
init_buffer(bh, endio, io);
bh_set_region(bh, region);
/*
* Block size must be a power of 2 and aligned
* correctly.
*/
len = min(end_block - b, blocks_per_page);
len = min(len, blocks_per_page - offset / block_size);
if (!len) {
DMERR("dm-io: Invalid offset/block_size (%u/%u).",
offset, block_size);
return 0;
}
this_size = 1 << log2_align(len);
if (b)
this_size = min(this_size,
(unsigned) 1 << log2_floor(b));
/*
* Add in the job offset.
*/
bh->b_blocknr = (b / this_size);
bh->b_size = block_size * this_size;
set_bh_page(bh, p, offset);
bh->b_this_page = bh;
bh->b_dev = dev;
atomic_set(&bh->b_count, 1);
bh->b_state = ((1 << BH_Uptodate) | (1 << BH_Mapped) |
(1 << BH_Lock));
if (io->rw == WRITE)
clear_bit(BH_Dirty, &bh->b_state);
atomic_inc(&io->count);
submit_bh(io->rw, bh);
b += this_size;
offset += block_size * this_size;
}
*block = b;
return (b == end_block);
}
static void do_region(unsigned int region, struct io_region *where,
struct page *page, unsigned int offset,
struct io_context *io)
{
unsigned int block_size = get_hardsect_size(where->dev);
unsigned int sblock_size = block_size >> 9;
sector_t block = where->sector / sblock_size;
sector_t end_block = (where->sector + where->count) / sblock_size;
while (1) {
if (do_page(where->dev, &block, end_block, block_size,
page, offset, region, io))
break;
offset = 0; /* only offset the first page */
page = list_entry(page->list.next, struct page, list);
}
}
static void dispatch_io(unsigned int num_regions, struct io_region *where,
struct page *pages, unsigned int offset,
struct io_context *io)
{
int i;
for (i = 0; i < num_regions; i++)
if (where[i].count)
do_region(i, where + i, pages, offset, io);
/*
 * Drop the extra reference that we were holding to avoid
* the io being completed too early.
*/
dec_count(io, 0, 0);
}
/*
* Synchronous io
*/
int dm_io_sync(unsigned int num_regions, struct io_region *where,
int rw, struct page *pages, unsigned int offset,
unsigned int *error_bits)
{
struct io_context io;
BUG_ON(num_regions > 1 && rw != WRITE);
io.rw = rw;
io.error = 0;
atomic_set(&io.count, 1); /* see dispatch_io() */
io.sleeper = current;
dispatch_io(num_regions, where, pages, offset, &io);
run_task_queue(&tq_disk);
while (1) {
set_current_state(TASK_UNINTERRUPTIBLE);
if (!atomic_read(&io.count))
break;
schedule();
}
set_current_state(TASK_RUNNING);
*error_bits = io.error;
return io.error ? -EIO : 0;
}
/*
* Asynchronous io
*/
int dm_io_async(unsigned int num_regions, struct io_region *where, int rw,
struct page *pages, unsigned int offset,
io_notify_fn fn, void *context)
{
struct io_context *io = mempool_alloc(_io_pool, GFP_NOIO);
io->rw = rw;
io->error = 0;
atomic_set(&io->count, 1); /* see dispatch_io() */
io->sleeper = NULL;
io->callback = fn;
io->context = context;
dispatch_io(num_regions, where, pages, offset, io);
return 0;
}
EXPORT_SYMBOL(dm_io_get);
EXPORT_SYMBOL(dm_io_put);
EXPORT_SYMBOL(dm_io_sync);
EXPORT_SYMBOL(dm_io_async);

View File

@ -0,0 +1,86 @@
/*
* Copyright (C) 2003 Sistina Software
*
* This file is released under the GPL.
*/
#ifndef _DM_IO_H
#define _DM_IO_H
#include "dm.h"
#include <linux/list.h>
/* Move these to bitops.h eventually */
/* Improved generic_fls algorithm (in 2.4 there is no generic_fls so far) */
/* (c) 2002, D.Phillips and Sistina Software */
/* Licensed under Version 2 of the GPL */
static unsigned generic_fls8(unsigned n)
{
return n & 0xf0 ?
n & 0xc0 ? (n >> 7) + 7 : (n >> 5) + 5:
n & 0x0c ? (n >> 3) + 3 : n - ((n + 1) >> 2);
}
static inline unsigned generic_fls16(unsigned n)
{
return n & 0xff00? generic_fls8(n >> 8) + 8 : generic_fls8(n);
}
static inline unsigned generic_fls32(unsigned n)
{
return n & 0xffff0000 ? generic_fls16(n >> 16) + 16 : generic_fls16(n);
}
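/*
 * Worked example (added for clarity): generic_fls32(0) == 0,
 * generic_fls32(1) == 1, generic_fls32(0x10) == 5 and
 * generic_fls32(0x80000000) == 32, i.e. the 1-based index of the
 * highest set bit.
 */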
/* FIXME make this configurable */
#define DM_MAX_IO_REGIONS 8
struct io_region {
kdev_t dev;
sector_t sector;
sector_t count;
};
/*
* 'error' is a bitset, with each bit indicating whether an error
* occurred doing io to the corresponding region.
*/
typedef void (*io_notify_fn)(unsigned int error, void *context);
/*
* Before anyone uses the IO interface they should call
* dm_io_get(), specifying roughly how many pages they are
* expecting to perform io on concurrently.
*
* This function may block.
*/
int dm_io_get(unsigned int num_pages);
void dm_io_put(unsigned int num_pages);
/*
* Synchronous IO.
*
* Please ensure that the rw flag in the next two functions is
* either READ or WRITE, ie. we don't take READA. Any
* regions with a zero count field will be ignored.
*/
int dm_io_sync(unsigned int num_regions, struct io_region *where, int rw,
struct page *pages, unsigned int offset,
unsigned int *error_bits);
/*
 * Asynchronous IO.
*
* The 'where' array may be safely allocated on the stack since
* the function takes a copy.
*/
int dm_io_async(unsigned int num_regions, struct io_region *where, int rw,
struct page *pages, unsigned int offset,
io_notify_fn fn, void *context);
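/*
 * Illustrative sketch, not part of the original source: a possible
 * synchronous read of the first page of a device.  It assumes the
 * caller has already reserved pool pages with dm_io_get() and that
 * 'page' points at a single page to fill.
 */
static inline int dm_io_example_read_first_page(kdev_t dev, struct page *page)
{
	struct io_region where;
	unsigned int error_bits;

	where.dev = dev;
	where.sector = 0;
	where.count = PAGE_SIZE >> 9;	/* one page worth of sectors */

	return dm_io_sync(1, &where, READ, page, 0, &error_bits);
}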
#endif

View File

@ -0,0 +1,124 @@
/*
* Copyright (C) 2001-2003 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include "dm.h"
#include <linux/module.h>
#include <linux/init.h>
#include <linux/blkdev.h>
#include <linux/slab.h>
/*
* Linear: maps a linear range of a device.
*/
struct linear_c {
struct dm_dev *dev;
sector_t start;
};
/*
* Construct a linear mapping: <dev_path> <offset>
*/
static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
struct linear_c *lc;
if (argc != 2) {
ti->error = "dm-linear: Invalid argument count";
return -EINVAL;
}
lc = kmalloc(sizeof(*lc), GFP_KERNEL);
if (lc == NULL) {
ti->error = "dm-linear: Cannot allocate linear context";
return -ENOMEM;
}
if (sscanf(argv[1], SECTOR_FORMAT, &lc->start) != 1) {
ti->error = "dm-linear: Invalid device sector";
goto bad;
}
if (dm_get_device(ti, argv[0], lc->start, ti->len,
dm_table_get_mode(ti->table), &lc->dev)) {
ti->error = "dm-linear: Device lookup failed";
goto bad;
}
ti->private = lc;
return 0;
bad:
kfree(lc);
return -EINVAL;
}
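/*
 * Example added for clarity (not from the original source): a table
 * line such as
 *
 *	0 204800 linear /dev/hda1 384
 *
 * creates a 100 MiB device whose sectors 0-204799 map onto /dev/hda1
 * starting at sector 384; both the offset and the length are given in
 * 512-byte sectors.
 */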
static void linear_dtr(struct dm_target *ti)
{
struct linear_c *lc = (struct linear_c *) ti->private;
dm_put_device(ti, lc->dev);
kfree(lc);
}
static int linear_map(struct dm_target *ti, struct buffer_head *bh, int rw,
union map_info *map_context)
{
struct linear_c *lc = (struct linear_c *) ti->private;
bh->b_rdev = lc->dev->dev;
bh->b_rsector = lc->start + (bh->b_rsector - ti->begin);
return 1;
}
static int linear_status(struct dm_target *ti, status_type_t type,
char *result, unsigned int maxlen)
{
struct linear_c *lc = (struct linear_c *) ti->private;
kdev_t kdev;
switch (type) {
case STATUSTYPE_INFO:
result[0] = '\0';
break;
case STATUSTYPE_TABLE:
kdev = to_kdev_t(lc->dev->bdev->bd_dev);
snprintf(result, maxlen, "%s " SECTOR_FORMAT,
dm_kdevname(kdev), lc->start);
break;
}
return 0;
}
static struct target_type linear_target = {
.name = "linear",
.version= {1, 0, 1},
.module = THIS_MODULE,
.ctr = linear_ctr,
.dtr = linear_dtr,
.map = linear_map,
.status = linear_status,
};
int __init dm_linear_init(void)
{
int r = dm_register_target(&linear_target);
if (r < 0)
DMERR("linear: register failed %d", r);
return r;
}
void dm_linear_exit(void)
{
int r = dm_unregister_target(&linear_target);
if (r < 0)
DMERR("linear: unregister failed %d", r);
}

View File

@ -0,0 +1,310 @@
/*
* Copyright (C) 2003 Sistina Software
*
* This file is released under the LGPL.
*/
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
#include "dm-log.h"
#include "dm-io.h"
static LIST_HEAD(_log_types);
static spinlock_t _lock = SPIN_LOCK_UNLOCKED;
int dm_register_dirty_log_type(struct dirty_log_type *type)
{
spin_lock(&_lock);
type->use_count = 0;
if (type->module)
__MOD_INC_USE_COUNT(type->module);
list_add(&type->list, &_log_types);
spin_unlock(&_lock);
return 0;
}
int dm_unregister_dirty_log_type(struct dirty_log_type *type)
{
spin_lock(&_lock);
if (type->use_count)
DMWARN("Attempt to unregister a log type that is still in use");
else {
list_del(&type->list);
if (type->module)
__MOD_DEC_USE_COUNT(type->module);
}
spin_unlock(&_lock);
return 0;
}
static struct dirty_log_type *get_type(const char *type_name)
{
struct dirty_log_type *type;
struct list_head *tmp;
spin_lock(&_lock);
list_for_each (tmp, &_log_types) {
type = list_entry(tmp, struct dirty_log_type, list);
if (!strcmp(type_name, type->name)) {
type->use_count++;
spin_unlock(&_lock);
return type;
}
}
spin_unlock(&_lock);
return NULL;
}
static void put_type(struct dirty_log_type *type)
{
spin_lock(&_lock);
type->use_count--;
spin_unlock(&_lock);
}
struct dirty_log *dm_create_dirty_log(const char *type_name, sector_t dev_size,
unsigned int argc, char **argv)
{
struct dirty_log_type *type;
struct dirty_log *log;
log = kmalloc(sizeof(*log), GFP_KERNEL);
if (!log)
return NULL;
type = get_type(type_name);
if (!type) {
kfree(log);
return NULL;
}
log->type = type;
if (type->ctr(log, dev_size, argc, argv)) {
kfree(log);
put_type(type);
return NULL;
}
return log;
}
void dm_destroy_dirty_log(struct dirty_log *log)
{
log->type->dtr(log);
put_type(log->type);
kfree(log);
}
/*-----------------------------------------------------------------
* In core log, ie. trivial, non-persistent
*
* For now we'll keep this simple and just have 2 bitsets, one
* for clean/dirty, the other for sync/nosync. The sync bitset
* will be freed when everything is in sync.
*
* FIXME: problems with a 64bit sector_t
*---------------------------------------------------------------*/
struct core_log {
sector_t region_size;
unsigned int region_count;
unsigned long *clean_bits;
unsigned long *sync_bits;
unsigned long *recovering_bits; /* FIXME: this seems excessive */
int sync_search;
};
#define BYTE_SHIFT 3
static int core_ctr(struct dirty_log *log, sector_t dev_size,
unsigned int argc, char **argv)
{
struct core_log *clog;
sector_t region_size;
unsigned int region_count;
size_t bitset_size;
if (argc != 1) {
DMWARN("wrong number of arguments to core_log");
return -EINVAL;
}
if (sscanf(argv[0], SECTOR_FORMAT, &region_size) != 1) {
DMWARN("invalid region size string");
return -EINVAL;
}
region_count = dm_div_up(dev_size, region_size);
clog = kmalloc(sizeof(*clog), GFP_KERNEL);
if (!clog) {
DMWARN("couldn't allocate core log");
return -ENOMEM;
}
clog->region_size = region_size;
clog->region_count = region_count;
/*
* Work out how many words we need to hold the bitset.
*/
bitset_size = dm_round_up(region_count,
sizeof(*clog->clean_bits) << BYTE_SHIFT);
bitset_size >>= BYTE_SHIFT;
clog->clean_bits = vmalloc(bitset_size);
if (!clog->clean_bits) {
DMWARN("couldn't allocate clean bitset");
kfree(clog);
return -ENOMEM;
}
memset(clog->clean_bits, -1, bitset_size);
clog->sync_bits = vmalloc(bitset_size);
if (!clog->sync_bits) {
DMWARN("couldn't allocate sync bitset");
vfree(clog->clean_bits);
kfree(clog);
return -ENOMEM;
}
memset(clog->sync_bits, 0, bitset_size);
clog->recovering_bits = vmalloc(bitset_size);
if (!clog->recovering_bits) {
DMWARN("couldn't allocate sync bitset");
vfree(clog->sync_bits);
vfree(clog->clean_bits);
kfree(clog);
return -ENOMEM;
}
memset(clog->recovering_bits, 0, bitset_size);
clog->sync_search = 0;
log->context = clog;
return 0;
}
static void core_dtr(struct dirty_log *log)
{
struct core_log *clog = (struct core_log *) log->context;
vfree(clog->clean_bits);
vfree(clog->sync_bits);
vfree(clog->recovering_bits);
kfree(clog);
}
static sector_t core_get_region_size(struct dirty_log *log)
{
struct core_log *clog = (struct core_log *) log->context;
return clog->region_size;
}
static int core_is_clean(struct dirty_log *log, region_t region)
{
struct core_log *clog = (struct core_log *) log->context;
return test_bit(region, clog->clean_bits);
}
static int core_in_sync(struct dirty_log *log, region_t region, int block)
{
struct core_log *clog = (struct core_log *) log->context;
return test_bit(region, clog->sync_bits) ? 1 : 0;
}
static int core_flush(struct dirty_log *log)
{
/* no op */
return 0;
}
static void core_mark_region(struct dirty_log *log, region_t region)
{
struct core_log *clog = (struct core_log *) log->context;
clear_bit(region, clog->clean_bits);
}
static void core_clear_region(struct dirty_log *log, region_t region)
{
struct core_log *clog = (struct core_log *) log->context;
set_bit(region, clog->clean_bits);
}
static int core_get_resync_work(struct dirty_log *log, region_t *region)
{
struct core_log *clog = (struct core_log *) log->context;
if (clog->sync_search >= clog->region_count)
return 0;
do {
*region = find_next_zero_bit(clog->sync_bits,
clog->region_count,
clog->sync_search);
clog->sync_search = *region + 1;
if (*region == clog->region_count)
return 0;
} while (test_bit(*region, clog->recovering_bits));
set_bit(*region, clog->recovering_bits);
return 1;
}
static void core_complete_resync_work(struct dirty_log *log, region_t region,
int success)
{
struct core_log *clog = (struct core_log *) log->context;
clear_bit(region, clog->recovering_bits);
if (success)
set_bit(region, clog->sync_bits);
}
static struct dirty_log_type _core_type = {
.name = "core",
.ctr = core_ctr,
.dtr = core_dtr,
.get_region_size = core_get_region_size,
.is_clean = core_is_clean,
.in_sync = core_in_sync,
.flush = core_flush,
.mark_region = core_mark_region,
.clear_region = core_clear_region,
.get_resync_work = core_get_resync_work,
.complete_resync_work = core_complete_resync_work
};
__init int dm_dirty_log_init(void)
{
int r;
r = dm_register_dirty_log_type(&_core_type);
if (r)
DMWARN("couldn't register core log");
return r;
}
void dm_dirty_log_exit(void)
{
dm_unregister_dirty_log_type(&_core_type);
}
EXPORT_SYMBOL(dm_register_dirty_log_type);
EXPORT_SYMBOL(dm_unregister_dirty_log_type);
EXPORT_SYMBOL(dm_dirty_log_init);
EXPORT_SYMBOL(dm_dirty_log_exit);
EXPORT_SYMBOL(dm_create_dirty_log);
EXPORT_SYMBOL(dm_destroy_dirty_log);

View File

@ -0,0 +1,112 @@
/*
* Copyright (C) 2003 Sistina Software
*
* This file is released under the LGPL.
*/
#ifndef DM_DIRTY_LOG
#define DM_DIRTY_LOG
#include "dm.h"
typedef sector_t region_t;
struct dirty_log_type;
struct dirty_log {
struct dirty_log_type *type;
void *context;
};
struct dirty_log_type {
struct list_head list;
const char *name;
struct module *module;
unsigned int use_count;
int (*ctr)(struct dirty_log *log, sector_t dev_size,
unsigned int argc, char **argv);
void (*dtr)(struct dirty_log *log);
/*
* Retrieves the smallest size of region that the log can
* deal with.
*/
sector_t (*get_region_size)(struct dirty_log *log);
/*
* A predicate to say whether a region is clean or not.
* May block.
*/
int (*is_clean)(struct dirty_log *log, region_t region);
/*
* Returns: 0, 1, -EWOULDBLOCK, < 0
*
 * A predicate function to check whether the given region
 * is in sync.
*
* If -EWOULDBLOCK is returned the state of the region is
* unknown, typically this will result in a read being
* passed to a daemon to deal with, since a daemon is
* allowed to block.
*/
int (*in_sync)(struct dirty_log *log, region_t region, int can_block);
/*
* Flush the current log state (eg, to disk). This
* function may block.
*/
int (*flush)(struct dirty_log *log);
/*
* Mark an area as clean or dirty. These functions may
* block, though for performance reasons blocking should
* be extremely rare (eg, allocating another chunk of
* memory for some reason).
*/
void (*mark_region)(struct dirty_log *log, region_t region);
void (*clear_region)(struct dirty_log *log, region_t region);
/*
* Returns: <0 (error), 0 (no region), 1 (region)
*
 * The mirrord will need to perform recovery on regions of
* the mirror that are in the NOSYNC state. This
* function asks the log to tell the caller about the
* next region that this machine should recover.
*
* Do not confuse this function with 'in_sync()', one
* tells you if an area is synchronised, the other
* assigns recovery work.
*/
int (*get_resync_work)(struct dirty_log *log, region_t *region);
/*
* This notifies the log that the resync of an area has
* been completed. The log should then mark this region
* as CLEAN.
*/
void (*complete_resync_work)(struct dirty_log *log,
region_t region, int success);
};
int dm_register_dirty_log_type(struct dirty_log_type *type);
int dm_unregister_dirty_log_type(struct dirty_log_type *type);
/*
* Make sure you use these two functions, rather than calling
* type->constructor/destructor() directly.
*/
struct dirty_log *dm_create_dirty_log(const char *type_name, sector_t dev_size,
unsigned int argc, char **argv);
void dm_destroy_dirty_log(struct dirty_log *log);
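/*
 * Illustrative sketch, not part of the original source: a mirror
 * target might create an in-core log with 1024 sector regions like
 * this, and release it again with dm_destroy_dirty_log():
 *
 *	char *argv[] = { "1024" };
 *	struct dirty_log *log = dm_create_dirty_log("core", ti->len, 1, argv);
 */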
/*
* init/exit functions.
*/
int dm_dirty_log_init(void);
void dm_dirty_log_exit(void);
#endif

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,158 @@
/*
* dm-snapshot.c
*
* Copyright (C) 2001-2002 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#ifndef DM_SNAPSHOT_H
#define DM_SNAPSHOT_H
#include "dm.h"
#include <linux/blkdev.h>
struct exception_table {
uint32_t hash_mask;
struct list_head *table;
};
/*
* The snapshot code deals with largish chunks of the disk at a
* time. Typically 64k - 256k.
*/
/* FIXME: can we get away with limiting these to a uint32_t ? */
typedef sector_t chunk_t;
/*
* An exception is used where an old chunk of data has been
* replaced by a new one.
*/
struct exception {
struct list_head hash_list;
chunk_t old_chunk;
chunk_t new_chunk;
};
/*
* Abstraction to handle the meta/layout of exception stores (the
* COW device).
*/
struct exception_store {
/*
* Destroys this object when you've finished with it.
*/
void (*destroy) (struct exception_store *store);
/*
* The target shouldn't read the COW device until this is
* called.
*/
int (*read_metadata) (struct exception_store *store);
/*
* Find somewhere to store the next exception.
*/
int (*prepare_exception) (struct exception_store *store,
struct exception *e);
/*
* Update the metadata with this exception.
*/
void (*commit_exception) (struct exception_store *store,
struct exception *e,
void (*callback) (void *, int success),
void *callback_context);
/*
* The snapshot is invalid, note this in the metadata.
*/
void (*drop_snapshot) (struct exception_store *store);
/*
* Return how full the snapshot is.
*/
void (*fraction_full) (struct exception_store *store,
sector_t *numerator,
sector_t *denominator);
struct dm_snapshot *snap;
void *context;
};
struct dm_snapshot {
struct rw_semaphore lock;
struct dm_table *table;
struct dm_dev *origin;
struct dm_dev *cow;
/* List of snapshots per Origin */
struct list_head list;
/* Size of data blocks saved - must be a power of 2 */
chunk_t chunk_size;
chunk_t chunk_mask;
chunk_t chunk_shift;
/* You can't use a snapshot if this is 0 (e.g. if full) */
int valid;
int have_metadata;
/* Used for display of table */
char type;
/* The last percentage we notified */
int last_percent;
struct exception_table pending;
struct exception_table complete;
/* The on disk metadata handler */
struct exception_store store;
struct kcopyd_client *kcopyd_client;
};
/*
 * Used by the exception stores to load exceptions when
* initialising.
*/
int dm_add_exception(struct dm_snapshot *s, chunk_t old, chunk_t new);
/*
* Constructor and destructor for the default persistent
* store.
*/
int dm_create_persistent(struct exception_store *store, uint32_t chunk_size);
int dm_create_transient(struct exception_store *store,
struct dm_snapshot *s, int blocksize);
/*
* Return the number of sectors in the device.
*/
static inline sector_t get_dev_size(kdev_t dev)
{
int *sizes;
sizes = blk_size[MAJOR(dev)];
if (sizes)
return sizes[MINOR(dev)] << 1;
return 0;
}
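/*
 * Note added for clarity: blk_size[] entries are in 1 KiB units, so
 * the shift left by one converts the size to 512-byte sectors (the
 * same convention as check_device_area() in dm-table.c).
 */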
static inline chunk_t sector_to_chunk(struct dm_snapshot *s, sector_t sector)
{
return (sector & ~s->chunk_mask) >> s->chunk_shift;
}
static inline sector_t chunk_to_sector(struct dm_snapshot *s, chunk_t chunk)
{
return chunk << s->chunk_shift;
}
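/*
 * Worked example (added for clarity): with a chunk_size of 16 sectors
 * (8 KiB), chunk_mask is 15 and chunk_shift is 4, so
 * sector_to_chunk(s, 35) == 2 and chunk_to_sector(s, 2) == 32.
 */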
#endif

View File

@ -0,0 +1,259 @@
/*
* Copyright (C) 2001-2003 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include "dm.h"
#include <linux/module.h>
#include <linux/init.h>
#include <linux/blkdev.h>
#include <linux/slab.h>
struct stripe {
struct dm_dev *dev;
sector_t physical_start;
};
struct stripe_c {
uint32_t stripes;
/* The size of this target / num. stripes */
uint32_t stripe_width;
/* stripe chunk size */
uint32_t chunk_shift;
sector_t chunk_mask;
struct stripe stripe[0];
};
static inline struct stripe_c *alloc_context(unsigned int stripes)
{
size_t len;
if (array_too_big(sizeof(struct stripe_c), sizeof(struct stripe),
stripes))
return NULL;
len = sizeof(struct stripe_c) + (sizeof(struct stripe) * stripes);
return kmalloc(len, GFP_KERNEL);
}
/*
* Parse a single <dev> <sector> pair
*/
static int get_stripe(struct dm_target *ti, struct stripe_c *sc,
unsigned int stripe, char **argv)
{
sector_t start;
if (sscanf(argv[1], SECTOR_FORMAT, &start) != 1)
return -EINVAL;
if (dm_get_device(ti, argv[0], start, sc->stripe_width,
dm_table_get_mode(ti->table),
&sc->stripe[stripe].dev))
return -ENXIO;
sc->stripe[stripe].physical_start = start;
return 0;
}
/*
* FIXME: Nasty function, only present because we can't link
* against __moddi3 and __divdi3.
*
* returns a == b * n
*/
static int multiple(sector_t a, sector_t b, sector_t *n)
{
sector_t acc, prev, i;
*n = 0;
while (a >= b) {
for (acc = b, prev = 0, i = 1;
acc <= a;
prev = acc, acc <<= 1, i <<= 1)
;
a -= prev;
*n += i >> 1;
}
return a == 0;
}
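/*
 * Worked example (added for clarity): multiple(96, 8, &n) sets n to 12
 * and returns 1 because 96 == 8 * 12; multiple(100, 7, &n) sets n to
 * 14 and returns 0 because 7 * 14 == 98 leaves a remainder of 2.
 */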
/*
* Construct a striped mapping.
* <number of stripes> <chunk size (2^^n)> [<dev_path> <offset>]+
*/
static int stripe_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
struct stripe_c *sc;
sector_t width;
uint32_t stripes;
uint32_t chunk_size;
char *end;
int r;
unsigned int i;
if (argc < 2) {
ti->error = "dm-stripe: Not enough arguments";
return -EINVAL;
}
stripes = simple_strtoul(argv[0], &end, 10);
if (*end) {
ti->error = "dm-stripe: Invalid stripe count";
return -EINVAL;
}
chunk_size = simple_strtoul(argv[1], &end, 10);
if (*end) {
ti->error = "dm-stripe: Invalid chunk_size";
return -EINVAL;
}
/*
* chunk_size is a power of two
*/
if (!chunk_size || (chunk_size & (chunk_size - 1))) {
ti->error = "dm-stripe: Invalid chunk size";
return -EINVAL;
}
if (!multiple(ti->len, stripes, &width)) {
ti->error = "dm-stripe: Target length not divisable by "
"number of stripes";
return -EINVAL;
}
/*
* Do we have enough arguments for that many stripes ?
*/
if (argc != (2 + 2 * stripes)) {
ti->error = "dm-stripe: Not enough destinations specified";
return -EINVAL;
}
sc = alloc_context(stripes);
if (!sc) {
ti->error = "dm-stripe: Memory allocation for striped context "
"failed";
return -ENOMEM;
}
sc->stripes = stripes;
sc->stripe_width = width;
sc->chunk_mask = ((sector_t) chunk_size) - 1;
for (sc->chunk_shift = 0; chunk_size; sc->chunk_shift++)
chunk_size >>= 1;
sc->chunk_shift--;
/*
* Get the stripe destinations.
*/
for (i = 0; i < stripes; i++) {
argv += 2;
r = get_stripe(ti, sc, i, argv);
if (r < 0) {
ti->error = "dm-stripe: Couldn't parse stripe "
"destination";
while (i--)
dm_put_device(ti, sc->stripe[i].dev);
kfree(sc);
return r;
}
}
ti->private = sc;
return 0;
}
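/*
 * Example added for clarity (not from the original source): a table
 * line such as
 *
 *	0 1048576 striped 2 64 /dev/hda1 0 /dev/hdb1 0
 *
 * builds a 512 MiB device striped over two disks with a 64 sector
 * (32 KiB) chunk, giving a stripe_width of 524288 sectors per device.
 */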
static void stripe_dtr(struct dm_target *ti)
{
unsigned int i;
struct stripe_c *sc = (struct stripe_c *) ti->private;
for (i = 0; i < sc->stripes; i++)
dm_put_device(ti, sc->stripe[i].dev);
kfree(sc);
}
static int stripe_map(struct dm_target *ti, struct buffer_head *bh, int rw,
union map_info *context)
{
struct stripe_c *sc = (struct stripe_c *) ti->private;
sector_t offset = bh->b_rsector - ti->begin;
uint32_t chunk = (uint32_t) (offset >> sc->chunk_shift);
uint32_t stripe = chunk % sc->stripes; /* 32bit modulus */
chunk = chunk / sc->stripes;
bh->b_rdev = sc->stripe[stripe].dev->dev;
bh->b_rsector = sc->stripe[stripe].physical_start +
(chunk << sc->chunk_shift) + (offset & sc->chunk_mask);
return 1;
}
static int stripe_status(struct dm_target *ti, status_type_t type,
char *result, unsigned int maxlen)
{
struct stripe_c *sc = (struct stripe_c *) ti->private;
int offset;
unsigned int i;
switch (type) {
case STATUSTYPE_INFO:
result[0] = '\0';
break;
case STATUSTYPE_TABLE:
offset = snprintf(result, maxlen, "%d " SECTOR_FORMAT,
sc->stripes, sc->chunk_mask + 1);
for (i = 0; i < sc->stripes; i++) {
offset +=
snprintf(result + offset, maxlen - offset,
" %s " SECTOR_FORMAT,
dm_kdevname(to_kdev_t(sc->stripe[i].dev->bdev->bd_dev)),
sc->stripe[i].physical_start);
}
break;
}
return 0;
}
static struct target_type stripe_target = {
.name = "striped",
.version= {1, 0, 1},
.module = THIS_MODULE,
.ctr = stripe_ctr,
.dtr = stripe_dtr,
.map = stripe_map,
.status = stripe_status,
};
int __init dm_stripe_init(void)
{
int r;
r = dm_register_target(&stripe_target);
if (r < 0)
DMWARN("striped target registration failed");
return r;
}
void dm_stripe_exit(void)
{
if (dm_unregister_target(&stripe_target))
DMWARN("striped target unregistration failed");
return;
}

View File

@ -0,0 +1,679 @@
/*
* Copyright (C) 2001 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include "dm.h"
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/blkdev.h>
#include <linux/ctype.h>
#include <linux/slab.h>
#include <asm/atomic.h>
#define MAX_DEPTH 16
#define NODE_SIZE L1_CACHE_BYTES
#define KEYS_PER_NODE (NODE_SIZE / sizeof(sector_t))
#define CHILDREN_PER_NODE (KEYS_PER_NODE + 1)
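/*
 * Worked example (added for clarity, assuming a 32 byte cache line and
 * a 4 byte sector_t): NODE_SIZE is 32, KEYS_PER_NODE is 8 and
 * CHILDREN_PER_NODE is 9, so each extra btree level multiplies the
 * number of addressable targets by 9.
 */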
struct dm_table {
atomic_t holders;
/* btree table */
unsigned int depth;
unsigned int counts[MAX_DEPTH]; /* in nodes */
sector_t *index[MAX_DEPTH];
unsigned int num_targets;
unsigned int num_allocated;
sector_t *highs;
struct dm_target *targets;
/*
* Indicates the rw permissions for the new logical
* device. This should be a combination of FMODE_READ
* and FMODE_WRITE.
*/
int mode;
/* a list of devices used by this table */
struct list_head devices;
/* events get handed up using this callback */
void (*event_fn)(void *);
void *event_context;
};
/*
 * Similar to ceiling(log_base(n))
*/
static unsigned int int_log(unsigned long n, unsigned long base)
{
int result = 0;
while (n > 1) {
n = dm_div_up(n, base);
result++;
}
return result;
}
/*
* Calculate the index of the child node of the n'th node k'th key.
*/
static inline unsigned int get_child(unsigned int n, unsigned int k)
{
return (n * CHILDREN_PER_NODE) + k;
}
/*
* Return the n'th node of level l from table t.
*/
static inline sector_t *get_node(struct dm_table *t, unsigned int l,
unsigned int n)
{
return t->index[l] + (n * KEYS_PER_NODE);
}
/*
* Return the highest key that you could lookup from the n'th
* node on level l of the btree.
*/
static sector_t high(struct dm_table *t, unsigned int l, unsigned int n)
{
for (; l < t->depth - 1; l++)
n = get_child(n, CHILDREN_PER_NODE - 1);
if (n >= t->counts[l])
return (sector_t) - 1;
return get_node(t, l, n)[KEYS_PER_NODE - 1];
}
/*
* Fills in a level of the btree based on the highs of the level
* below it.
*/
static int setup_btree_index(unsigned int l, struct dm_table *t)
{
unsigned int n, k;
sector_t *node;
for (n = 0U; n < t->counts[l]; n++) {
node = get_node(t, l, n);
for (k = 0U; k < KEYS_PER_NODE; k++)
node[k] = high(t, l + 1, get_child(n, k));
}
return 0;
}
int dm_table_create(struct dm_table **result, int mode, unsigned num_targets)
{
struct dm_table *t = kmalloc(sizeof(*t), GFP_KERNEL);
if (!t)
return -ENOMEM;
memset(t, 0, sizeof(*t));
INIT_LIST_HEAD(&t->devices);
atomic_set(&t->holders, 1);
num_targets = dm_round_up(num_targets, KEYS_PER_NODE);
/* Allocate both the target array and offset array at once. */
t->highs = (sector_t *) vcalloc(sizeof(struct dm_target) +
sizeof(sector_t), num_targets);
if (!t->highs) {
kfree(t);
return -ENOMEM;
}
memset(t->highs, -1, sizeof(*t->highs) * num_targets);
t->targets = (struct dm_target *) (t->highs + num_targets);
t->num_allocated = num_targets;
t->mode = mode;
*result = t;
return 0;
}
static void free_devices(struct list_head *devices)
{
struct list_head *tmp, *next;
for (tmp = devices->next; tmp != devices; tmp = next) {
struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
next = tmp->next;
kfree(dd);
}
}
void table_destroy(struct dm_table *t)
{
unsigned int i;
/* free the indexes (see dm_table_complete) */
if (t->depth >= 2)
vfree(t->index[t->depth - 2]);
/* free the targets */
for (i = 0; i < t->num_targets; i++) {
struct dm_target *tgt = t->targets + i;
if (tgt->type->dtr)
tgt->type->dtr(tgt);
dm_put_target_type(tgt->type);
}
vfree(t->highs);
/* free the device list */
if (t->devices.next != &t->devices) {
DMWARN("devices still present during destroy: "
"dm_table_remove_device calls missing");
free_devices(&t->devices);
}
kfree(t);
}
void dm_table_get(struct dm_table *t)
{
atomic_inc(&t->holders);
}
void dm_table_put(struct dm_table *t)
{
if (atomic_dec_and_test(&t->holders))
table_destroy(t);
}
/*
* Convert a device path to a dev_t.
*/
static int lookup_device(const char *path, kdev_t *dev)
{
int r;
struct nameidata nd;
struct inode *inode;
if (!path_init(path, LOOKUP_FOLLOW, &nd))
return 0;
if ((r = path_walk(path, &nd)))
goto out;
inode = nd.dentry->d_inode;
if (!inode) {
r = -ENOENT;
goto out;
}
if (!S_ISBLK(inode->i_mode)) {
r = -ENOTBLK;
goto out;
}
*dev = inode->i_rdev;
out:
path_release(&nd);
return r;
}
/*
* See if we've already got a device in the list.
*/
static struct dm_dev *find_device(struct list_head *l, kdev_t dev)
{
struct list_head *tmp;
list_for_each(tmp, l) {
struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
if (kdev_same(dd->dev, dev))
return dd;
}
return NULL;
}
/*
* Open a device so we can use it as a map destination.
*/
static int open_dev(struct dm_dev *dd)
{
if (dd->bdev)
BUG();
dd->bdev = bdget(kdev_t_to_nr(dd->dev));
if (!dd->bdev)
return -ENOMEM;
return blkdev_get(dd->bdev, dd->mode, 0, BDEV_RAW);
}
/*
* Close a device that we've been using.
*/
static void close_dev(struct dm_dev *dd)
{
if (!dd->bdev)
return;
blkdev_put(dd->bdev, BDEV_RAW);
dd->bdev = NULL;
}
/*
* If possible (ie. blk_size[major] is set), this checks an area
* of a destination device is valid.
*/
static int check_device_area(kdev_t dev, sector_t start, sector_t len)
{
int *sizes;
sector_t dev_size;
if (!(sizes = blk_size[major(dev)]) || !(dev_size = sizes[minor(dev)]))
/* we don't know the device details,
* so give the benefit of the doubt */
return 1;
/* convert to 512-byte sectors */
dev_size <<= 1;
return ((start < dev_size) && (len <= (dev_size - start)));
}
/*
* This upgrades the mode on an already open dm_dev. Being
* careful to leave things as they were if we fail to reopen the
* device.
*/
static int upgrade_mode(struct dm_dev *dd, int new_mode)
{
int r;
struct dm_dev dd_copy;
memcpy(&dd_copy, dd, sizeof(dd_copy));
dd->mode |= new_mode;
dd->bdev = NULL;
r = open_dev(dd);
if (!r)
close_dev(&dd_copy);
else
memcpy(dd, &dd_copy, sizeof(dd_copy));
return r;
}
/*
* Add a device to the list, or just increment the usage count if
* it's already present.
*/
int dm_get_device(struct dm_target *ti, const char *path, sector_t start,
sector_t len, int mode, struct dm_dev **result)
{
int r;
kdev_t dev;
struct dm_dev *dd;
unsigned major, minor;
struct dm_table *t = ti->table;
if (!t)
BUG();
if (sscanf(path, "%u:%u", &major, &minor) == 2) {
/* Extract the major/minor numbers */
dev = mk_kdev(major, minor);
} else {
/* convert the path to a device */
if ((r = lookup_device(path, &dev)))
return r;
}
dd = find_device(&t->devices, dev);
if (!dd) {
dd = kmalloc(sizeof(*dd), GFP_KERNEL);
if (!dd)
return -ENOMEM;
dd->dev = dev;
dd->mode = mode;
dd->bdev = NULL;
if ((r = open_dev(dd))) {
kfree(dd);
return r;
}
atomic_set(&dd->count, 0);
list_add(&dd->list, &t->devices);
} else if (dd->mode != (mode | dd->mode)) {
r = upgrade_mode(dd, mode);
if (r)
return r;
}
atomic_inc(&dd->count);
if (!check_device_area(dd->dev, start, len)) {
DMWARN("device %s too small for target", path);
dm_put_device(ti, dd);
return -EINVAL;
}
*result = dd;
return 0;
}
/*
 * Decrement a device's use count and remove it if necessary.
*/
void dm_put_device(struct dm_target *ti, struct dm_dev *dd)
{
if (atomic_dec_and_test(&dd->count)) {
close_dev(dd);
list_del(&dd->list);
kfree(dd);
}
}
/*
* Checks to see if the target joins onto the end of the table.
*/
static int adjoin(struct dm_table *table, struct dm_target *ti)
{
struct dm_target *prev;
if (!table->num_targets)
return !ti->begin;
prev = &table->targets[table->num_targets - 1];
return (ti->begin == (prev->begin + prev->len));
}
/*
* Used to dynamically allocate the arg array.
*/
static char **realloc_argv(unsigned *array_size, char **old_argv)
{
char **argv;
unsigned new_size;
new_size = *array_size ? *array_size * 2 : 64;
argv = kmalloc(new_size * sizeof(*argv), GFP_KERNEL);
if (argv) {
memcpy(argv, old_argv, *array_size * sizeof(*argv));
*array_size = new_size;
}
kfree(old_argv);
return argv;
}
/*
* Destructively splits up the argument list to pass to ctr.
*/
static int split_args(int *argc, char ***argvp, char *input)
{
char *start, *end = input, *out, **argv = NULL;
unsigned array_size = 0;
*argc = 0;
argv = realloc_argv(&array_size, argv);
if (!argv)
return -ENOMEM;
while (1) {
start = end;
/* Skip whitespace */
while (*start && isspace(*start))
start++;
if (!*start)
break; /* success, we hit the end */
/* 'out' is used to strip the backslash quoting */
end = out = start;
while (*end) {
/* Everything apart from '\0' can be quoted */
if (*end == '\\' && *(end + 1)) {
*out++ = *(end + 1);
end += 2;
continue;
}
if (isspace(*end))
break; /* end of token */
*out++ = *end++;
}
/* have we already filled the array ? */
if ((*argc + 1) > array_size) {
argv = realloc_argv(&array_size, argv);
if (!argv)
return -ENOMEM;
}
/* we know this is whitespace */
if (*end)
end++;
/* terminate the string and put it in the array */
*out = '\0';
argv[*argc] = start;
(*argc)++;
}
*argvp = argv;
return 0;
}
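/*
 * Example added for clarity: given the writable string
 * "linear /dev/hda1 384", split_args() produces argc == 3 with
 * argv[0] == "linear", argv[1] == "/dev/hda1" and argv[2] == "384".
 * A backslash quotes the following character, so "a\ b" remains a
 * single argument.
 */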
int dm_table_add_target(struct dm_table *t, const char *type,
sector_t start, sector_t len, char *params)
{
int r = -EINVAL, argc;
char **argv;
struct dm_target *tgt;
if (t->num_targets >= t->num_allocated)
return -ENOMEM;
tgt = t->targets + t->num_targets;
memset(tgt, 0, sizeof(*tgt));
tgt->type = dm_get_target_type(type);
if (!tgt->type) {
tgt->error = "unknown target type";
return -EINVAL;
}
tgt->table = t;
tgt->begin = start;
tgt->len = len;
tgt->error = "Unknown error";
/*
* Does this target adjoin the previous one ?
*/
if (!adjoin(t, tgt)) {
tgt->error = "Gap in table";
r = -EINVAL;
goto bad;
}
r = split_args(&argc, &argv, params);
if (r) {
tgt->error = "couldn't split parameters (insufficient memory)";
goto bad;
}
r = tgt->type->ctr(tgt, argc, argv);
kfree(argv);
if (r)
goto bad;
t->highs[t->num_targets++] = tgt->begin + tgt->len - 1;
return 0;
bad:
printk(KERN_ERR DM_NAME ": %s\n", tgt->error);
dm_put_target_type(tgt->type);
return r;
}
static int setup_indexes(struct dm_table *t)
{
int i;
unsigned int total = 0;
sector_t *indexes;
/* allocate the space for *all* the indexes */
for (i = t->depth - 2; i >= 0; i--) {
t->counts[i] = dm_div_up(t->counts[i + 1], CHILDREN_PER_NODE);
total += t->counts[i];
}
indexes = (sector_t *) vcalloc(total, (unsigned long) NODE_SIZE);
if (!indexes)
return -ENOMEM;
/* set up internal nodes, bottom-up */
for (i = t->depth - 2, total = 0; i >= 0; i--) {
t->index[i] = indexes;
indexes += (KEYS_PER_NODE * t->counts[i]);
setup_btree_index(i, t);
}
return 0;
}
/*
* Builds the btree to index the map.
*/
int dm_table_complete(struct dm_table *t)
{
int r = 0;
unsigned int leaf_nodes;
/* how many indexes will the btree have ? */
leaf_nodes = dm_div_up(t->num_targets, KEYS_PER_NODE);
t->depth = 1 + int_log(leaf_nodes, CHILDREN_PER_NODE);
/* leaf layer has already been set up */
t->counts[t->depth - 1] = leaf_nodes;
t->index[t->depth - 1] = t->highs;
if (t->depth >= 2)
r = setup_indexes(t);
return r;
}
static spinlock_t _event_lock = SPIN_LOCK_UNLOCKED;
void dm_table_event_callback(struct dm_table *t,
void (*fn)(void *), void *context)
{
spin_lock_irq(&_event_lock);
t->event_fn = fn;
t->event_context = context;
spin_unlock_irq(&_event_lock);
}
void dm_table_event(struct dm_table *t)
{
spin_lock(&_event_lock);
if (t->event_fn)
t->event_fn(t->event_context);
spin_unlock(&_event_lock);
}
sector_t dm_table_get_size(struct dm_table *t)
{
return t->num_targets ? (t->highs[t->num_targets - 1] + 1) : 0;
}
struct dm_target *dm_table_get_target(struct dm_table *t, unsigned int index)
{
if (index > t->num_targets)
return NULL;
return t->targets + index;
}
/*
* Search the btree for the correct target.
*/
struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
{
unsigned int l, n = 0, k = 0;
sector_t *node;
for (l = 0; l < t->depth; l++) {
n = get_child(n, k);
node = get_node(t, l, n);
for (k = 0; k < KEYS_PER_NODE; k++)
if (node[k] >= sector)
break;
}
return &t->targets[(KEYS_PER_NODE * n) + k];
}
unsigned int dm_table_get_num_targets(struct dm_table *t)
{
return t->num_targets;
}
struct list_head *dm_table_get_devices(struct dm_table *t)
{
return &t->devices;
}
int dm_table_get_mode(struct dm_table *t)
{
return t->mode;
}
void dm_table_suspend_targets(struct dm_table *t)
{
int i;
for (i = 0; i < t->num_targets; i++) {
struct dm_target *ti = t->targets + i;
if (ti->type->suspend)
ti->type->suspend(ti);
}
}
void dm_table_resume_targets(struct dm_table *t)
{
int i;
for (i = 0; i < t->num_targets; i++) {
struct dm_target *ti = t->targets + i;
if (ti->type->resume)
ti->type->resume(ti);
}
}
EXPORT_SYMBOL(dm_get_device);
EXPORT_SYMBOL(dm_put_device);
EXPORT_SYMBOL(dm_table_event);
EXPORT_SYMBOL(dm_table_get_mode);

View File

@ -0,0 +1,203 @@
/*
* Copyright (C) 2001 Sistina Software (UK) Limited
*
* This file is released under the GPL.
*/
#include "dm.h"
#include <linux/module.h>
#include <linux/kmod.h>
#include <linux/slab.h>
struct tt_internal {
struct target_type tt;
struct list_head list;
long use;
};
static LIST_HEAD(_targets);
static DECLARE_RWSEM(_lock);
#define DM_MOD_NAME_SIZE 32
static inline struct tt_internal *__find_target_type(const char *name)
{
struct list_head *tih;
struct tt_internal *ti;
list_for_each(tih, &_targets) {
ti = list_entry(tih, struct tt_internal, list);
if (!strcmp(name, ti->tt.name))
return ti;
}
return NULL;
}
static struct tt_internal *get_target_type(const char *name)
{
struct tt_internal *ti;
down_read(&_lock);
ti = __find_target_type(name);
if (ti) {
if (ti->use == 0 && ti->tt.module)
__MOD_INC_USE_COUNT(ti->tt.module);
ti->use++;
}
up_read(&_lock);
return ti;
}
static void load_module(const char *name)
{
char module_name[DM_MOD_NAME_SIZE] = "dm-";
/* Length check for strcat() below */
if (strlen(name) > (DM_MOD_NAME_SIZE - 4))
return;
strcat(module_name, name);
request_module(module_name);
}
struct target_type *dm_get_target_type(const char *name)
{
struct tt_internal *ti = get_target_type(name);
if (!ti) {
load_module(name);
ti = get_target_type(name);
}
return ti ? &ti->tt : NULL;
}
void dm_put_target_type(struct target_type *t)
{
struct tt_internal *ti = (struct tt_internal *) t;
down_read(&_lock);
if (--ti->use == 0 && ti->tt.module)
__MOD_DEC_USE_COUNT(ti->tt.module);
if (ti->use < 0)
BUG();
up_read(&_lock);
return;
}
static struct tt_internal *alloc_target(struct target_type *t)
{
struct tt_internal *ti = kmalloc(sizeof(*ti), GFP_KERNEL);
if (ti) {
memset(ti, 0, sizeof(*ti));
ti->tt = *t;
}
return ti;
}
int dm_target_iterate(void (*iter_func)(struct target_type *tt,
void *param), void *param)
{
struct tt_internal *ti;
down_read(&_lock);
list_for_each_entry (ti, &_targets, list)
iter_func(&ti->tt, param);
up_read(&_lock);
return 0;
}
int dm_register_target(struct target_type *t)
{
int rv = 0;
struct tt_internal *ti = alloc_target(t);
if (!ti)
return -ENOMEM;
down_write(&_lock);
if (__find_target_type(t->name)) {
kfree(ti);
rv = -EEXIST;
} else
list_add(&ti->list, &_targets);
up_write(&_lock);
return rv;
}
int dm_unregister_target(struct target_type *t)
{
struct tt_internal *ti;
down_write(&_lock);
if (!(ti = __find_target_type(t->name))) {
up_write(&_lock);
return -EINVAL;
}
if (ti->use) {
up_write(&_lock);
return -ETXTBSY;
}
list_del(&ti->list);
kfree(ti);
up_write(&_lock);
return 0;
}
/*
* io-err: always fails an io, useful for bringing
* up LVs that have holes in them.
*/
static int io_err_ctr(struct dm_target *ti, unsigned int argc, char **args)
{
return 0;
}
static void io_err_dtr(struct dm_target *ti)
{
/* empty */
}
static int io_err_map(struct dm_target *ti, struct buffer_head *bh, int rw,
union map_info *map_context)
{
return -EIO;
}
static struct target_type error_target = {
.name = "error",
.version = {1, 0, 1},
.ctr = io_err_ctr,
.dtr = io_err_dtr,
.map = io_err_map,
};
int dm_target_init(void)
{
return dm_register_target(&error_target);
}
void dm_target_exit(void)
{
if (dm_unregister_target(&error_target))
DMWARN("error target unregistration failed");
}
EXPORT_SYMBOL(dm_register_target);
EXPORT_SYMBOL(dm_unregister_target);

File diff suppressed because it is too large

View File

@ -0,0 +1,177 @@
/*
* Internal header file for device mapper
*
* Copyright (C) 2001, 2002 Sistina Software
*
* This file is released under the LGPL.
*/
#ifndef DM_INTERNAL_H
#define DM_INTERNAL_H
#include <linux/fs.h>
#include <linux/device-mapper.h>
#include <linux/list.h>
#include <linux/blkdev.h>
#define DM_NAME "device-mapper"
#define DMWARN(f, x...) printk(KERN_WARNING DM_NAME ": " f "\n" , ## x)
#define DMERR(f, x...) printk(KERN_ERR DM_NAME ": " f "\n" , ## x)
#define DMINFO(f, x...) printk(KERN_INFO DM_NAME ": " f "\n" , ## x)
/*
* FIXME: I think this should be with the definition of sector_t
* in types.h.
*/
#ifdef CONFIG_LBD
#define SECTOR_FORMAT "%Lu"
#else
#define SECTOR_FORMAT "%lu"
#endif
#define SECTOR_SHIFT 9
#define SECTOR_SIZE (1 << SECTOR_SHIFT)
extern struct block_device_operations dm_blk_dops;
/*
* List of devices that a metadevice uses and should open/close.
*/
struct dm_dev {
struct list_head list;
atomic_t count;
int mode;
kdev_t dev;
struct block_device *bdev;
};
struct dm_table;
struct mapped_device;
/*-----------------------------------------------------------------
* Functions for manipulating a struct mapped_device.
* Drop the reference with dm_put when you finish with the object.
*---------------------------------------------------------------*/
int dm_create(kdev_t dev, struct mapped_device **md);
/*
* Reference counting for md.
*/
void dm_get(struct mapped_device *md);
void dm_put(struct mapped_device *md);
/*
* A device can still be used while suspended, but I/O is deferred.
*/
int dm_suspend(struct mapped_device *md);
int dm_resume(struct mapped_device *md);
/*
* The device must be suspended before calling this method.
*/
int dm_swap_table(struct mapped_device *md, struct dm_table *t);
/*
* Drop a reference on the table when you've finished with the
* result.
*/
struct dm_table *dm_get_table(struct mapped_device *md);
/*
* Event functions.
*/
uint32_t dm_get_event_nr(struct mapped_device *md);
int dm_add_wait_queue(struct mapped_device *md, wait_queue_t *wq,
uint32_t event_nr);
void dm_remove_wait_queue(struct mapped_device *md, wait_queue_t *wq);
/*
* Info functions.
*/
kdev_t dm_kdev(struct mapped_device *md);
int dm_suspended(struct mapped_device *md);
/*-----------------------------------------------------------------
* Functions for manipulating a table. Tables are also reference
* counted.
*---------------------------------------------------------------*/
int dm_table_create(struct dm_table **result, int mode, unsigned num_targets);
void dm_table_get(struct dm_table *t);
void dm_table_put(struct dm_table *t);
int dm_table_add_target(struct dm_table *t, const char *type,
sector_t start, sector_t len, char *params);
int dm_table_complete(struct dm_table *t);
void dm_table_event_callback(struct dm_table *t,
void (*fn)(void *), void *context);
void dm_table_event(struct dm_table *t);
sector_t dm_table_get_size(struct dm_table *t);
struct dm_target *dm_table_get_target(struct dm_table *t, unsigned int index);
struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector);
unsigned int dm_table_get_num_targets(struct dm_table *t);
struct list_head *dm_table_get_devices(struct dm_table *t);
int dm_table_get_mode(struct dm_table *t);
void dm_table_suspend_targets(struct dm_table *t);
void dm_table_resume_targets(struct dm_table *t);
/*-----------------------------------------------------------------
* A registry of target types.
*---------------------------------------------------------------*/
int dm_target_init(void);
void dm_target_exit(void);
struct target_type *dm_get_target_type(const char *name);
void dm_put_target_type(struct target_type *t);
int dm_target_iterate(void (*iter_func)(struct target_type *tt,
void *param), void *param);
/*-----------------------------------------------------------------
* Useful inlines.
*---------------------------------------------------------------*/
static inline int array_too_big(unsigned long fixed, unsigned long obj,
unsigned long num)
{
return (num > (ULONG_MAX - fixed) / obj);
}
/*
* ceiling(n / size) * size
*/
static inline unsigned long dm_round_up(unsigned long n, unsigned long size)
{
unsigned long r = n % size;
return n + (r ? (size - r) : 0);
}
/*
* Ceiling(n / size)
*/
static inline unsigned long dm_div_up(unsigned long n, unsigned long size)
{
return dm_round_up(n, size) / size;
}
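/*
 * Worked example (added for clarity): dm_round_up(10, 4) == 12 and
 * dm_div_up(10, 4) == 3; when n is already a multiple of size,
 * dm_round_up() returns n unchanged.
 */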
const char *dm_kdevname(kdev_t dev);
/*
 * The device-mapper can be driven through one of two interfaces:
 * ioctl or filesystem, depending on which patch you have applied.
*/
int dm_interface_init(void);
void dm_interface_exit(void);
/*
* Targets for linear and striped mappings
*/
int dm_linear_init(void);
void dm_linear_exit(void);
int dm_stripe_init(void);
void dm_stripe_exit(void);
int dm_snapshot_init(void);
void dm_snapshot_exit(void);
#endif

View File

@ -0,0 +1,666 @@
/*
* Copyright (C) 2002 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include <asm/atomic.h>
#include <linux/blkdev.h>
#include <linux/config.h>
#include <linux/device-mapper.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/locks.h>
#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include "kcopyd.h"
#include "dm-daemon.h"
/* FIXME: this is only needed for the DMERR macros */
#include "dm.h"
static struct dm_daemon _kcopyd;
#define SECTORS_PER_PAGE (PAGE_SIZE / SECTOR_SIZE)
#define SUB_JOB_SIZE 128
#define PAGES_PER_SUB_JOB (SUB_JOB_SIZE / SECTORS_PER_PAGE)
#define SUB_JOB_COUNT 8
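/*
 * Worked example (added for clarity, assuming 4 KiB pages):
 * SECTORS_PER_PAGE is 8, so a SUB_JOB_SIZE of 128 sectors is 64 KiB
 * and PAGES_PER_SUB_JOB is 16 pages per sub job.
 */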
/*-----------------------------------------------------------------
* Each kcopyd client has its own little pool of preallocated
* pages for kcopyd io.
*---------------------------------------------------------------*/
struct kcopyd_client {
struct list_head list;
spinlock_t lock;
struct list_head pages;
unsigned int nr_pages;
unsigned int nr_free_pages;
unsigned int max_split;
};
static inline void __push_page(struct kcopyd_client *kc, struct page *p)
{
list_add(&p->list, &kc->pages);
kc->nr_free_pages++;
}
static inline struct page *__pop_page(struct kcopyd_client *kc)
{
struct page *p;
p = list_entry(kc->pages.next, struct page, list);
list_del(&p->list);
kc->nr_free_pages--;
return p;
}
static int kcopyd_get_pages(struct kcopyd_client *kc,
unsigned int nr, struct list_head *pages)
{
struct page *p;
INIT_LIST_HEAD(pages);
spin_lock(&kc->lock);
if (kc->nr_free_pages < nr) {
spin_unlock(&kc->lock);
return -ENOMEM;
}
while (nr--) {
p = __pop_page(kc);
list_add(&p->list, pages);
}
spin_unlock(&kc->lock);
return 0;
}
static void kcopyd_put_pages(struct kcopyd_client *kc, struct list_head *pages)
{
struct list_head *tmp, *tmp2;
spin_lock(&kc->lock);
list_for_each_safe (tmp, tmp2, pages)
__push_page(kc, list_entry(tmp, struct page, list));
spin_unlock(&kc->lock);
}
/*
* These three functions resize the page pool.
*/
static void release_pages(struct list_head *pages)
{
struct page *p;
struct list_head *tmp, *tmp2;
list_for_each_safe (tmp, tmp2, pages) {
p = list_entry(tmp, struct page, list);
UnlockPage(p);
__free_page(p);
}
}
static int client_alloc_pages(struct kcopyd_client *kc, unsigned int nr)
{
unsigned int i;
struct page *p;
LIST_HEAD(new);
for (i = 0; i < nr; i++) {
p = alloc_page(GFP_KERNEL);
if (!p) {
release_pages(&new);
return -ENOMEM;
}
LockPage(p);
list_add(&p->list, &new);
}
kcopyd_put_pages(kc, &new);
kc->nr_pages += nr;
kc->max_split = kc->nr_pages / PAGES_PER_SUB_JOB;
if (kc->max_split > SUB_JOB_COUNT)
kc->max_split = SUB_JOB_COUNT;
return 0;
}
static void client_free_pages(struct kcopyd_client *kc)
{
BUG_ON(kc->nr_free_pages != kc->nr_pages);
release_pages(&kc->pages);
kc->nr_free_pages = kc->nr_pages = 0;
}
/*-----------------------------------------------------------------
* kcopyd_jobs need to be allocated by the *clients* of kcopyd,
* for this reason we use a mempool to prevent the client from
* ever having to do io (which could cause a deadlock).
*---------------------------------------------------------------*/
struct kcopyd_job {
struct kcopyd_client *kc;
struct list_head list;
unsigned int flags;
/*
* Error state of the job.
*/
int read_err;
unsigned int write_err;
/*
* Either READ or WRITE
*/
int rw;
struct io_region source;
/*
* The destinations for the transfer.
*/
unsigned int num_dests;
struct io_region dests[KCOPYD_MAX_REGIONS];
sector_t offset;
unsigned int nr_pages;
struct list_head pages;
/*
* Set this to ensure you are notified when the job has
* completed. 'context' is for callback to use.
*/
kcopyd_notify_fn fn;
void *context;
/*
* These fields are only used if the job has been split
* into more manageable parts.
*/
struct semaphore lock;
atomic_t sub_jobs;
sector_t progress;
};
/* FIXME: this should scale with the number of pages */
#define MIN_JOBS 512
static kmem_cache_t *_job_cache;
static mempool_t *_job_pool;
/*
* We maintain three lists of jobs:
*
* i) jobs waiting for pages
* ii) jobs that have pages, and are waiting for the io to be issued.
* iii) jobs that have completed.
*
* All three of these are protected by job_lock.
*/
static spinlock_t _job_lock = SPIN_LOCK_UNLOCKED;
static LIST_HEAD(_complete_jobs);
static LIST_HEAD(_io_jobs);
static LIST_HEAD(_pages_jobs);
static int jobs_init(void)
{
INIT_LIST_HEAD(&_complete_jobs);
INIT_LIST_HEAD(&_io_jobs);
INIT_LIST_HEAD(&_pages_jobs);
_job_cache = kmem_cache_create("kcopyd-jobs",
sizeof(struct kcopyd_job),
__alignof__(struct kcopyd_job),
0, NULL, NULL);
if (!_job_cache)
return -ENOMEM;
_job_pool = mempool_create(MIN_JOBS, mempool_alloc_slab,
mempool_free_slab, _job_cache);
if (!_job_pool) {
kmem_cache_destroy(_job_cache);
return -ENOMEM;
}
return 0;
}
static void jobs_exit(void)
{
BUG_ON(!list_empty(&_complete_jobs));
BUG_ON(!list_empty(&_io_jobs));
BUG_ON(!list_empty(&_pages_jobs));
mempool_destroy(_job_pool);
kmem_cache_destroy(_job_cache);
}
/*
* Functions to push and pop a job onto the head of a given job
* list.
*/
static inline struct kcopyd_job *pop(struct list_head *jobs)
{
struct kcopyd_job *job = NULL;
unsigned long flags;
spin_lock_irqsave(&_job_lock, flags);
if (!list_empty(jobs)) {
job = list_entry(jobs->next, struct kcopyd_job, list);
list_del(&job->list);
}
spin_unlock_irqrestore(&_job_lock, flags);
return job;
}
static inline void push(struct list_head *jobs, struct kcopyd_job *job)
{
unsigned long flags;
spin_lock_irqsave(&_job_lock, flags);
list_add_tail(&job->list, jobs);
spin_unlock_irqrestore(&_job_lock, flags);
}
/*
* These three functions process 1 item from the corresponding
* job list.
*
* They return:
* < 0: error
* 0: success
* > 0: can't process yet.
*/
static int run_complete_job(struct kcopyd_job *job)
{
void *context = job->context;
int read_err = job->read_err;
unsigned int write_err = job->write_err;
kcopyd_notify_fn fn = job->fn;
kcopyd_put_pages(job->kc, &job->pages);
mempool_free(job, _job_pool);
fn(read_err, write_err, context);
return 0;
}
static void complete_io(unsigned int error, void *context)
{
struct kcopyd_job *job = (struct kcopyd_job *) context;
if (error) {
if (job->rw == WRITE)
job->write_err &= error;
else
job->read_err = 1;
if (!test_bit(KCOPYD_IGNORE_ERROR, &job->flags)) {
push(&_complete_jobs, job);
dm_daemon_wake(&_kcopyd);
return;
}
}
if (job->rw == WRITE)
push(&_complete_jobs, job);
else {
job->rw = WRITE;
push(&_io_jobs, job);
}
dm_daemon_wake(&_kcopyd);
}
/*
* Request io on as many buffer heads as we can currently get for
* a particular job.
*/
static int run_io_job(struct kcopyd_job *job)
{
int r;
if (job->rw == READ)
r = dm_io_async(1, &job->source, job->rw,
list_entry(job->pages.next, struct page, list),
job->offset, complete_io, job);
else
r = dm_io_async(job->num_dests, job->dests, job->rw,
list_entry(job->pages.next, struct page, list),
job->offset, complete_io, job);
return r;
}
static int run_pages_job(struct kcopyd_job *job)
{
int r;
job->nr_pages = dm_div_up(job->dests[0].count + job->offset,
SECTORS_PER_PAGE);
r = kcopyd_get_pages(job->kc, job->nr_pages, &job->pages);
if (!r) {
/* this job is ready for io */
push(&_io_jobs, job);
return 0;
}
if (r == -ENOMEM)
/* can't complete now */
return 1;
return r;
}
/*
* Run through a list for as long as possible. Returns the count
* of successful jobs.
*/
static int process_jobs(struct list_head *jobs, int (*fn) (struct kcopyd_job *))
{
struct kcopyd_job *job;
int r, count = 0;
while ((job = pop(jobs))) {
r = fn(job);
if (r < 0) {
/* error this rogue job */
if (job->rw == WRITE)
job->write_err = (unsigned int) -1;
else
job->read_err = 1;
push(&_complete_jobs, job);
break;
}
if (r > 0) {
/*
* We couldn't service this job ATM, so
* push this job back onto the list.
*/
push(jobs, job);
break;
}
count++;
}
return count;
}
/*
* kcopyd does this every time it's woken up.
*/
static void do_work(void)
{
/*
* The order that these are called is *very* important.
* complete jobs can free some pages for pages jobs.
* Pages jobs when successful will jump onto the io jobs
* list. io jobs call wake when they complete and it all
* starts again.
*/
process_jobs(&_complete_jobs, run_complete_job);
process_jobs(&_pages_jobs, run_pages_job);
process_jobs(&_io_jobs, run_io_job);
run_task_queue(&tq_disk);
}
/*
* If we are copying a small region we just dispatch a single job
* to do the copy, otherwise the io has to be split up into many
* jobs.
*/
static void dispatch_job(struct kcopyd_job *job)
{
push(&_pages_jobs, job);
dm_daemon_wake(&_kcopyd);
}
static void segment_complete(int read_err,
unsigned int write_err, void *context)
{
/* FIXME: tidy this function */
sector_t progress = 0;
sector_t count = 0;
struct kcopyd_job *job = (struct kcopyd_job *) context;
down(&job->lock);
/* update the error */
if (read_err)
job->read_err = 1;
if (write_err)
job->write_err &= write_err;
/*
* Only dispatch more work if there hasn't been an error.
*/
if ((!job->read_err && !job->write_err) ||
test_bit(KCOPYD_IGNORE_ERROR, &job->flags)) {
/* get the next chunk of work */
progress = job->progress;
count = job->source.count - progress;
if (count) {
if (count > SUB_JOB_SIZE)
count = SUB_JOB_SIZE;
job->progress += count;
}
}
up(&job->lock);
if (count) {
int i;
struct kcopyd_job *sub_job = mempool_alloc(_job_pool, GFP_NOIO);
memcpy(sub_job, job, sizeof(*job));
sub_job->source.sector += progress;
sub_job->source.count = count;
for (i = 0; i < job->num_dests; i++) {
sub_job->dests[i].sector += progress;
sub_job->dests[i].count = count;
}
sub_job->fn = segment_complete;
sub_job->context = job;
dispatch_job(sub_job);
} else if (atomic_dec_and_test(&job->sub_jobs)) {
/*
* To avoid a race we must keep the job around
* until after the notify function has completed.
* Otherwise the client may try and stop the job
* after we've completed.
*/
job->fn(read_err, write_err, job->context);
mempool_free(job, _job_pool);
}
}
/*
* Create some little jobs that will do the move between
* them.
*/
static void split_job(struct kcopyd_job *job)
{
int nr;
nr = dm_div_up(job->source.count, SUB_JOB_SIZE);
if (nr > job->kc->max_split)
nr = job->kc->max_split;
atomic_set(&job->sub_jobs, nr);
while (nr--)
segment_complete(0, 0u, job);
}
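/*
* Worked example (assuming SUB_JOB_SIZE is 128 sectors, i.e. 64KB): a
* 1000 sector source gives nr = dm_div_up(1000, 128) = 8 sub jobs,
* capped at kc->max_split. Each call to segment_complete() hands out
* the next chunk of at most SUB_JOB_SIZE sectors until job->progress
* reaches job->source.count.
*/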
int kcopyd_copy(struct kcopyd_client *kc, struct io_region *from,
unsigned int num_dests, struct io_region *dests,
unsigned int flags, kcopyd_notify_fn fn, void *context)
{
struct kcopyd_job *job;
/*
* Allocate a new job.
*/
job = mempool_alloc(_job_pool, GFP_NOIO);
/*
* set up for the read.
*/
job->kc = kc;
job->flags = flags;
job->read_err = 0;
job->write_err = 0;
job->rw = READ;
memcpy(&job->source, from, sizeof(*from));
job->num_dests = num_dests;
memcpy(&job->dests, dests, sizeof(*dests) * num_dests);
job->offset = 0;
job->nr_pages = 0;
INIT_LIST_HEAD(&job->pages);
job->fn = fn;
job->context = context;
if (job->source.count < SUB_JOB_SIZE)
dispatch_job(job);
else {
init_MUTEX(&job->lock);
job->progress = 0;
split_job(job);
}
return 0;
}
/*
* Cancels a kcopyd job, eg. someone might be deactivating a
* mirror.
*/
int kcopyd_cancel(struct kcopyd_job *job, int block)
{
/* FIXME: finish */
return -1;
}
/*-----------------------------------------------------------------
* Unit setup
*---------------------------------------------------------------*/
static DECLARE_MUTEX(_client_lock);
static LIST_HEAD(_clients);
static int client_add(struct kcopyd_client *kc)
{
down(&_client_lock);
list_add(&kc->list, &_clients);
up(&_client_lock);
return 0;
}
static void client_del(struct kcopyd_client *kc)
{
down(&_client_lock);
list_del(&kc->list);
up(&_client_lock);
}
int kcopyd_client_create(unsigned int nr_pages, struct kcopyd_client **result)
{
int r = 0;
struct kcopyd_client *kc;
if (nr_pages * SECTORS_PER_PAGE < SUB_JOB_SIZE) {
DMERR("kcopyd client requested %u pages: minimum is %lu",
nr_pages, SUB_JOB_SIZE / SECTORS_PER_PAGE);
return -ENOMEM;
}
kc = kmalloc(sizeof(*kc), GFP_KERNEL);
if (!kc)
return -ENOMEM;
kc->lock = SPIN_LOCK_UNLOCKED;
INIT_LIST_HEAD(&kc->pages);
kc->nr_pages = kc->nr_free_pages = 0;
r = client_alloc_pages(kc, nr_pages);
if (r) {
kfree(kc);
return r;
}
r = dm_io_get(nr_pages);
if (r) {
client_free_pages(kc);
kfree(kc);
return r;
}
r = client_add(kc);
if (r) {
dm_io_put(nr_pages);
client_free_pages(kc);
kfree(kc);
return r;
}
*result = kc;
return 0;
}
void kcopyd_client_destroy(struct kcopyd_client *kc)
{
dm_io_put(kc->nr_pages);
client_free_pages(kc);
client_del(kc);
kfree(kc);
}
int __init kcopyd_init(void)
{
int r;
r = jobs_init();
if (r)
return r;
r = dm_daemon_start(&_kcopyd, "kcopyd", do_work);
if (r)
jobs_exit();
return r;
}
void kcopyd_exit(void)
{
jobs_exit();
dm_daemon_stop(&_kcopyd);
}
EXPORT_SYMBOL(kcopyd_client_create);
EXPORT_SYMBOL(kcopyd_client_destroy);
EXPORT_SYMBOL(kcopyd_copy);
EXPORT_SYMBOL(kcopyd_cancel);

View File

@ -0,0 +1,637 @@
/*
* kcopyd.c
*
* Copyright (C) 2002 Sistina Software (UK) Limited.
*
* This file is released under the GPL.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/device-mapper.h>
#include "dm.h"
/* Hard sector size used all over the kernel */
#define SECTOR_SIZE 512
/* Number of entries in the free list */
#define FREE_LIST_SIZE 32
/* Number of iobufs we have, therefore the number of I/Os we
can be doing at once */
#define NUM_IOBUFS 16
/* Slab cache for work entries when the freelist runs out */
static kmem_cache_t *entry_cachep;
/* Structure of work to do in the list */
struct copy_work
{
unsigned long fromsec;
unsigned long tosec;
unsigned long nr_sectors;
unsigned long done_sectors;
kdev_t fromdev;
kdev_t todev;
int throttle;
int priority; /* 0=highest */
void (*callback)(copy_cb_reason_t, void *, long);
void *context; /* Parameter for callback */
int freelist; /* Whether we came from the free list */
struct iobuf_entry *iobuf;
struct list_head list;
};
/* The free list of iobufs */
struct iobuf_entry
{
struct kiobuf *iobuf;
struct copy_work *work; /* Work entry we are doing */
struct list_head list;
copy_cb_reason_t complete_reason;
long nr_sectors;
int rw;
};
static LIST_HEAD(work_list); /* Work to do or multiple-read blocks in progress */
static LIST_HEAD(write_list); /* Writes to do */
static LIST_HEAD(free_list); /* Free work units */
static LIST_HEAD(iobuf_list); /* Free iobufs */
static LIST_HEAD(complete_list); /* work entries completed waiting notification */
static struct task_struct *copy_task = NULL;
static struct rw_semaphore work_list_lock;
static struct rw_semaphore free_list_lock;
static spinlock_t write_list_spinlock = SPIN_LOCK_UNLOCKED;
static spinlock_t complete_list_spinlock = SPIN_LOCK_UNLOCKED;
static DECLARE_MUTEX(start_lock);
static DECLARE_MUTEX(run_lock);
static DECLARE_WAIT_QUEUE_HEAD(start_waitq);
static DECLARE_WAIT_QUEUE_HEAD(work_waitq);
static DECLARE_WAIT_QUEUE_HEAD(freelist_waitq);
static int thread_exit = 0;
/* Find a free entry from the free-list or allocate a new one.
This routine always returns a valid pointer even if it has to wait
for it */
static struct copy_work *get_work_struct(void)
{
struct copy_work *entry = NULL;
while (!entry) {
down_write(&free_list_lock);
if (!list_empty(&free_list)) {
entry = list_entry(free_list.next, struct copy_work, list);
list_del(&entry->list);
}
up_write(&free_list_lock);
/* Nothing on the free-list - try to allocate one without doing IO */
if (!entry) {
entry = kmem_cache_alloc(entry_cachep, GFP_NOIO);
/* Make sure we know it didn't come from the free list */
if (entry) {
entry->freelist = 0;
}
}
/* Failed...wait for IO to finish */
if (!entry) {
DECLARE_WAITQUEUE(wq, current);
set_task_state(current, TASK_INTERRUPTIBLE);
add_wait_queue(&freelist_waitq, &wq);
if (list_empty(&free_list))
schedule();
set_task_state(current, TASK_RUNNING);
remove_wait_queue(&freelist_waitq, &wq);
}
}
return entry;
}
/* Add a new entry to the work list - in priority+FIFO order.
The work_list_lock semaphore must be held */
static void add_to_work_list(struct copy_work *item)
{
struct list_head *entry;
list_for_each(entry, &work_list) {
struct copy_work *cw;
cw = list_entry(entry, struct copy_work, list);
if (cw->priority > item->priority) {
__list_add(&item->list, cw->list.prev, &cw->list);
return;
}
}
list_add_tail(&item->list, &work_list);
}
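/*
* For example, inserting a priority 1 item into a list holding
* priorities [0, 0, 1, 2] places it just before the 2, i.e. after the
* existing 1, so entries of equal priority keep their FIFO order.
*/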
/* Called when the kio has finished - add the used bits back to their
free lists and notify the user */
static void end_copy(struct iobuf_entry *ioe, copy_cb_reason_t reason)
{
/* Tell the caller */
if (ioe->work->callback)
ioe->work->callback(reason, ioe->work->context, ioe->work->done_sectors);
down_write(&free_list_lock);
if (ioe->work->freelist) {
list_add(&ioe->work->list, &free_list);
}
else {
kmem_cache_free(entry_cachep, ioe->work);
}
list_add(&ioe->list, &iobuf_list);
up_write(&free_list_lock);
wake_up_interruptible(&freelist_waitq);
}
/* A single BH has completed */
static void end_bh(struct buffer_head *bh, int uptodate)
{
struct kiobuf *kiobuf = bh->b_private;
mark_buffer_uptodate(bh, uptodate);
unlock_buffer(bh);
if ((!uptodate) && !kiobuf->errno)
kiobuf->errno = -EIO;
/* Have all of them done ? */
if (atomic_dec_and_test(&kiobuf->io_count)) {
if (kiobuf->end_io)
kiobuf->end_io(kiobuf);
}
}
/* The whole iobuf has finished */
static void end_kiobuf(struct kiobuf *iobuf)
{
struct iobuf_entry *ioe;
/* Now, where did we leave that pointer...ah yes... */
ioe = (struct iobuf_entry *)iobuf->blocks[0];
if (ioe->rw == READ) {
if (iobuf->errno) {
ioe->complete_reason = COPY_CB_FAILED_READ;
spin_lock_irq(&complete_list_spinlock);
list_add(&ioe->list, &complete_list);
spin_unlock_irq(&complete_list_spinlock);
wake_up_interruptible(&work_waitq);
}
else {
/* Put it on the write list */
spin_lock_irq(&write_list_spinlock);
list_add(&ioe->work->list, &write_list);
spin_unlock_irq(&write_list_spinlock);
wake_up_interruptible(&work_waitq);
}
}
else {
/* WRITE */
if (iobuf->errno) {
ioe->complete_reason = COPY_CB_FAILED_WRITE;
spin_lock_irq(&complete_list_spinlock);
list_add(&ioe->list, &complete_list);
spin_unlock_irq(&complete_list_spinlock);
wake_up_interruptible(&work_waitq);
}
else {
/* All went well */
ioe->work->done_sectors += ioe->nr_sectors;
/* If not finished yet then do a progress callback */
if (ioe->work->done_sectors < ioe->work->nr_sectors) {
if (ioe->work->callback)
ioe->work->callback(COPY_CB_PROGRESS, ioe->work->context, ioe->work->done_sectors);
/* Put it back in the queue */
down_write(&work_list_lock);
add_to_work_list(ioe->work);
up_write(&work_list_lock);
wake_up_interruptible(&work_waitq);
}
else {
ioe->complete_reason = COPY_CB_COMPLETE;
spin_lock_irq(&complete_list_spinlock);
list_add(&ioe->list, &complete_list);
spin_unlock_irq(&complete_list_spinlock);
wake_up_interruptible(&work_waitq);
}
}
}
}
/* Asynchronous simplified version of brw_kiovec */
static int brw_kiobuf_async(int rw, struct iobuf_entry *ioe, unsigned long blocknr, kdev_t dev)
{
int r, length, pi, bi = 0, offset, bsize;
int nr_pages, nr_blocks;
struct page *map;
struct buffer_head *bh = 0;
struct buffer_head **bhs = 0;
length = ioe->iobuf->length;
ioe->iobuf->errno = 0;
bhs = ioe->iobuf->bh;
bsize = get_hardsect_size(dev);
nr_pages = length / PAGE_SIZE;
nr_blocks = ioe->nr_sectors / (bsize/SECTOR_SIZE);
/* Squirrel our pointer away somewhere secret */
ioe->iobuf->blocks[0] = (long)ioe;
ioe->iobuf->end_io = end_kiobuf;
for (pi = 0; pi < nr_pages; pi++) {
if (!(map = ioe->iobuf->maplist[pi])) {
r = -EFAULT;
goto bad;
}
offset = 0;
while (offset < PAGE_SIZE) {
bh = bhs[bi++];
bh->b_dev = B_FREE;
bh->b_size = bsize;
set_bh_page(bh, map, offset);
bh->b_this_page = bh;
init_buffer(bh, end_bh, ioe->iobuf);
bh->b_dev = dev;
bh->b_blocknr = blocknr++;
bh->b_private = ioe->iobuf;
bh->b_state = ((1 << BH_Mapped) |
(1 << BH_Lock) |
(1 << BH_Req));
set_bit(BH_Uptodate, &bh->b_state);
if (rw == WRITE)
clear_bit(BH_Dirty, &bh->b_state);
offset += bsize;
atomic_inc(&ioe->iobuf->io_count);
submit_bh(rw, bh);
if (atomic_read(&ioe->iobuf->io_count) >= nr_blocks)
break;
}
}
return 0;
bad:
ioe->iobuf->errno = r;
return r;
}
/* Allocate pages for a kiobuf */
static int alloc_iobuf_pages(struct kiobuf *iobuf, int nr_sectors)
{
int nr_pages, err, i;
if (nr_sectors > KIO_MAX_SECTORS)
return -1;
nr_pages = nr_sectors / (PAGE_SIZE/SECTOR_SIZE);
err = expand_kiobuf(iobuf, nr_pages);
if (err) goto out;
err = -ENOMEM;
iobuf->locked = 1;
iobuf->nr_pages = 0;
for (i = 0; i < nr_pages; i++) {
struct page * page;
page = alloc_page(GFP_KERNEL);
if (!page) goto out;
iobuf->maplist[i] = page;
LockPage(page);
iobuf->nr_pages++;
}
iobuf->offset = 0;
err = 0;
out:
return err;
}
/* Read/write chunk of data */
static int do_io(int rw, struct iobuf_entry *ioe, kdev_t dev, unsigned long start, int nr_sectors)
{
int sectors_per_block;
int blocksize = get_hardsect_size(dev);
sectors_per_block = blocksize / SECTOR_SIZE;
start /= sectors_per_block;
ioe->iobuf->length = nr_sectors << 9;
ioe->rw = rw;
ioe->nr_sectors = nr_sectors;
return brw_kiobuf_async(rw, ioe, start, dev);
}
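/*
* For example, with a 4096 byte hard sector size sectors_per_block is 8,
* so a start of 2048 (512 byte) sectors becomes block 256, and a 64
* sector transfer sets iobuf->length to 64 << 9 = 32768 bytes.
*/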
/* This is where all the real work happens */
static int copy_kthread(void *unused)
{
daemonize();
down(&run_lock);
strcpy(current->comm, "kcopyd");
copy_task = current;
wake_up_interruptible(&start_waitq);
do {
DECLARE_WAITQUEUE(wq, current);
struct task_struct *tsk = current;
struct list_head *entry, *temp;
/* First, check for outstanding writes to do */
spin_lock_irq(&write_list_spinlock);
list_for_each_safe(entry, temp, &write_list) {
struct copy_work *work_item = list_entry(entry, struct copy_work, list);
struct iobuf_entry *ioe = work_item->iobuf;
list_del(&work_item->list);
spin_unlock_irq(&write_list_spinlock);
/* OK we read the data, now write it to the target device */
if (do_io(WRITE, ioe, work_item->todev,
work_item->tosec + work_item->done_sectors,
ioe->nr_sectors) != 0) {
DMERR("Write blocks to device %s failed", kdevname(work_item->todev));
end_copy(ioe, COPY_CB_FAILED_WRITE);
}
spin_lock_irq(&write_list_spinlock);
}
spin_unlock_irq(&write_list_spinlock);
/* Now look for new work, remember the list is in priority order */
down_write(&work_list_lock);
while (!list_empty(&work_list) && !list_empty(&iobuf_list)) {
struct copy_work *work_item = list_entry(work_list.next, struct copy_work, list);
struct iobuf_entry *ioe = list_entry(iobuf_list.next, struct iobuf_entry, list);
long nr_sectors = min((unsigned long)KIO_MAX_SECTORS,
work_item->nr_sectors - work_item->done_sectors);
list_del(&work_item->list);
list_del(&ioe->list);
up_write(&work_list_lock);
/* Exchange pointers, this is legal for structures over 16 */
ioe->work = work_item;
work_item->iobuf = ioe;
/* Read original blocks */
if (do_io(READ, ioe, work_item->fromdev, work_item->fromsec + work_item->done_sectors,
nr_sectors) != 0) {
DMERR("Read blocks from device %s failed", kdevname(work_item->fromdev));
end_copy(ioe, COPY_CB_FAILED_READ);
}
/* Get the work lock again for the top of the while loop */
down_write(&work_list_lock);
}
up_write(&work_list_lock);
/* Wait for more work */
set_task_state(tsk, TASK_INTERRUPTIBLE);
add_wait_queue(&work_waitq, &wq);
/* No work, or nothing to do it with */
if ( (list_empty(&work_list) || list_empty(&iobuf_list)) &&
list_empty(&complete_list) &&
list_empty(&write_list))
schedule();
set_task_state(tsk, TASK_RUNNING);
remove_wait_queue(&work_waitq, &wq);
/* Check for completed entries and do the callbacks */
spin_lock_irq(&complete_list_spinlock);
list_for_each_safe(entry, temp, &complete_list) {
struct iobuf_entry *ioe = list_entry(entry, struct iobuf_entry, list);
list_del(&ioe->list);
spin_unlock_irq(&complete_list_spinlock);
end_copy(ioe, ioe->complete_reason);
spin_lock_irq(&complete_list_spinlock);
}
spin_unlock_irq(&complete_list_spinlock);
} while (thread_exit == 0);
up(&run_lock);
DMINFO("kcopyd shutting down");
return 0;
}
/* API entry point */
int dm_blockcopy(unsigned long fromsec, unsigned long tosec, unsigned long nr_sectors,
kdev_t fromdev, kdev_t todev,
int priority, int throttle, void (*callback)(copy_cb_reason_t, void *, long), void *context)
{
struct copy_work *newwork;
static pid_t thread_pid = 0;
long from_blocksize = get_hardsect_size(fromdev);
long to_blocksize = get_hardsect_size(todev);
/* Make sure the start sectors are on physical block boundaries */
if (fromsec % (from_blocksize/SECTOR_SIZE))
return -EINVAL;
if (tosec % (to_blocksize/SECTOR_SIZE))
return -EINVAL;
/* Start the thread if we don't have one already */
down(&start_lock);
if (copy_task == NULL) {
thread_pid = kernel_thread(copy_kthread, NULL, 0);
if (thread_pid > 0) {
DECLARE_WAITQUEUE(wq, current);
struct task_struct *tsk = current;
DMINFO("Started kcopyd thread, %d buffers", NUM_IOBUFS);
/* Wait for it to complete its startup initialisation */
set_task_state(tsk, TASK_INTERRUPTIBLE);
add_wait_queue(&start_waitq, &wq);
if (!copy_task)
schedule();
set_task_state(tsk, TASK_RUNNING);
remove_wait_queue(&start_waitq, &wq);
}
else {
DMERR("Failed to start kcopyd thread");
up(&start_lock);
return -EAGAIN;
}
}
up(&start_lock);
/* This will wait until one is available */
newwork = get_work_struct();
newwork->fromsec = fromsec;
newwork->tosec = tosec;
newwork->fromdev = fromdev;
newwork->todev = todev;
newwork->nr_sectors = nr_sectors;
newwork->done_sectors = 0;
newwork->throttle = throttle;
newwork->priority = priority;
newwork->callback = callback;
newwork->context = context;
down_write(&work_list_lock);
add_to_work_list(newwork);
up_write(&work_list_lock);
wake_up_interruptible(&work_waitq);
return 0;
}
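/*
* Rough usage sketch (copy_done, my_ctx and finish_my_copy are
* hypothetical names):
*
*	static void copy_done(copy_cb_reason_t reason, void *ctx, long done)
*	{
*		if (reason == COPY_CB_COMPLETE)
*			finish_my_copy(ctx);
*	}
*
*	r = dm_blockcopy(0, 0, nr_sectors, fromdev, todev,
*			 0, 0, copy_done, my_ctx);
*
* Priority 0 is the highest; for copies larger than one kiobuf the
* callback also receives COPY_CB_PROGRESS notifications carrying the
* number of sectors done so far.
*/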
/* Pre-allocate some structures for the free list */
static int allocate_free_list(void)
{
int i;
struct copy_work *newwork;
for (i=0; i<FREE_LIST_SIZE; i++) {
newwork = kmalloc(sizeof(struct copy_work), GFP_KERNEL);
if (!newwork)
return i;
newwork->freelist = 1;
list_add(&newwork->list, &free_list);
}
return i;
}
static void free_iobufs(void)
{
struct list_head *entry, *temp;
list_for_each_safe(entry, temp, &iobuf_list) {
struct iobuf_entry *ioe = list_entry(entry, struct iobuf_entry, list);
unmap_kiobuf(ioe->iobuf);
free_kiovec(1, &ioe->iobuf);
list_del(&ioe->list);
}
}
int __init kcopyd_init(void)
{
int i;
init_rwsem(&work_list_lock);
init_rwsem(&free_list_lock);
init_MUTEX(&start_lock);
init_MUTEX(&run_lock);
for (i=0; i< NUM_IOBUFS; i++) {
struct iobuf_entry *entry = kmalloc(sizeof(struct iobuf_entry), GFP_KERNEL);
if (entry == NULL) {
DMERR("Unable to allocate memory for kiobuf");
free_iobufs();
return -1;
}
if (alloc_kiovec(1, &entry->iobuf)) {
DMERR("Unable to allocate kiobuf for kcopyd");
kfree(entry);
free_iobufs();
return -1;
}
if (alloc_iobuf_pages(entry->iobuf, KIO_MAX_SECTORS)) {
DMERR("Unable to allocate pages for kcopyd");
free_kiovec(1, &entry->iobuf);
kfree(entry);
free_iobufs();
return -1;
}
list_add(&entry->list, &iobuf_list);
}
entry_cachep = kmem_cache_create("kcopyd",
sizeof(struct copy_work),
__alignof__(struct copy_work),
0, NULL, NULL);
if (!entry_cachep) {
free_iobufs();
DMERR("Unable to allocate slab cache for kcopyd");
return -1;
}
if (allocate_free_list() == 0) {
free_iobufs();
kmem_cache_destroy(entry_cachep);
DMERR("Unable to allocate any work structures for the free list");
return -1;
}
return 0;
}
void kcopyd_exit(void)
{
struct list_head *entry, *temp;
thread_exit = 1;
wake_up_interruptible(&work_waitq);
/* Wait for the thread to finish */
down(&run_lock);
up(&run_lock);
/* Free the iobufs */
free_iobufs();
/* Free the free list */
list_for_each_safe(entry, temp, &free_list) {
struct copy_work *cw;
cw = list_entry(entry, struct copy_work, list);
list_del(&cw->list);
kfree(cw);
}
if (entry_cachep)
kmem_cache_destroy(entry_cachep);
}
EXPORT_SYMBOL(dm_blockcopy);
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

View File

@ -0,0 +1,47 @@
/*
* Copyright (C) 2001 Sistina Software
*
* This file is released under the GPL.
*/
#ifndef DM_KCOPYD_H
#define DM_KCOPYD_H
/*
* Needed for the definition of offset_t.
*/
#include <linux/device-mapper.h>
#include <linux/iobuf.h>
#include "dm-io.h"
int kcopyd_init(void);
void kcopyd_exit(void);
/* FIXME: make this configurable */
#define KCOPYD_MAX_REGIONS 8
#define KCOPYD_IGNORE_ERROR 1
/*
* To use kcopyd you must first create a kcopyd client object.
*/
struct kcopyd_client;
int kcopyd_client_create(unsigned int num_pages, struct kcopyd_client **result);
void kcopyd_client_destroy(struct kcopyd_client *kc);
/*
* Submit a copy job to kcopyd, using a client created above.
*
* read_err is a boolean,
* write_err is a bitset, with 1 bit for each destination region
*/
typedef void (*kcopyd_notify_fn)(int read_err,
unsigned int write_err, void *context);
int kcopyd_copy(struct kcopyd_client *kc, struct io_region *from,
unsigned int num_dests, struct io_region *dests,
unsigned int flags, kcopyd_notify_fn fn, void *context);
#endif
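/*
* Rough client usage sketch (copy_done, from, dests and ctx are
* hypothetical names; 64 pages is only an illustrative client size):
*
*	static void copy_done(int read_err, unsigned int write_err, void *ctx)
*	{
*		... read_err is a boolean, write_err has one bit set
*		    for each destination region that failed ...
*	}
*
*	struct kcopyd_client *kc;
*	kcopyd_client_create(64, &kc);
*	kcopyd_copy(kc, &from, num_dests, dests, 0, copy_done, ctx);
*	...
*	kcopyd_client_destroy(kc);
*/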

View File

@ -0,0 +1,136 @@
/*
* dmfs-error.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#include "dm.h"
#include "dmfs.h"
#include <linux/list.h>
#include <linux/seq_file.h>
struct dmfs_error {
struct list_head list;
unsigned len;
char *msg;
};
static struct dmfs_error oom_error;
static struct list_head oom_list = {
next: &oom_error.list,
prev: &oom_error.list,
};
static struct dmfs_error oom_error = {
list: {next: &oom_list, prev:&oom_list},
len: 39,
msg: "Out of memory during creation of table\n",
};
int dmfs_error_revalidate(struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
struct inode *parent = dentry->d_parent->d_inode;
if (!list_empty(&DMFS_I(parent)->errors))
inode->i_size = 1;
else
inode->i_size = 0;
return 0;
}
void dmfs_add_error(struct inode *inode, unsigned num, char *str)
{
struct dmfs_i *dmi = DMFS_I(inode);
int len = strlen(str) + sizeof(struct dmfs_error) + 12;
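/* the extra 12 bytes cover the "%8u: " prefix (10 chars), the '\n' and the NUL */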
struct dmfs_error *e = kmalloc(len, GFP_KERNEL);
if (e) {
e->msg = (char *)(e + 1);
e->len = sprintf(e->msg, "%8u: %s\n", num, str);
list_add(&e->list, &dmi->errors);
}
}
void dmfs_zap_errors(struct inode *inode)
{
struct dmfs_i *dmi = DMFS_I(inode);
struct dmfs_error *e;
while (!list_empty(&dmi->errors)) {
e = list_entry(dmi->errors.next, struct dmfs_error, list);
list_del(&e->list);
kfree(e);
}
}
static void *e_start(struct seq_file *e, loff_t *pos)
{
struct list_head *p;
loff_t n = *pos;
struct dmfs_i *dmi = e->context;
down(&dmi->sem);
if (dmi->status) {
list_for_each(p, &oom_list)
if (n-- == 0)
return list_entry(p, struct dmfs_error, list);
} else {
list_for_each(p, &dmi->errors)
if (n-- == 0)
return list_entry(p, struct dmfs_error, list);
}
return NULL;
}
static void *e_next(struct seq_file *e, void *v, loff_t *pos)
{
struct dmfs_i *dmi = e->context;
struct list_head *p = ((struct dmfs_error *)v)->list.next;
(*pos)++;
return (p == &dmi->errors) ||
(p == &oom_list) ? NULL : list_entry(p, struct dmfs_error, list);
}
static void e_stop(struct seq_file *e, void *v)
{
struct dmfs_i *dmi = e->context;
up(&dmi->sem);
}
static int show_error(struct seq_file *e, void *v)
{
struct dmfs_error *d = v;
seq_puts(e, d->msg);
return 0;
}
struct seq_operations dmfs_error_seq_ops = {
start: e_start,
next: e_next,
stop: e_stop,
show: show_error,
};

View File

@ -0,0 +1,256 @@
/*
* dmfs-lv.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
/* Heavily based upon ramfs */
#include "dm.h"
#include "dmfs.h"
#include <linux/seq_file.h>
struct dmfs_inode_info {
const char *name;
struct inode *(*create)(struct inode *, int, struct seq_operations *,
int);
struct seq_operations *seq_ops;
int type;
};
#define DMFS_SEQ(inode) ((struct seq_operations *)(inode)->u.generic_ip)
extern struct inode *dmfs_create_table(struct inode *, int,
struct seq_operations *, int);
extern struct seq_operations dmfs_error_seq_ops;
extern struct seq_operations dmfs_status_seq_ops;
extern struct seq_operations dmfs_suspend_seq_ops;
extern ssize_t dmfs_suspend_write(struct file *file, const char *buf,
size_t size, loff_t * ppos);
extern int dmfs_error_revalidate(struct dentry *dentry);
static int dmfs_seq_open(struct inode *inode, struct file *file)
{
int ret = seq_open(file, DMFS_SEQ(inode));
if (ret >= 0) {
struct seq_file *seq = file->private_data;
seq->context = DMFS_I(file->f_dentry->d_parent->d_inode);
}
return ret;
}
static int dmfs_no_fsync(struct file *file, struct dentry *dentry, int datasync)
{
return 0;
};
static struct file_operations dmfs_suspend_file_operations = {
open: dmfs_seq_open,
read: seq_read,
llseek: seq_lseek,
release: seq_release,
write: dmfs_suspend_write,
fsync: dmfs_no_fsync,
};
static struct inode_operations dmfs_null_inode_operations = {
};
static struct inode_operations dmfs_error_inode_operations = {
revalidate: dmfs_error_revalidate
};
static struct file_operations dmfs_seq_ro_file_operations = {
open: dmfs_seq_open,
read: seq_read,
llseek: seq_lseek,
release: seq_release,
fsync: dmfs_no_fsync,
};
static struct inode *dmfs_create_seq_ro(struct inode *dir, int mode,
struct seq_operations *seq_ops, int dev)
{
struct inode *inode = dmfs_new_inode(dir->i_sb, mode | S_IFREG);
if (inode) {
inode->i_fop = &dmfs_seq_ro_file_operations;
inode->i_op = &dmfs_null_inode_operations;
DMFS_SEQ(inode) = seq_ops;
}
return inode;
}
static struct inode *dmfs_create_error(struct inode *dir, int mode,
struct seq_operations *seq_ops, int dev)
{
struct inode *inode = dmfs_new_inode(dir->i_sb, mode | S_IFREG);
if (inode) {
inode->i_fop = &dmfs_seq_ro_file_operations;
inode->i_op = &dmfs_error_inode_operations;
DMFS_SEQ(inode) = seq_ops;
}
return inode;
}
static struct inode *dmfs_create_device(struct inode *dir, int mode,
struct seq_operations *seq_ops, int dev)
{
struct inode *inode = dmfs_new_inode(dir->i_sb, mode | S_IFBLK);
if (inode) {
init_special_inode(inode, mode | S_IFBLK, dev);
}
return inode;
}
static struct inode *dmfs_create_suspend(struct inode *dir, int mode,
struct seq_operations *seq_ops,
int dev)
{
struct inode *inode = dmfs_create_seq_ro(dir, mode, seq_ops, dev);
if (inode) {
inode->i_fop = &dmfs_suspend_file_operations;
}
return inode;
}
static int dmfs_lv_unlink(struct inode *dir, struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
inode->i_mapping = &inode->i_data;
inode->i_nlink--;
return 0;
}
static struct dmfs_inode_info dmfs_ii[] = {
{".", NULL, NULL, DT_DIR},
{"..", NULL, NULL, DT_DIR},
{"table", dmfs_create_table, NULL, DT_REG},
{"error", dmfs_create_error, &dmfs_error_seq_ops, DT_REG},
{"status", dmfs_create_seq_ro, &dmfs_status_seq_ops, DT_REG},
{"device", dmfs_create_device, NULL, DT_BLK},
{"suspend", dmfs_create_suspend, &dmfs_suspend_seq_ops, DT_REG},
};
#define NR_DMFS_II (sizeof(dmfs_ii)/sizeof(struct dmfs_inode_info))
static struct dmfs_inode_info *dmfs_find_by_name(const char *n, int len)
{
int i;
for (i = 2; i < NR_DMFS_II; i++) {
if (strlen(dmfs_ii[i].name) != len)
continue;
if (memcmp(dmfs_ii[i].name, n, len) == 0)
return &dmfs_ii[i];
}
return NULL;
}
static struct dentry *dmfs_lv_lookup(struct inode *dir, struct dentry *dentry)
{
struct inode *inode = NULL;
struct dmfs_inode_info *ii;
ii = dmfs_find_by_name(dentry->d_name.name, dentry->d_name.len);
if (ii) {
int dev = kdev_t_to_nr(DMFS_I(dir)->md->dev);
inode = ii->create(dir, 0600, ii->seq_ops, dev);
}
d_add(dentry, inode);
return NULL;
}
static int dmfs_inum(int entry, struct dentry *dentry)
{
if (entry == 0)
return dentry->d_inode->i_ino;
if (entry == 1)
return dentry->d_parent->d_inode->i_ino;
return entry;
}
static int dmfs_lv_readdir(struct file *filp, void *dirent, filldir_t filldir)
{
struct dentry *dentry = filp->f_dentry;
struct dmfs_inode_info *ii;
while (filp->f_pos < NR_DMFS_II) {
ii = &dmfs_ii[filp->f_pos];
if (filldir(dirent, ii->name, strlen(ii->name), filp->f_pos,
dmfs_inum(filp->f_pos, dentry), ii->type) < 0)
break;
filp->f_pos++;
}
return 0;
}
static int dmfs_lv_sync(struct file *file, struct dentry *dentry, int datasync)
{
return 0;
}
static struct file_operations dmfs_lv_file_operations = {
read: generic_read_dir,
readdir: dmfs_lv_readdir,
fsync: dmfs_lv_sync,
};
static struct inode_operations dmfs_lv_inode_operations = {
lookup: dmfs_lv_lookup,
unlink: dmfs_lv_unlink,
};
struct inode *dmfs_create_lv(struct super_block *sb, int mode,
struct dentry *dentry)
{
struct inode *inode = dmfs_new_private_inode(sb, mode | S_IFDIR);
struct mapped_device *md;
const char *name = dentry->d_name.name;
char tmp_name[DM_NAME_LEN + 1];
struct dm_table *table;
int ret = -ENOMEM;
if (inode) {
ret = dm_table_create(&table);
if (!ret) {
ret = dm_table_complete(table);
if (!ret) {
inode->i_fop = &dmfs_lv_file_operations;
inode->i_op = &dmfs_lv_inode_operations;
memcpy(tmp_name, name, dentry->d_name.len);
tmp_name[dentry->d_name.len] = 0;
ret = dm_create(tmp_name, -1, table, &md);
if (!ret) {
DMFS_I(inode)->md = md;
md->suspended = 1;
return inode;
}
}
dm_table_destroy(table);
}
iput(inode);
}
return ERR_PTR(ret);
}

View File

@ -0,0 +1,157 @@
/*
* dmfs-root.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
/* Heavily based upon ramfs */
#include "dm.h"
#include "dmfs.h"
extern struct inode *dmfs_create_lv(struct super_block *sb, int mode,
struct dentry *dentry);
static int is_identifier(const char *str, int len)
{
while (len--) {
if (!isalnum(*str) && *str != '_')
return 0;
str++;
}
return 1;
}
static int dmfs_root_mkdir(struct inode *dir, struct dentry *dentry, int mode)
{
struct inode *inode;
if (dentry->d_name.len >= DM_NAME_LEN)
return -EINVAL;
if (!is_identifier(dentry->d_name.name, dentry->d_name.len))
return -EPERM;
if (dentry->d_name.name[0] == '.')
return -EINVAL;
inode = dmfs_create_lv(dir->i_sb, mode, dentry);
if (!IS_ERR(inode)) {
d_instantiate(dentry, inode);
dget(dentry);
return 0;
}
return PTR_ERR(inode);
}
/*
* if u.generic_ip is not NULL, then it indicates an inode which
* represents a table. If it is NULL then the inode is a virtual
* file and should be deleted along with the directory.
*/
static inline int positive(struct dentry *dentry)
{
return dentry->d_inode && !d_unhashed(dentry);
}
static int empty(struct dentry *dentry)
{
struct list_head *list;
spin_lock(&dcache_lock);
list = dentry->d_subdirs.next;
while (list != &dentry->d_subdirs) {
struct dentry *de = list_entry(list, struct dentry, d_child);
if (positive(de)) {
spin_unlock(&dcache_lock);
return 0;
}
list = list->next;
}
spin_unlock(&dcache_lock);
return 1;
}
static int dmfs_root_rmdir(struct inode *dir, struct dentry *dentry)
{
int ret = -ENOTEMPTY;
if (empty(dentry)) {
struct inode *inode = dentry->d_inode;
ret = dm_destroy(DMFS_I(inode)->md);
if (ret == 0) {
DMFS_I(inode)->md = NULL;
inode->i_nlink--;
dput(dentry);
}
}
return ret;
}
static struct dentry *dmfs_root_lookup(struct inode *dir, struct dentry *dentry)
{
d_add(dentry, NULL);
return NULL;
}
static int dmfs_root_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
/* Can only rename - not move between directories! */
if (old_dir != new_dir)
return -EPERM;
return -EINVAL; /* FIXME: a change of LV name here */
}
static int dmfs_root_sync(struct file *file, struct dentry *dentry,
int datasync)
{
return 0;
}
static struct file_operations dmfs_root_file_operations = {
read: generic_read_dir,
readdir: dcache_readdir,
fsync: dmfs_root_sync,
};
static struct inode_operations dmfs_root_inode_operations = {
lookup: dmfs_root_lookup,
mkdir: dmfs_root_mkdir,
rmdir: dmfs_root_rmdir,
rename: dmfs_root_rename,
};
struct inode *dmfs_create_root(struct super_block *sb, int mode)
{
struct inode *inode = dmfs_new_inode(sb, mode | S_IFDIR);
if (inode) {
inode->i_fop = &dmfs_root_file_operations;
inode->i_op = &dmfs_root_inode_operations;
}
return inode;
}

View File

@ -0,0 +1,52 @@
/*
* dmfs-status.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#include "dm.h"
#include "dmfs.h"
#include <linux/seq_file.h>
static void *s_start(struct seq_file *s, loff_t *pos)
{
return NULL;
}
static void *s_next(struct seq_file *s, void *v, loff_t *pos)
{
return NULL;
}
static void s_stop(struct seq_file *s, void *v)
{
return;
}
static int s_show(struct seq_file *s, void *v)
{
return 0;
}
struct seq_operations dmfs_status_seq_ops = {
start: s_start,
next: s_next,
stop: s_stop,
show: s_show,
};

View File

@ -0,0 +1,158 @@
/*
* dmfs-super.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#include "dm.h"
#include "dmfs.h"
#include <linux/init.h>
#include <linux/kmod.h>
#define DMFS_MAGIC 0x444D4653
extern struct inode *dmfs_create_root(struct super_block *sb, int);
static int dmfs_statfs(struct super_block *sb, struct statfs *buf)
{
buf->f_type = sb->s_magic;
buf->f_bsize = sb->s_blocksize;
buf->f_namelen = DM_NAME_LEN - 1;
return 0;
}
static void dmfs_delete_inode(struct inode *inode)
{
if (S_ISDIR(inode->i_mode)) {
struct dmfs_i *dmi = DMFS_I(inode);
if (dmi) {
if (dmi->md)
BUG();
if (!list_empty(&dmi->errors))
dmfs_zap_errors(inode);
kfree(dmi);
MOD_DEC_USE_COUNT; /* Don't remove */
}
}
inode->u.generic_ip = NULL;
clear_inode(inode);
}
static struct super_operations dmfs_super_operations = {
statfs: dmfs_statfs,
put_inode: force_delete,
delete_inode: dmfs_delete_inode,
};
static struct super_block *dmfs_read_super(struct super_block *sb, void *data,
int silent)
{
struct inode *inode;
struct dentry *root;
sb->s_blocksize = PAGE_CACHE_SIZE;
sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
sb->s_magic = DMFS_MAGIC;
sb->s_op = &dmfs_super_operations;
sb->s_maxbytes = MAX_NON_LFS;
inode = dmfs_create_root(sb, 0755);
if (IS_ERR(inode))
return NULL;
root = d_alloc_root(inode);
if (!root) {
iput(inode);
return NULL;
}
sb->s_root = root;
return sb;
}
struct inode *dmfs_new_inode(struct super_block *sb, int mode)
{
struct inode *inode = new_inode(sb);
if (inode) {
inode->i_mode = mode;
inode->i_uid = current->fsuid;
inode->i_gid = current->fsgid;
inode->i_blksize = PAGE_CACHE_SIZE;
inode->i_blocks = 0;
inode->i_rdev = NODEV;
inode->i_atime = inode->i_ctime = inode->i_mtime = CURRENT_TIME;
}
return inode;
}
struct inode *dmfs_new_private_inode(struct super_block *sb, int mode)
{
struct inode *inode = dmfs_new_inode(sb, mode);
struct dmfs_i *dmi;
if (inode) {
dmi = kmalloc(sizeof(struct dmfs_i), GFP_KERNEL);
if (dmi == NULL) {
iput(inode);
return NULL;
}
memset(dmi, 0, sizeof(struct dmfs_i));
init_MUTEX(&dmi->sem);
INIT_LIST_HEAD(&dmi->errors);
inode->u.generic_ip = dmi;
MOD_INC_USE_COUNT; /* Don't remove */
}
return inode;
}
static DECLARE_FSTYPE(dmfs_fstype, "dmfs", dmfs_read_super, FS_SINGLE);
static struct vfsmount *dmfs_mnt;
int __init dm_interface_init(void)
{
int ret;
ret = register_filesystem(&dmfs_fstype);
if (ret < 0)
goto out;
dmfs_mnt = kern_mount(&dmfs_fstype);
if (IS_ERR(dmfs_mnt)) {
ret = PTR_ERR(dmfs_mnt);
unregister_filesystem(&dmfs_fstype);
} else
MOD_DEC_USE_COUNT; /* Yes, this really is correct... */
out:
return ret;
}
void __exit dm_interface_exit(void)
{
MOD_INC_USE_COUNT; /* So that it lands up being zero */
do_umount(dmfs_mnt, 0);
unregister_filesystem(&dmfs_fstype);
}

View File

@ -0,0 +1,111 @@
/*
* dmfs-suspend.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#include "dm.h"
#include "dmfs.h"
#include <linux/seq_file.h>
static void *s_start(struct seq_file *s, loff_t *pos)
{
struct dmfs_i *dmi = s->context;
if (*pos > 0)
return NULL;
down(&dmi->sem);
return (void *)1;
}
static void *s_next(struct seq_file *s, void *v, loff_t *pos)
{
(*pos)++;
return NULL;
}
static void s_stop(struct seq_file *s, void *v)
{
struct dmfs_i *dmi = s->context;
up(&dmi->sem);
}
static int s_show(struct seq_file *s, void *v)
{
struct dmfs_i *dmi = s->context;
char msg[3] = "1\n";
if (dmi->md->suspended == 0) {
msg[0] = '0';
}
seq_puts(s, msg);
return 0;
}
struct seq_operations dmfs_suspend_seq_ops = {
start: s_start,
next: s_next,
stop: s_stop,
show: s_show,
};
ssize_t dmfs_suspend_write(struct file *file, const char *buf, size_t count,
loff_t * ppos)
{
struct inode *dir = file->f_dentry->d_parent->d_inode;
struct dmfs_i *dmi = DMFS_I(dir);
int written = 0;
if (count == 0)
goto out;
if (count != 1 && count != 2)
return -EINVAL;
if (buf[0] != '0' && buf[0] != '1')
return -EINVAL;
down(&dmi->sem);
if (buf[0] == '0') {
if (get_exclusive_write_access(dir)) {
written = -EPERM;
goto out_unlock;
}
if (!list_empty(&dmi->errors)) {
put_write_access(dir);
written = -EPERM;
goto out_unlock;
}
written = dm_resume(dmi->md);
put_write_access(dir);
}
if (buf[0] == '1')
written = dm_suspend(dmi->md);
if (written >= 0)
written = count;
out_unlock:
up(&dmi->sem);
out:
return written;
}
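/*
* In short: writing "1" to the suspend file suspends the device and
* writing "0" resumes it; a resume is refused with -EPERM while the
* table file is held open for writing or while the table still has
* errors recorded.
*/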

View File

@ -0,0 +1,386 @@
/*
* dmfs-table.c
*
* Copyright (C) 2001 Sistina Software
*
* This software is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GNU CC; see the file COPYING. If not, write to
* the Free Software Foundation, 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#include "dm.h"
#include "dmfs.h"
#include <linux/mm.h>
static offset_t start_of_next_range(struct dm_table *t)
{
offset_t n = 0;
if (t->num_targets) {
n = t->highs[t->num_targets - 1] + 1;
}
return n;
}
static char *dmfs_parse_line(struct dm_table *t, char *str)
{
offset_t start, size, high;
void *context;
struct target_type *ttype;
int rv = 0;
char *msg;
int pos = 0;
char target[33];
char *argv[MAX_ARGS];
int argc;
static char *err_table[] = {
"Missing/Invalid start argument",
"Missing/Invalid size argument",
"Missing target type"
};
rv = sscanf(str, "%d %d %32s%n", &start, &size, target, &pos);
if (rv < 3) {
msg = err_table[rv];
goto out;
}
str += pos;
while (*str && isspace(*str))
str++;
msg = "Gap in table";
if (start != start_of_next_range(t))
goto out;
msg = "Target type unknown";
ttype = dm_get_target_type(target);
if (ttype) {
msg = "Too many arguments";
rv = split_args(MAX_ARGS, &argc, argv, str);
if (rv < 0)
goto out;
msg = "This message should never appear (constructor error)";
rv = ttype->ctr(t, start, size, argc, argv, &context);
msg = context;
if (rv == 0) {
msg = "Error adding target to table";
high = start + (size - 1);
if (dm_table_add_target(t, high, ttype, context) == 0)
return NULL;
ttype->dtr(t, context);
}
dm_put_target_type(ttype);
}
out:
return msg;
}
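/*
* A table line has the form "<start> <size> <target type> [args...]",
* e.g. something like "0 2048 linear ..." (the arguments after the
* target name are passed to that target's constructor). Ranges must be
* contiguous: each start has to equal the previous high sector + 1,
* otherwise "Gap in table" is reported.
*/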
static int dmfs_copy(char *dst, int dstlen, char *src, int srclen, int *flag)
{
int len = min(dstlen, srclen);
char *start = dst;
while (len) {
*dst = *src++;
if (*dst == '\n')
goto end_of_line;
dst++;
len--;
}
out:
return (dst - start);
end_of_line:
dst++;
*flag = 1;
goto out;
}
static int dmfs_line_is_not_comment(char *str)
{
while (*str) {
if (*str == '#')
break;
if (!isspace(*str))
return 1;
str++;
}
return 0;
}
struct dmfs_desc {
struct dm_table *table;
struct inode *inode;
char *tmp;
loff_t tmpl;
unsigned long lnum;
};
static int dmfs_read_actor(read_descriptor_t *desc, struct page *page,
unsigned long offset, unsigned long size)
{
char *buf, *msg;
unsigned long count = desc->count, len, copied;
struct dmfs_desc *d = (struct dmfs_desc *) desc->buf;
if (size > count)
size = count;
len = size;
buf = kmap(page);
do {
int flag = 0;
copied = dmfs_copy(d->tmp + d->tmpl, PAGE_SIZE - d->tmpl - 1,
buf + offset, len, &flag);
offset += copied;
len -= copied;
if (d->tmpl + copied == PAGE_SIZE - 1)
goto line_too_long;
d->tmpl += copied;
if (flag || (len == 0 && count == size)) {
*(d->tmp + d->tmpl) = 0;
if (dmfs_line_is_not_comment(d->tmp)) {
msg = dmfs_parse_line(d->table, d->tmp);
if (msg) {
dmfs_add_error(d->inode, d->lnum, msg);
}
}
d->lnum++;
d->tmpl = 0;
}
} while (len > 0);
kunmap(page);
desc->count = count - size;
desc->written += size;
return size;
line_too_long:
printk(KERN_INFO "dmfs_read_actor: Line %lu too long\n", d->lnum);
kunmap(page);
return 0;
}
static struct dm_table *dmfs_parse(struct inode *inode, struct file *filp)
{
struct dm_table *t = NULL;
unsigned long page;
struct dmfs_desc d;
loff_t pos = 0;
int r;
if (inode->i_size == 0)
return NULL;
page = __get_free_page(GFP_NOFS);
if (page) {
r = dm_table_create(&t);
if (!r) {
read_descriptor_t desc;
desc.written = 0;
desc.count = inode->i_size;
desc.buf = (char *) &d;
d.table = t;
d.inode = inode;
d.tmp = (char *) page;
d.tmpl = 0;
d.lnum = 1;
do_generic_file_read(filp, &pos, &desc,
dmfs_read_actor);
if (desc.written != inode->i_size) {
dm_table_destroy(t);
t = NULL;
}
if (!t || (t && !t->num_targets))
dmfs_add_error(d.inode, 0,
"No valid targets found");
}
free_page(page);
}
if (!list_empty(&DMFS_I(inode)->errors)) {
dm_table_destroy(t);
t = NULL;
}
return t;
}
static int dmfs_table_release(struct inode *inode, struct file *f)
{
struct dentry *dentry = f->f_dentry;
struct inode *parent = dentry->d_parent->d_inode;
struct dmfs_i *dmi = DMFS_I(parent);
struct dm_table *table;
if (f->f_mode & FMODE_WRITE) {
down(&dmi->sem);
dmfs_zap_errors(dentry->d_parent->d_inode);
table = dmfs_parse(dentry->d_parent->d_inode, f);
if (table) {
struct mapped_device *md = dmi->md;
int need_resume = 0;
if (md->suspended == 0) {
dm_suspend(md);
need_resume = 1;
}
dm_swap_table(md, table);
if (need_resume) {
dm_resume(md);
}
}
up(&dmi->sem);
put_write_access(parent);
}
return 0;
}
static int dmfs_readpage(struct file *file, struct page *page)
{
if (!Page_Uptodate(page)) {
memset(kmap(page), 0, PAGE_CACHE_SIZE);
kunmap(page);
flush_dcache_page(page);
SetPageUptodate(page);
}
UnlockPage(page);
return 0;
}
static int dmfs_prepare_write(struct file *file, struct page *page,
unsigned offset, unsigned to)
{
void *addr = kmap(page);
if (!Page_Uptodate(page)) {
memset(addr, 0, PAGE_CACHE_SIZE);
flush_dcache_page(page);
SetPageUptodate(page);
}
SetPageDirty(page);
return 0;
}
static int dmfs_commit_write(struct file *file, struct page *page,
unsigned offset, unsigned to)
{
struct inode *inode = page->mapping->host;
loff_t pos = ((loff_t) page->index << PAGE_CACHE_SHIFT) + to;
kunmap(page);
if (pos > inode->i_size)
inode->i_size = pos;
return 0;
}
/*
* There is a small race here in that two processes might call this at
* the same time and both fail. So it's a fail-safe race :-) This should
* move into namei.c (and thus use the spinlock and do this properly)
* at some stage if we continue to use this set of functions for ensuring
* exclusive write access to the file
*/
int get_exclusive_write_access(struct inode *inode)
{
if (get_write_access(inode))
return -1;
if (atomic_read(&inode->i_writecount) != 1) {
put_write_access(inode);
return -1;
}
return 0;
}
static int dmfs_table_open(struct inode *inode, struct file *file)
{
struct dentry *dentry = file->f_dentry;
struct inode *parent = dentry->d_parent->d_inode;
struct dmfs_i *dmi = DMFS_I(parent);
if (file->f_mode & FMODE_WRITE) {
if (get_exclusive_write_access(parent))
return -EPERM;
if (!dmi->md->suspended) {
put_write_access(parent);
return -EPERM;
}
}
return 0;
}
static int dmfs_table_sync(struct file *file, struct dentry *dentry,
int datasync)
{
return 0;
}
static int dmfs_table_revalidate(struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
struct inode *parent = dentry->d_parent->d_inode;
inode->i_size = parent->i_size;
return 0;
}
struct address_space_operations dmfs_address_space_operations = {
readpage: dmfs_readpage,
writepage: fail_writepage,
prepare_write: dmfs_prepare_write,
commit_write: dmfs_commit_write
};
static struct file_operations dmfs_table_file_operations = {
llseek: generic_file_llseek,
read: generic_file_read,
write: generic_file_write,
open: dmfs_table_open,
release: dmfs_table_release,
fsync: dmfs_table_sync
};
static struct inode_operations dmfs_table_inode_operations = {
revalidate: dmfs_table_revalidate
};
struct inode *dmfs_create_table(struct inode *dir, int mode)
{
struct inode *inode = dmfs_new_inode(dir->i_sb, mode | S_IFREG);
if (inode) {
inode->i_mapping = dir->i_mapping;
inode->i_mapping->a_ops = &dmfs_address_space_operations;
inode->i_fop = &dmfs_table_file_operations;
inode->i_op = &dmfs_table_inode_operations;
}
return inode;
}

View File

@ -0,0 +1,21 @@
#ifndef LINUX_DMFS_H
#define LINUX_DMFS_H
struct dmfs_i {
struct semaphore sem;
struct mapped_device *md;
struct list_head errors;
int status;
};
#define DMFS_I(inode) ((struct dmfs_i *)(inode)->u.generic_ip)
int get_exclusive_write_access(struct inode *inode);
extern struct inode *dmfs_new_inode(struct super_block *sb, int mode);
extern struct inode *dmfs_new_private_inode(struct super_block *sb, int mode);
extern void dmfs_add_error(struct inode *inode, unsigned num, char *str);
extern void dmfs_zap_errors(struct inode *inode);
#endif /* LINUX_DMFS_H */

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,304 @@
/*
* Copyright (C) 2001 - 2003 Sistina Software (UK) Limited.
* Copyright (C) 2004 - 2005 Red Hat, Inc. All rights reserved.
*
* This file is released under the LGPL.
*/
#ifndef _LINUX_DM_IOCTL_V4_H
#define _LINUX_DM_IOCTL_V4_H
#ifdef linux
# include <linux/types.h>
#endif
#define DM_DIR "mapper" /* Slashes not supported */
#define DM_MAX_TYPE_NAME 16
#define DM_NAME_LEN 128
#define DM_UUID_LEN 129
/*
* A traditional ioctl interface for the device mapper.
*
* Each device can have two tables associated with it, an
* 'active' table which is the one currently used by io passing
* through the device, and an 'inactive' one which is a table
* that is being prepared as a replacement for the 'active' one.
*
* DM_VERSION:
* Just get the version information for the ioctl interface.
*
* DM_REMOVE_ALL:
* Remove all dm devices, destroy all tables. Only really used
* for debug.
*
* DM_LIST_DEVICES:
* Get a list of all the dm device names.
*
* DM_DEV_CREATE:
* Create a new device, neither the 'active' nor 'inactive' table
* slots will be filled. The device will be in suspended state
* after creation, however any io to the device will get errored
* since it will be out-of-bounds.
*
* DM_DEV_REMOVE:
* Remove a device, destroy any tables.
*
* DM_DEV_RENAME:
* Rename a device.
*
* DM_SUSPEND:
* This performs both suspend and resume, depending which flag is
* passed in.
* Suspend: This command will not return until all pending io to
* the device has completed. Further io will be deferred until
* the device is resumed.
* Resume: It is no longer an error to issue this command on an
* unsuspended device. If a table is present in the 'inactive'
* slot, it will be moved to the active slot, then the old table
* from the active slot will be _destroyed_. Finally the device
* is resumed.
*
* DM_DEV_STATUS:
* Retrieves the status for the table in the 'active' slot.
*
* DM_DEV_WAIT:
* Wait for a significant event to occur to the device. This
* could either be caused by an event triggered by one of the
* targets of the table in the 'active' slot, or a table change.
*
* DM_TABLE_LOAD:
* Load a table into the 'inactive' slot for the device. The
* device does _not_ need to be suspended prior to this command.
*
* DM_TABLE_CLEAR:
* Destroy any table in the 'inactive' slot (ie. abort).
*
* DM_TABLE_DEPS:
* Return a set of device dependencies for the 'active' table.
*
* DM_TABLE_STATUS:
* Return the targets status for the 'active' table.
*
* DM_TARGET_MSG:
* Pass a message string to the target at a specific offset of a device.
*
* DM_DEV_SET_GEOMETRY:
* Set the geometry of a device by passing in a string in this format:
*
* "cylinders heads sectors_per_track start_sector"
*
* Beware that CHS geometry is nearly obsolete and only provided
* for compatibility with dm devices that can be booted by a PC
* BIOS. See struct hd_geometry for range limits. Also note that
* the geometry is erased if the device size changes.
*/
/*
* All ioctl arguments consist of a single chunk of memory, with
* this structure at the start. If a uuid is specified any
* lookup (eg. for a DM_INFO) will be done on that, *not* the
* name.
*/
struct dm_ioctl {
/*
* The version number is made up of three parts:
* major - no backward or forward compatibility,
* minor - only backwards compatible,
* patch - both backwards and forwards compatible.
*
* All clients of the ioctl interface should fill in the
* version number of the interface that they were
* compiled with.
*
* All recognised ioctl commands (ie. those that don't
* return -ENOTTY) fill out this field, even if the
* command failed.
*/
uint32_t version[3]; /* in/out */
uint32_t data_size; /* total size of data passed in
* including this struct */
uint32_t data_start; /* offset to start of data
* relative to start of this struct */
uint32_t target_count; /* in/out */
int32_t open_count; /* out */
uint32_t flags; /* in/out */
uint32_t event_nr; /* in/out */
uint32_t padding;
uint64_t dev; /* in/out */
char name[DM_NAME_LEN]; /* device name */
char uuid[DM_UUID_LEN]; /* unique identifier for
* the block device */
char data[7]; /* padding or data */
};
/*
* Used to specify tables. These structures appear after the
* dm_ioctl.
*/
struct dm_target_spec {
uint64_t sector_start;
uint64_t length;
int32_t status; /* used when reading from kernel only */
/*
* Location of the next dm_target_spec.
* - When specifying targets on a DM_TABLE_LOAD command, this value is
* the number of bytes from the start of the "current" dm_target_spec
* to the start of the "next" dm_target_spec.
* - When retrieving targets on a DM_TABLE_STATUS command, this value
* is the number of bytes from the start of the first dm_target_spec
* (that follows the dm_ioctl struct) to the start of the "next"
* dm_target_spec.
*/
uint32_t next;
char target_type[DM_MAX_TYPE_NAME];
/*
* Parameter string starts immediately after this object.
* Be careful to add padding after string to ensure correct
* alignment of subsequent dm_target_spec.
*/
};
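/*
* Layout example for a DM_TABLE_LOAD buffer holding two targets: the
* first dm_target_spec is followed by its parameter string (padded for
* alignment), and its 'next' field gives the byte offset from the start
* of that spec to the second spec, which carries its own parameter
* string in the same way.
*/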
/*
* Used to retrieve the target dependencies.
*/
struct dm_target_deps {
uint32_t count; /* Array size */
uint32_t padding; /* unused */
uint64_t dev[0]; /* out */
};
/*
* Used to get a list of all dm devices.
*/
struct dm_name_list {
uint64_t dev;
uint32_t next; /* offset to the next record from
the _start_ of this */
char name[0];
};
/*
* Used to retrieve the target versions
*/
struct dm_target_versions {
uint32_t next;
uint32_t version[3];
char name[0];
};
/*
* Used to pass message to a target
*/
struct dm_target_msg {
uint64_t sector; /* Device sector */
char message[0];
};
/*
* If you change this make sure you make the corresponding change
* to dm-ioctl.c:lookup_ioctl()
*/
enum {
/* Top level cmds */
DM_VERSION_CMD = 0,
DM_REMOVE_ALL_CMD,
DM_LIST_DEVICES_CMD,
/* device level cmds */
DM_DEV_CREATE_CMD,
DM_DEV_REMOVE_CMD,
DM_DEV_RENAME_CMD,
DM_DEV_SUSPEND_CMD,
DM_DEV_STATUS_CMD,
DM_DEV_WAIT_CMD,
/* Table level cmds */
DM_TABLE_LOAD_CMD,
DM_TABLE_CLEAR_CMD,
DM_TABLE_DEPS_CMD,
DM_TABLE_STATUS_CMD,
/* Added later */
DM_LIST_VERSIONS_CMD,
DM_TARGET_MSG_CMD,
DM_DEV_SET_GEOMETRY_CMD
};
#define DM_IOCTL 0xfd
#define DM_VERSION _IOWR(DM_IOCTL, DM_VERSION_CMD, struct dm_ioctl)
#define DM_REMOVE_ALL _IOWR(DM_IOCTL, DM_REMOVE_ALL_CMD, struct dm_ioctl)
#define DM_LIST_DEVICES _IOWR(DM_IOCTL, DM_LIST_DEVICES_CMD, struct dm_ioctl)
#define DM_DEV_CREATE _IOWR(DM_IOCTL, DM_DEV_CREATE_CMD, struct dm_ioctl)
#define DM_DEV_REMOVE _IOWR(DM_IOCTL, DM_DEV_REMOVE_CMD, struct dm_ioctl)
#define DM_DEV_RENAME _IOWR(DM_IOCTL, DM_DEV_RENAME_CMD, struct dm_ioctl)
#define DM_DEV_SUSPEND _IOWR(DM_IOCTL, DM_DEV_SUSPEND_CMD, struct dm_ioctl)
#define DM_DEV_STATUS _IOWR(DM_IOCTL, DM_DEV_STATUS_CMD, struct dm_ioctl)
#define DM_DEV_WAIT _IOWR(DM_IOCTL, DM_DEV_WAIT_CMD, struct dm_ioctl)
#define DM_TABLE_LOAD _IOWR(DM_IOCTL, DM_TABLE_LOAD_CMD, struct dm_ioctl)
#define DM_TABLE_CLEAR _IOWR(DM_IOCTL, DM_TABLE_CLEAR_CMD, struct dm_ioctl)
#define DM_TABLE_DEPS _IOWR(DM_IOCTL, DM_TABLE_DEPS_CMD, struct dm_ioctl)
#define DM_TABLE_STATUS _IOWR(DM_IOCTL, DM_TABLE_STATUS_CMD, struct dm_ioctl)
#define DM_LIST_VERSIONS _IOWR(DM_IOCTL, DM_LIST_VERSIONS_CMD, struct dm_ioctl)
#define DM_TARGET_MSG _IOWR(DM_IOCTL, DM_TARGET_MSG_CMD, struct dm_ioctl)
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
#define DM_VERSION_MINOR 13
#define DM_VERSION_PATCHLEVEL 0
#define DM_VERSION_EXTRA "-ioctl (2007-10-18)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */
#define DM_SUSPEND_FLAG (1 << 1) /* In/Out */
#define DM_PERSISTENT_DEV_FLAG (1 << 3) /* In */
/*
* Flag passed into ioctl STATUS command to get table information
* rather than current status.
*/
#define DM_STATUS_TABLE_FLAG (1 << 4) /* In */
/*
* Flags that indicate whether a table is present in either of
* the two table slots that a device has.
*/
#define DM_ACTIVE_PRESENT_FLAG (1 << 5) /* Out */
#define DM_INACTIVE_PRESENT_FLAG (1 << 6) /* Out */
/*
* Indicates that the buffer passed in wasn't big enough for the
* results.
*/
#define DM_BUFFER_FULL_FLAG (1 << 8) /* Out */
/*
* This flag is now ignored.
*/
#define DM_SKIP_BDGET_FLAG (1 << 9) /* In */
/*
* Set this to avoid attempting to freeze any filesystem when suspending.
*/
#define DM_SKIP_LOCKFS_FLAG (1 << 10) /* In */
/*
* Set this to suspend without flushing queued ios.
*/
#define DM_NOFLUSH_FLAG (1 << 11) /* In */
#endif /* _LINUX_DM_IOCTL_H */
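/*
* Minimal userspace sketch (this assumes the control node is
* /dev/<DM_DIR>/control, i.e. /dev/mapper/control here):
*
*	struct dm_ioctl io;
*	int fd = open("/dev/mapper/control", O_RDWR);
*
*	memset(&io, 0, sizeof(io));
*	io.version[0] = DM_VERSION_MAJOR;
*	io.version[1] = DM_VERSION_MINOR;
*	io.version[2] = DM_VERSION_PATCHLEVEL;
*	io.data_size = sizeof(io);
*	io.data_start = sizeof(io);
*
*	if (ioctl(fd, DM_VERSION, &io) == 0)
*		printf("driver %u.%u.%u\n",
*		       io.version[0], io.version[1], io.version[2]);
*
* Commands that return data (DM_LIST_DEVICES, DM_TABLE_STATUS, ...)
* need data_size larger than sizeof(struct dm_ioctl); if the buffer is
* still too small the kernel sets DM_BUFFER_FULL_FLAG in io.flags.
*/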

View File

@ -0,0 +1,137 @@
dm_lib_release
dm_lib_exit
dm_driver_version
dm_create_dir
dm_fclose
dm_get_library_version
dm_log
dm_log_init
dm_log_init_verbose
dm_task_create
dm_task_destroy
dm_task_set_name
dm_task_set_uuid
dm_task_get_driver_version
dm_task_get_info
dm_task_get_deps
dm_task_get_name
dm_task_get_names
dm_task_get_versions
dm_task_get_uuid
dm_task_get_read_ahead
dm_task_set_ro
dm_task_set_newname
dm_task_set_event_nr
dm_task_set_major
dm_task_set_minor
dm_task_set_sector
dm_task_set_message
dm_task_set_uid
dm_task_set_gid
dm_task_set_mode
dm_task_set_read_ahead
dm_task_suppress_identical_reload
dm_task_add_target
dm_task_no_flush
dm_task_no_open_count
dm_task_skip_lockfs
dm_task_update_nodes
dm_task_run
dm_get_next_target
dm_set_dev_dir
dm_dir
dm_format_dev
dm_tree_create
dm_tree_free
dm_tree_add_dev
dm_tree_add_new_dev
dm_tree_node_get_name
dm_tree_node_get_uuid
dm_tree_node_get_info
dm_tree_node_get_context
dm_tree_node_num_children
dm_tree_node_num_parents
dm_tree_find_node
dm_tree_find_node_by_uuid
dm_tree_next_child
dm_tree_next_parent
dm_tree_deactivate_children
dm_tree_activate_children
dm_tree_preload_children
dm_tree_suspend_children
dm_tree_children_use_uuid
dm_tree_node_add_snapshot_origin_target
dm_tree_node_add_snapshot_target
dm_tree_node_add_error_target
dm_tree_node_add_zero_target
dm_tree_node_add_linear_target
dm_tree_node_add_striped_target
dm_tree_node_add_mirror_target
dm_tree_node_add_mirror_target_log
dm_tree_node_add_target_area
dm_tree_node_set_read_ahead
dm_tree_skip_lockfs
dm_tree_use_no_flush_suspend
dm_is_dm_major
dm_mknodes
dm_malloc_aux
dm_malloc_aux_debug
dm_strdup_aux
dm_free_aux
dm_realloc_aux
dm_dump_memory_debug
dm_bounds_check_debug
dm_pool_create
dm_pool_destroy
dm_pool_alloc
dm_pool_alloc_aligned
dm_pool_empty
dm_pool_free
dm_pool_begin_object
dm_pool_grow_object
dm_pool_end_object
dm_pool_abandon_object
dm_pool_strdup
dm_pool_strndup
dm_pool_zalloc
dm_bitset_create
dm_bitset_destroy
dm_bit_union
dm_bit_get_first
dm_bit_get_next
dm_hash_create
dm_hash_destroy
dm_hash_wipe
dm_hash_lookup
dm_hash_insert
dm_hash_remove
dm_hash_lookup_binary
dm_hash_insert_binary
dm_hash_remove_binary
dm_hash_get_num_entries
dm_hash_iter
dm_hash_get_key
dm_hash_get_data
dm_hash_get_first
dm_hash_get_next
dm_set_selinux_context
dm_task_set_geometry
dm_split_lvm_name
dm_split_words
dm_snprintf
dm_basename
dm_asprintf
dm_report_init
dm_report_object
dm_report_output
dm_report_free
dm_report_get_private
dm_report_field_string
dm_report_field_int
dm_report_field_int32
dm_report_field_uint32
dm_report_field_uint64
dm_report_field_set_value
dm_report_set_output_field_name_prefix
dm_regex_create
dm_regex_match
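
The exported entry points above are consumed through the dm_task interface; a minimal sketch of the usual life cycle follows (not part of the export list itself — the device name "home" is a placeholder).

/* Hypothetical example, not part of this commit. */
#include <stdio.h>
#include <libdevmapper.h>

int main(void)
{
	struct dm_task *dmt;
	struct dm_info info;

	/* Every operation is wrapped in a dm_task of a given type. */
	if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
		return 1;

	if (!dm_task_set_name(dmt, "home"))	/* placeholder device name */
		goto out;

	/* dm_task_run() issues the corresponding ioctl (DM_DEV_STATUS here). */
	if (!dm_task_run(dmt))
		goto out;

	if (dm_task_get_info(dmt, &info) && info.exists)
		printf("%s: %d target(s), %s\n", dm_task_get_name(dmt),
		    info.target_count,
		    info.suspended ? "suspended" : "active");

out:
	dm_task_destroy(dmt);
	dm_lib_release();
	dm_lib_exit();
	return 0;
}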


@ -0,0 +1,99 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
interface = @interface@
SOURCES =\
	datastruct/bitset.c \
	datastruct/hash.c \
	libdm-common.c \
	libdm-file.c \
	libdm-deptree.c \
	libdm-string.c \
	libdm-report.c \
	mm/dbg_malloc.c \
	mm/pool.c \
	regex/matcher.c \
	regex/parse_rx.c \
	regex/ttree.c \
	$(interface)/libdm-iface.c
INCLUDES = -I$(interface)
LIB_STATIC = $(interface)/libdevmapper.a
ifeq ("@LIB_SUFFIX@","dylib")
LIB_SHARED = $(interface)/libdevmapper.dylib
else
LIB_SHARED = $(interface)/libdevmapper.so
endif
VERSIONED_SHLIB = libdevmapper.$(LIB_SUFFIX).$(LIB_VERSION)
DEFS += -DDM_DEVICE_UID=@DM_DEVICE_UID@ -DDM_DEVICE_GID=@DM_DEVICE_GID@ \
	-DDM_DEVICE_MODE=@DM_DEVICE_MODE@
include ../make.tmpl
.PHONY: install_dynamic install_static install_include \
	install_ioctl install_ioctl_static \
	install_pkgconfig
INSTALL_TYPE = install_dynamic
ifeq ("@STATIC_LINK@", "yes")
INSTALL_TYPE += install_static
endif
ifeq ("@PKGCONFIG@", "yes")
INSTALL_TYPE += install_pkgconfig
endif
install: $(INSTALL_TYPE) install_include
install_include:
	$(INSTALL) -D $(OWNER) $(GROUP) -m 444 libdevmapper.h \
		$(includedir)/libdevmapper.h

install_dynamic: install_@interface@
	$(LN_S) -f libdevmapper.$(LIB_SUFFIX).$(LIB_VERSION) \
		$(libdir)/libdevmapper.$(LIB_SUFFIX)

install_static: install_@interface@_static
	$(LN_S) -f libdevmapper.a.$(LIB_VERSION) $(libdir)/libdevmapper.a

install_ioctl: ioctl/libdevmapper.$(LIB_SUFFIX)
	$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< \
		$(libdir)/libdevmapper.$(LIB_SUFFIX).$(LIB_VERSION)

install_pkgconfig:
	$(INSTALL) -D $(OWNER) $(GROUP) -m 444 libdevmapper.pc \
		$(usrlibdir)/pkgconfig/devmapper.pc

install_ioctl_static: ioctl/libdevmapper.a
	$(INSTALL) -D $(OWNER) $(GROUP) -m 555 $(STRIP) $< \
		$(libdir)/libdevmapper.a.$(LIB_VERSION)

$(VERSIONED_SHLIB): %.$(LIB_SUFFIX).$(LIB_VERSION): $(interface)/%.$(LIB_SUFFIX)
	rm -f $@
	$(LN_S) $< $@

.PHONY: distclean_lib distclean

distclean_lib:
	$(RM) libdevmapper.pc

distclean: distclean_lib
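
After "make install" has placed libdevmapper.h, the versioned shared library and the devmapper.pc file, a consumer can be built against them, for example with "cc prog.c $(pkg-config --cflags --libs devmapper)" when pkg-config support is enabled above. A minimal consumer sketch, not part of this commit:

/* Hypothetical example: report the library and driver versions. */
#include <stdio.h>
#include <libdevmapper.h>

int main(void)
{
	char libver[64], drvver[64];

	/* Version of the userspace library this binary is linked against. */
	if (dm_get_library_version(libver, sizeof(libver)))
		printf("libdevmapper %s\n", libver);

	/* Version reported by the in-kernel device-mapper driver, if present. */
	if (dm_driver_version(drvver, sizeof(drvver)))
		printf("driver %s\n", drvver);

	dm_lib_release();
	dm_lib_exit();
	return 0;
}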

Some files were not shown because too many files have changed in this diff.