<!DOCTYPE doctype PUBLIC "-//w3c//dtd html 4.0 transitional//en">
<html>
<head>
       
  <meta http-equiv="Content-Type"
 content="text/html; charset=iso-8859-1">
       
  <meta name="GENERATOR"
 content="Mozilla/4.77 [en] (X11; U; Linux 2.4.8 i686) [Netscape]">
  <title>Faq'n Tips</title>
</head>
  <body>
   
<center><b>Faq'n Tips</b></center>
   
<p><br>
  </p>
   
<ol>
    <li> <a href="#FAQ">Hey!&nbsp; This doesn't look like a FAQ!&nbsp; What 
gives?</a></li>
   <li><a href="#mailinglists">Are there mailing lists for Linux-HA?</a><br>
   </li>
    <li> <a href="#what_is_it">What is a cluster?</a></li>
    <li> <a href="#res_scr">What is a resource script?</a></li>
    <li> <a href="#mon">How to monitor various resources?</a>. 	If one of
my resources stops working heartbeat doesn't do anything 	unless the server
crashes.  How do I monitor resources with heartbeat? 	</li>
    <li><a href="#ipfail">If my one of my ethernet connections goes away
(cable severance, NIC failure, locusts), but my current primary node (the
one with the services) is otherwise fine, no one can get to my services and
I want to fail them over to my other cluster node. &nbsp;Is there a way to
do this?</a><br>
    </li>
    <li> <a href="#nettools">Every time my machine releases an IP alias,
it loses the whole interface (i.e. eth0)!&nbsp; How do I fix this?</a></li>
    <li> <a href="#manyIPs">I want a lot of IP addresses as resources (more 
than 8).&nbsp; What's the best way?</a></li>
    <li> <a href="#serial">The documentation indicates that a serial line
is a good idea, is there really a drawback to using two ethernet connections?</a></li>
    <li> <a href="#firewall">How to use heartbeat with ipchains firewall?</a></li>
    <li> <a href="#nolocalheartbeat">I got this message 	<tt>ERROR: No local
heartbeat. Forcing shutdown</tt> 	and then heartbeat shut itself down for
no reason at all!</a> 	</li>
    <li> <a href="#heavy_load">How to tune heartbeat on heavily loaded system 
	to avoid split-brain?</a></li>
    <li><a href="#serialerr">When heartbeat starts up I get this error message 
	in my logs:<br>
 	&nbsp; <tt>WARN: process_clustermsg: node [&lt;hostname&gt;] failed 	authentication</tt></a><br>
 	<a href="#serialerr">What does this mean?</a>   </li>
      <li><a href="#authkeys">When I try to start heartbeat i receive message:</a> 
	<a href="#authkeys"><tt>"Starting High-Availability services: Heartbeat 	failure
[rc=1]. Failed.</tt></a><br>
 	<a href="#authkeys">and there is nothing in any of the log files and no 
	messages. What is wrong ?</a> </li>
  <li>      <a href="#multiple_clusters">How to run multiple clusters on
same network segment ?</a></li>
    <li> <a href="#CVS">How to get latest CVS version of heartbeat ?</a></li>
    <li> <a href="#other_os">Heartbeat on other OSs.</a></li>
    <li><a href="#RPM">When I try to install the linux-ha.org heartbeat RPMs, 
they complain of dependencies from packages I already have installed!&nbsp; 
Now what?</a></li>
    <li><a href="#meatware">I don't want heartbeat to fail over the cluster 
automatically. &nbsp;How can I require human confirmation before failing over?</a></li>
   <li><a href="#STONITH">What is STONITH? &nbsp;And why might I need it?</a><br>
  </li>
   <li><a href="#config_stonith">How do I figure out what STONITH devices 
	are available, and how to configure them?</a><br>
  </li>
    <li><a href="#self_fence">I want to use a shared disk, but I don't want 
	to use STONITH. &nbsp;Any recommendations?</a><br>
    </li>
    <li><a href="#active_active">Can heartbeat be configured in an 	active/active
configuration?</a> 	If so, how do I do this, since the haresources file
is supposed 	to be the same on each box so I do not know how this could be
done.&nbsp;</li>
  <li><a href="#iftrunc">Why are my interface names getting truncated when
they're brought up and down?</a><br>
  </li>
    <li><a href="#why_auto_failback">What is this auto_failback parameter? 
	What happened to the old nice_failback parameter?</a>    </li>
    <li><a href="#auto_failback_upgrade">I am upgrading from a version of 
	Linux-HA which supported nice_failback to one that supports 	auto_failback.</a>
 How to I avoid a flash cut in this upgrade?    </li>
    <li> <a href="#last_hope">If nothing helps, what should I do ?</a></li>
   <li><a href="#patches">I want to submit a patch, how do I do that?</a><br>
   </li>
   
</ol>
   
<hr width="100%"> <br>
  &nbsp;  
<ol>
    <li> <a name="FAQ">Quit your bellyachin'!</a>&nbsp; We needed a "catch-all" 
document to supply useful information in a way that was easily referenced 
and would grow without a lot of work.&nbsp; It's closer to a FAQ than anything 
else.</li>
  <br>
   <li><a name="mailinglists">Yes! &nbsp;There are two public mailing lists
for Linux-HA.</a> &nbsp;You can find out about them by visiting <a
 href="http://linux-ha.org/contact/">http://linux-ha.org/contact/</a>.<br>
   </li>
  <br>
    <li> <a name="what_is_it">HA (High availability Cluster)</a> - 
 A cluster that allows a host (or hosts) to become Highly Available.  This
means that if one node goes down (or a service on that node goes down) another
node can pick up the service or node and take over from the failed machine.
    <a href="http://www.linux-ha.org">http://linux-ha.org</a> <br>
  Computing Cluster - This is what a Beowulf cluster is. It allows distributed 
computing over off the shelf components. In this case it is usually cheap 
IA32 machines. <a href="http://www.beowulf.org">http://www.beowulf.org/</a> 
    <br>
  Load balancing clusters - This is what the Linux Virtual Server project
does. In this scenario you have one machine with load balances requests to
a certain server (apache for example) over a farm of servers. <a
 href="http://www.linuxvirtualserver.org">www.linuxvirtualserver.org</a> 
   <br>
  All of these sites have howtos etc. on them. For a general overview on
clustering under Linux, look at the Clustering HOWTO.</li>
  <br>
    <li> <a name="res_scr">Resource scripts</a> are basically (extended)
System V init scripts. They must support stop, start, and status operations.&nbsp; 
In the future we will also add support for a "monitor" operation for monitoring 
services as you requested. The IPaddr script implements this new "monitor" 
operation now (but heartbeat doesn't use that function of it). For more info 
see Resource HOWTO.</li>
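    <p>Here is a hedged, minimal sketch of a resource script (the
<tt>myservice</tt> name, pid file path, and daemon are hypothetical
placeholders, not anything shipped with heartbeat):</p>
    <pre>
#!/bin/sh
# /etc/ha.d/resource.d/myservice -- illustrative skeleton only
case "$1" in
  start)
    # start your daemon here, e.g. /usr/sbin/myserviced
    echo "Starting myservice"
    ;;
  stop)
    # stop your daemon here
    echo "Stopping myservice"
    ;;
  status)
    # print "running" or "stopped" so callers can parse the state
    if [ -f /var/run/myserviced.pid ]; then
      echo "running"
    else
      echo "stopped"
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
exit 0
    </pre>
    </li>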
  <br>
    <li> <a name="mon">Heartbeat itself was not designed for monitoring</a>
 	various resources. 	If you need to monitor some resources (for example,
availability 	of WWW server) you need some third party software. 	Mon is
a reasonable solution.     <br>
     
    <ol type="A">
 	<li>Get Mon  from <a href="http://kernel.org/software/mon/">http://kernel.org/software/mon/</a>. 
 	</li>
      <li>Get all the required modules listed. You can find them at the nearest
mirror or at the CPAN archive (www.cpan.org). I am not very familiar with
Perl, so I downloaded them from the CPAN archive as .tar.gz packages and installed
them in the usual way (perl Makefile.PL &amp;&amp; make &amp;&amp; make test &amp;&amp; 
make install).           </li>
      <li>Mon is software for monitoring different network resources. It
can ping computers, connect to various ports, monitor WWW, MySQL etc. If
some resource stops working, it triggers the appropriate scripts.        
  </li>
      <li>Unpack mon in some directory. The best starting point is the README file. 
Complete documentation is in &lt;dir&gt;/doc, where &lt;dir&gt; is the place
you unpacked the mon package.           </li>
      <li>For a fast start, do the following steps:         
        <ol type="a">
  	<li>copy all subdirs found in &lt;dir&gt; to /usr/lib/mon </li>
  	<li>create dir /etc/mon </li>
  	<li>copy auth.cf from &lt;dir&gt;/etc to /etc/mon</li>
 	
        </ol>
           
        <p>Now mon is prepared to work. You need to create your own mon.cf
file, where you point to the resources mon should watch and the actions
mon will take when a resource fails and when it becomes available again
(a sample fragment is sketched after this list).
&nbsp; All monitoring scripts are in /usr/lib/mon/mon.d/. At the beginning
of every script you can find an explanation of how to use it. <br>
  All alert scripts are placed in /usr/lib/mon/alert.d/. Those are the scripts
triggered when something goes wrong. If you are using ipvs, you can find
scripts for adding and removing servers from an ipvs list on its homepage
(www.linuxvirtualserver.org).</p>
    </li>
    </ol>
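    <p>As a hedged illustration, a mon.cf fragment for watching a web server
might look like the sketch below (the hostgroup name, address, and alert
address are placeholders; check the mon documentation for the exact syntax
your version expects):</p>
    <pre>
hostgroup webservers 10.10.10.2

watch webservers
    service http
        interval 1m
        monitor http.monitor
        period wd {Sun-Sat}
            alert mail.alert admin@example.com
            alertevery 1h
    </pre>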
    </li>
  <br>
     <li>          
    <p><a name="ipfail">Yes! &nbsp;Use the ipfail plug-in.</a> &nbsp;For
    each interface you wish to monitor, specify one or more "ping" nodes or
    "ping groups" in your configuration.  &nbsp;Each node in your cluster
    will monitor these ping nodes or groups. &nbsp;Should one node detect a
    failure in one of these ping nodes, it will contact the other node in
    order to determine whether it or the ping node has the problem.
    &nbsp;If the cluster node has the problem, it will try to failover its
    resources (if it has any).</p>

    <p>To use ipfail, you will need to add the following to your /etc/ha.d/ha.cf 
files:</p>
    <pre>
respawn hacluster /usr/lib/heartbeat/ipfail
ping &lt;IPaddr1&gt; &lt;IPaddr2&gt; ... &lt;IPaddrN&gt;
    </pre>
      
    <p>See <a href="http://pheared.net/devel/c/ipfail/">Kevin's      documentation</a>
for more details on the concepts.           </p>
    <p>IPaddr1..N are your ping nodes. &nbsp;<span
 style="font-weight: bold;">NOTE: &nbsp;</span>ipfail requires the auto_failback 
	option to be set to on or off (not legacy).<br>
      </p>
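    <p>If you want several ping targets to be treated as one logical node
(considered up as long as any member answers), heartbeat also supports ping
groups. A hedged sketch, with placeholder addresses:</p>
    <pre>
ping_group group1 10.10.10.254 10.10.10.253
    </pre>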
    </li>
  <br>
    <li>          
    <p><a name="nettools">This isn't a problem with heartbeat</a>, but rather 
is caused by various versions of net-tools.&nbsp; Upgrade to the most recent 
version of net-tools and it will go away.&nbsp; You can test it with ifconfig 
manually.</p>
    </li>
  <br>
    <li> <a name="manyIPs">Instead of failing over many IP addresses, just 
fail over one router address</a>.&nbsp; On your router, do the equivalent 
of "route add -net x.x.x.0/24 gw x.x.x.2", where x.x.x.2 is the cluster IP 
address controlled by heartbeat.&nbsp; Then, make every address within x.x.x.0/24 
that you wish to fail over a permanent alias of lo0 on BOTH cluster nodes.&nbsp; 
This is done via "ifconfig lo:2 x.x.x.3 netmask 255.255.255.255 -arp" etc...</li>
  <br>
     <li> <a name="serial">If</a> anything makes your ethernet / IP stack
fail, you may lose both connections. You definitely should run the cables
differently, depending on how important your data is...</li>
  <br>
     <li> <a name="firewall">To make heartbeat work with ipchains</a>, you
must accept incoming and outgoing traffic on 694 UDP port. Add something
like     <br>
  /sbin/ipchains -A output -i ethN -p udp -s &lt;source_IP&gt; -d &lt;dest_IP&gt;&nbsp; 
-j ACCEPT <br>
  /sbin/ipchains -A input -i ethN -p udp -s &lt;source_IP&gt; -d &lt;dest_IP&gt;&nbsp; 
-j ACCEPT</li>
  <br>
    <li> <a name="nolocalheartbeat"> 	This can be caused by one of two things:
    <ul>
 	<li>System under heavy I/O load, or</li>
 	<li>Kernel bug.</li>
 	
    </ul>
    </a> For how to deal with the first cause (heavy load), please
read the answer to the <a href="#heavy_load">next FAQ item</a>.
    <p> If your system was not under moderate to heavy load when it got
this message, you probably hit the kernel bug. The 2.4.18 Linux kernel
had a bug which could cause it to not schedule heartbeat for very
long periods of time when the system was idle, or nearly so.  If this is
the case, you need to get a kernel that isn't broken.      </p>
  </li>
  <li> <a name="heavy_load"> 	"No local heartbeat" or "Cluster node returning
after partition" 	under heavy load is typically caused by too small a deadtime
interval. 	</a> 	Here is suggestion for how to tune deadtime: <br>
 	
    <ul>
  	<li>Set deadtime to 60 seconds or higher </li>
  	<li>Set warntime to whatever you *want* your deadtime to be. </li>
  	<li>Run your system under heavy load for a few weeks. </li>
  	<li> Look at your logs for the longest time either system went 		without
hearing a heartbeat.</li>
 	<li>Set your deadtime to 1.5-2 times that amount.&nbsp;</li>
 	<li>Set warntime to a little less than that amount. </li>
 	<li>Continue to monitor logs for warnings about long heartbeat times. 		If
you don't do this, you may get 		"Cluster node ... returning after partition" 
		which will cause heartbeat to restart on all machines 		in the cluster.
 This will almost certainly annoy you. 		</li>
 	
    </ul>
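    <p>As a hedged illustration, the timing directives in ha.cf might look
like this while you are measuring (the values are placeholders to be tuned
as described above):</p>
    <pre>
keepalive 2     # seconds between heartbeats
warntime 30     # warn about late heartbeats after 30 seconds
deadtime 60     # declare a node dead after 60 seconds
initdead 120    # extra allowance while the cluster first boots
    </pre>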
 	Adding memory to the machine generally helps. Limiting the workload on the
machine generally helps. Newer versions of heartbeat are better about
this than pre-1.0 versions.</li>
  <br>
     <li><a name="serialerr">It's common to get a single mangled packet 	on
your serial interface when heartbeat starts up.</a> 	&nbsp;This message is an indication
that we received a mangled packet. &nbsp;It's harmless in 	this scenario.  If it happens continually,
there is probably 	something else going on. 	<br>
    </li>
  <br>
     <li> <a name="authkeys">It's probably a permissions problem on authkeys.</a>&nbsp; 
It wants it to be read only mode (400, 600 or 700).&nbsp; Depending on where 
and when it discovers the problem, the message will wind up in different places.
    <br>
  But, it tends to be in          
    <ol>
        <li>stdout/stderr</li>
        <li>wherever you specified in your setup</li>
        <li>/var/log/messages</li>
     
    </ol>
  Newer releases are better about also writing startup messages to stderr 
in addition to wherever you have configured them to go.
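    <p>A hedged, minimal example (the sha1 key is a placeholder; choose your
own secret and keep it identical on both nodes):</p>
    <pre>
# /etc/ha.d/authkeys
auth 1
1 sha1 MySecretClusterKey
    </pre>
    <p>Then make it owner-only:</p>
    <pre>
chmod 600 /etc/ha.d/authkeys
    </pre>
    </li>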
  <br>
     <li> <a name="multiple_clusters">Use multicast</a> and give each its
own multicast group. If you need to/want to use broadcast, then run each
cluster on different port numbers. &nbsp;An example of a configuration using
multicast would be to have the following line in your ha.cf file:<br>
  &nbsp; &nbsp; &nbsp;<span style="font-style: italic;">mcast eth0 224.1.2.3 
694 1 0</span><br>
  This sets eth0 as the interface over which to send the multicast, 224.1.2.3 
as the multicast group (will be same on each node in the same cluster), udp 
port 694 (heartbeat default), time to live of 1 (limit multicast to local 
network segment and not propagate through routers), multicast loopback disabled 
(typical).</li>
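    <p>For the broadcast alternative, a hedged sketch: give each cluster its
own UDP port in ha.cf (the same on both nodes of that cluster, with udpport
set before the bcast line):</p>
    <pre>
# nodes of cluster A:
udpport 694
bcast eth0

# nodes of cluster B:
udpport 695
bcast eth0
    </pre>
    </li>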
  <br>
     <li><a name="CVS">There is a CVS repository for Linux-HA.</a> You can
find 	it at cvs.linux-ha.org.&nbsp; Read-only access is via login guest, 	password
guest, module name linux-ha. 	More details are to be found in the <a
 href="http://lists.community.tummy.com/pipermail/linux-ha-dev/1999-October/000212.html">announcement
email</a>.&nbsp; 	It is also available through the web using viewcvs at <a
 href="http://cvs.linux-ha.org/viewcvs/viewcvs.cgi/linux-ha/">http://cvs.linux-ha.org/viewcvs/viewcvs.cgi/linux-ha/</a></li>
  <br>
     <li> <a name="other_os">Heartbeat now uses use automake</a>        
and is generally quite portable at this point.  Join 	the Linux-HA-dev mailing
list if you want to help port it 	to your favorite platform. 	</li>
  <br>
     <li><a name="RPM">Due to distribution RPM package name differences, 	this
was unavoidable.</a>&nbsp; If you're not using STONITH, 	use the "--nodeps"
option with rpm. &nbsp;Otherwise, use the 	heartbeat source to build your
own RPMs.&nbsp; You'll have the 	added dependencies of autoconf &gt;= 2.53
and libnet 	(get it from <a href="http://www.packetfactory.net/libnet">http://www.packetfactory.net/libnet</a>).&nbsp;
 	Use the heartbeat source RPM (preferred) or unpack the heartbeat 	source
and from the top directory, run "./ConfigureMe rpm".&nbsp; 	This will build
RPMS and place them where it's customary for your 	particular distro. &nbsp;It
may even tell you if you are missing 	some other required packages!</li>
  <br>
     <li><a name="meatware">You configure a "meatware" STONITH device into 
	the ha.cf file.</a> &nbsp;The meatware STONITH device asks the 	operator
to go power reset the machine which has gone down.&nbsp; 	When the operator
has reset the machine he or she then issues a 	command to tell the meatware
STONITH plug-in that the reset has  	taken place. &nbsp;Heartbeat will wait
indefinitely until 	the operator acknowledges the reset has occurred. &nbsp;During 
	this time, the resources will not be taken over, and 	nothing will happen.</li>
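    <p>Illustrative only (run <tt>stonith -h</tt> for the authoritative
parameters of the meatware plug-in; the node name below is a placeholder):</p>
    <pre>
# in /etc/ha.d/ha.cf:
stonith_host * meatware &lt;nodename&gt;

# after you have manually power-cycled the dead node, acknowledge it:
meatclient -c &lt;nodename&gt;
    </pre>
    </li>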
  <br>
    <li><a name="STONITH">STONITH is a form of fencing</a>, and is an acronym 
	standing for Shoot The Other Node In The Head. &nbsp;It allows one 	node
in the cluster to reset the other. &nbsp;Fencing is essential 	if you're
using shared disks, in order to protect the integrity of 	the disk data.
&nbsp;Heartbeat supports STONITH fencing, and 	resources which are self-fencing.
&nbsp;You need to configure 	some kind of fencing whenever you have a cluster
resource 	which might be permanently damaged if both machines tried to make 
	it active at the same time. &nbsp;When in doubt check with the 	Linux-HA
mailing list.    </li>
  <br>
     <li><a name="config_stonith">To get the list of supported STONITH 	devices</a>,
issue this command: 	<br>
    <tt>stonith -L</tt><br>
 	To get all the gory details on exactly what these STONITH device 	names
mean, and how to configure them, issue this command: 	<br>
    <tt>stonith -h</tt><br>
 	</li>
     <li><a name="self_fence">This is not something which heartbeat supports 
	directly, however, there are a few kinds of resources which 	are "self-fencing".&nbsp;</a> 
	This means that activating the resource causes it to fence itself 	off from
the other node naturally. &nbsp;Since this fencing 	happens in the resource
agent, heartbeat doesn't know (and 	doesn't have to know) about it. &nbsp;Two
possible hardware candidates 	are IBM's ServeRAID-4 RAID controllers and
ICP 	Vortex RAID controllers - but do your homework!!! &nbsp; 	When in doubt
check with the mailing list.    </li>
  <br>
     <li><a name="active_active"> 	<b>Yes</b>, heartbeat has supported active/active
configurations since 	its first release.</a> 	The key to configuring active/active
clusters is to understand 	that each resource group in the haresources file
is preceded by 	the name of the server which is normally supposed to run
that service. 	When in a "auto_failback yes (or legacy)" 	(or old-style "nice_failback
off") configuration, when a cluster node 	comes up, it will take over any
resources for which it is listed 	as the "normal master" in the haresources
file.  Below is an example 	of how to do this for an apache/mysql configuration. 
    <pre>
server1  10.10.10.1  mysql
server2  10.10.10.2  apache
    </pre>
 	In this case, the IP address 10.10.10.1 should be replaced with the IP
address at which you want to contact the mysql server, and 10.10.10.2 should be
replaced with the IP address you want people to use to contact the web server. 
	Any time server1 is up, it will run the mysql service.  Any time server2
is up, it will run the apache service. If both server1 and server2 are up,
both servers will be active. Note that this is incompatible with the old 
	<kbd>nice_failback on</kbd> parameter. With the new release which supports
    <kbd>hb_standby foreign</kbd>, you can manually fail back into an active/active
configuration if you have <kbd>auto_failback off</kbd>.  This allows administrators 
	more flexibility to fail back in a more customized way at safer
or more convenient times.<br>
    <br>
  </li>
  <li><a name="iftrunc"></a>Heartbeat was written to use ifconfig to manage
its interfaces. &nbsp;That's nice for portability for other platforms, but
for some reasons ifconfig truncates interface names. &nbsp;If you want to
have fewer than 10 aliases, then you need to limit your interface names to
7 characters, and 6 for fewer than 100 interfaces.<br>
  </li>
  <br>
     <li><a name="why_auto_failback">The <b>auto_failback</b> parameter 	is
a replacement for the old <b>nice_failback</b> parameter.</a> 	The old value
    <b>nice_failback on</b> is replaced by 		<b>auto_failback off</b>. 	The
old value <b>nice_failback off</b> is logically replaced by 		the new <b>auto_failback
on</b> parameter. 		Unlike the old <b>nice_failback off</b> behavior, 		the
new <b>auto_failback on</b> allows the use 		of the ipfail and hb_standby
facilities. 	
    <p>During upgrades from nice_failback to auto_failback, it is 		sometimes
necessary to set <b>auto_failback</b> to 		<b>legacy</b>, as described in
the    		<a href="#auto_failback_upgrade">upgrade procedure</a> 		below.</p>
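    <p>In ha.cf terms, the mapping described above is simply:</p>
    <pre>
# old setting:          # replacement:
nice_failback on   -&gt;   auto_failback off
nice_failback off  -&gt;   auto_failback on
    </pre>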
    </li>
      <li><a name="auto_failback_upgrade"> 	To upgrade from a pre-auto_failback
version of heartbeat to one 	which supports auto_failback, the following
procedures are 	recommended to avoid a flash cut on the whole 	cluster.</a> 
	
    <ol>
 	<li>Stop heartbeat on one node in the cluster.</li>
 	<li>Upgrade this node.  If the other node has 		<b>nice_failback on</b>
in ha.cf then set 		<b>auto_failback off</b> in the new ha.cf file. 		If
the other node in the cluster has 		<b>nice_failback off</b> then set 		<b>auto_failback
legacy</b> in the new ha.cf file. 	</li>
      <li>Start the new version of heartbeat on this node.</li>
 	<li>Stop heartbeat on the other node in the cluster.</li>
 	<li>Upgrade this second node in the cluster with the new 		version of heartbeat.
 Set auto_failback 		the same as it was set in the previous step. 		</li>
 	<li>Start heartbeat on this second node in the cluster.</li>
 	<li>If you set <b>auto_failback</b> to <b>on</b> or 		<b>off</b>, then
you are done.  Congratulations! 		</li>
 	<li>If you set <b>auto_failback legacy</b> in your ha.cf file, 		then continue
as described below... 		</li>
 	<li>Schedule a time to shut down the entire 		cluster for a few seconds. 
		</li>
 	<li>At the scheduled time, stop both nodes in the cluster, 		and then change
the value of <b>auto_failback</b> 		to <b>on</b> in the ha.cf file on both
sides. 		</li>
 	<li>Restart both nodes on the cluster at about the same 		time. 		</li>
 	<li>Congratulations, you're done! 		You can now use ipfail, and can also 
		use the <i>hb_standby</i> command 		to cause manual resource moves.<br>
        <br>
 		</li>
 	
    </ol>
 	</li>
     <li> <a name="last_hope">Please be sure that you read all documentation</a> 
	and searched mail list archives. If you still can't find a 	solution you
can post questions to the mailing list. Please 	include following:      
    <ul>
      <li> What OS are you running.</li>
      <li>What version (distro/kernel).</li>
      <li>How did you install heartbeat (tar.gz, rpm, src.rpm or 	manual
installation)</li>
      <li>Include your configuration files from BOTH machines. 	You can omit
authkeys.</li>
      <li>Include the parts of your logs which describe the errors.&nbsp; Send them 
	as text/plain attachments. <br>
  	Please don't send "cleaned up" logs.&nbsp; The real logs have 	more information
in them than cleaned up versions.&nbsp; Always 	include at least a little
irrelevant data before and after the events 	in question so that we know
nothing was missed.&nbsp; Don't edit 	the logs unless you really have some
super-secret high-security 	reason for doing so.       
        <p> This means you need to attach 6 or 8 files.  Include 6 if your
debug 	output goes into the same file as your normal output and 8 otherwise. 
	For each machine you need to send: </p>
               
        <ul>
          <li>ha.cf</li>
          <li>haresources</li>
          <li>normal logs</li>
          <li>debug logs (perhaps)</li>
               
        </ul>
       </li>
    </ul>
  </li>
  <br>
    <li><a name="patches">We love to get good patches. &nbsp;Here's the preferred
way:</a>   
    <ul>
     <li>If you have any questions about the patch, please check with the 
linux-ha-dev mailing list for answers before starting.</li>
     <li>Make your changes against the current CVS source</li>
     <li>Test them, and make sure they work ;-)</li>
     <li>Produce the patch this way:<br>
 &nbsp; &nbsp; &nbsp; &nbsp;<b><tt>cvs -q diff -u &gt;patchname.txt</tt></b></li>
     <li>Send an email to the linux-ha-dev mailing list with the patch as 
a [text/plain] attachment. 	If your mailer wants to zip it up for you, please
fix it.<br>
       </li>
   
    </ul>
  </li>
  <br>
   
</ol>
   
<hr width="100%">  
<p>Rev 0.0.8 <br>
  (c) 2000 Rudy Pawul <a href="mailto:rpawul@iso-ne.com">rpawul@iso-ne.com</a> 
<br>
  (c) 2001 Dusan Djordjevic <a href="mailto:dj.dule@linux.org.yu">dj.dule@linux.org.yu</a> 
 (c) 2003 IBM (Author Alan Robertson <a href="mailto:alanr@unix.sh">alanr@unix.sh</a>) 
</p>
  <br>
 <br>
</body>
</html>