**********************************************************************

              Upgrading and Installing on Cluster Nodes
                    Release Notes, Part 4 of 4
                              Beta 2

**********************************************************************

         (c) 2001 Microsoft Corporation. All rights reserved.


These notes support a preliminary release of a software program that
bears the project code name Whistler.

With Whistler Datacenter Server, you can use clustering to ensure
that users have constant access to important server-based resources.
With clustering, you create several cluster nodes that appear to 
users as one server. If one of the nodes in the cluster fails, another 
node begins to provide service (a process known as failover). 
Mission-critical applications and resources remain continuously
available.

Sections to read if you are upgrading:
1.0     Upgrading or Installing Clustering
1.1     Preparing for Upgrading or Installing Clustering
1.2     Options for Upgrading or Installing Clustering
2.0     Upgrading a Cluster from Windows 2000 to Whistler
2.1     How Rolling Upgrades Work
2.2     Restrictions on Rolling Upgrades
2.3     Resource Behavior During Rolling Upgrades
2.4     Alternatives to Rolling Upgrades from Windows 2000

Sections to read if you are performing a new installation:
1.0     Upgrading or Installing Clustering
1.1     Preparing for Upgrading or Installing Clustering
1.2     Options for Upgrading or Installing Clustering
3.0     Installation on Cluster Nodes

======================================================================
1.0   Upgrading or Installing Clustering
======================================================================

Before installing or upgrading clustering, you should familiarize 
yourself with the basic preparations needed and the options available 
for upgrading and installing. The following sections provide 
information on these topics.

1.1   Preparing for Upgrading or Installing Clustering
======================================================================

To prepare for installing or upgrading clustering, review the
sections earlier in this text file series. As described in those
sections, check the Hardware Compatibility List to ensure that all
your hardware (including your cluster storage) is compatible with
Whistler Datacenter Server. In addition, check with the manufacturer
of your cluster storage hardware to be sure you have the drivers you
need in order to use the hardware with Whistler Datacenter Server.

1.2   Options for Upgrading or Installing Clustering
======================================================================

When installing or upgrading clustering, you can choose among several
options. You can:

 * Upgrade a cluster that is running Windows 2000, possibly
   through a rolling upgrade. For more information, see "How
   Rolling Upgrades Work" and "Restrictions on Rolling Upgrades"
   later in this text file.

 * Perform a new installation of Whistler Datacenter Server and
   install Cluster service at the same time. For important
   information about preparing for cluster installation,
   see "Installation on Cluster Nodes" later in this text file.

Note: For cluster disks, you must use the NTFS file system and 
configure the disks as basic disks. You cannot configure cluster disks 
as dynamic disks, and you cannot use features of dynamic disks such as 
spanned volumes (volume sets). For more information about the 
limitations of server clusters, see Whistler Help and Support 
Services. To open Help and Support Services, after completing Setup, 
click Start, and then click Help.

For more information about reinstalling clustering on one of the 
cluster nodes, see Whistler Help and Support Services.


======================================================================
2.0   Upgrading a Cluster from Windows 2000 to Whistler
======================================================================

If you are upgrading from Windows 2000 to Whistler on cluster nodes,
you might be able to perform a rolling upgrade of the operating
system. In a rolling upgrade, you sequentially upgrade the operating
system on each node, making sure that one node is always available to
handle client requests. When you upgrade the operating system, the
Cluster service is automatically upgraded also. A rolling upgrade
maximizes availability of clustered services and minimizes
administrative complexity. For more information, see the following 
section, "How Rolling Upgrades Work."

To determine whether you can perform a rolling upgrade and understand 
the effect that a rolling upgrade might have on your clustered 
resources, see "Restrictions on Rolling Upgrades" later in this text 
file series. For information about ways to upgrade your cluster nodes 
if you cannot perform a rolling upgrade, see "Alternatives to Rolling 
Upgrades from Windows 2000" later in this text file series.

2.1   How Rolling Upgrades Work
======================================================================

This section describes rolling upgrades on server clusters. For
information about methods, restrictions, and alternatives to rolling
upgrades, see the following sections.

There are two major advantages to a rolling upgrade. First, there is
a minimal interruption of service to clients. (However, server
response time might decrease during the phases in which one node 
handles the work of the entire cluster.) Second, you do not have to 
recreate your cluster configuration. The configuration remains intact 
during the upgrade process.

A rolling upgrade starts with two cluster nodes that are running
Windows 2000. In this example, they are named Node 1 and Node 2.

Phase 1: Preliminary
Each node runs Windows 2000 Datacenter Server with the following
hardware and software:

 * A cluster storage unit using Fibre Channel, not SCSI. Fibre
   Channel is the only type of cluster storage on the Hardware
   Compatibility List for Datacenter Server. (Note that SCSI can be
   used for a two-node cluster with Advanced Server, not Datacenter
   Server.) 

 * The Cluster service component (one of the optional components of
   Windows 2000 Datacenter Server).

 * Applications that support a rolling upgrade. For more information,
   see the product documentation and "Resource Behavior During
   Rolling Upgrades" later in this text file.

At this point, your cluster is configured so that each node handles
client requests (an active/active configuration).

Phase 2: Upgrade Node 1
Node 1 is paused, and Node 2 handles all cluster resource groups while
you upgrade the operating system of Node 1 to Whistler Datacenter
Server.

Phase 3: Upgrade Node 2 
Node 1 rejoins the cluster. Node 2 is paused and Node 1 handles all
cluster resource groups while you upgrade the operating system on 
Node 2.

Phase 4: Final 
Node 2 rejoins the cluster, and you redistribute the resource groups
back to the active/active cluster configuration.
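
If you prefer to check the cluster from the command line during these
phases, you can use the Cluster.exe tool that is installed with
Cluster service. The lines below are a minimal sketch; the node name
NODE1 and the group name "Disk Group 1" are placeholders for the names
used in your cluster.

   rem Show the state (Up, Paused, or Down) of every cluster node.
   cluster node /status

   rem Move one resource group back to the first node (Phase 4).
   cluster group "Disk Group 1" /moveto:NODE1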

Important: For cluster disks, you must use the NTFS file system and 
configure the disks as basic disks. You cannot configure cluster disks 
as dynamic disks, and you cannot use features of dynamic disks such as 
spanned volumes (volume sets).

2.1.1   Performing a Rolling Upgrade
----------------------------------------------------------------------

For an outline of the rolling upgrade process, see the preceding
section, "How Rolling Upgrades Work."

Important: For information about what resources are supported during 
rolling upgrades, see "Restrictions on Rolling Upgrades" and "Resource 
Behavior During Rolling Upgrades" later in this text file series.

>>> To perform a rolling upgrade: 

1. In Cluster Administrator, click the node that you want to upgrade
   first.

2. On the File menu, click Pause Node.

3. In the right pane, double-click Active Groups.

4. In the right pane, click a group, and then on the File menu, click
   Move Group. Repeat this step for each group listed.

   Services will be briefly interrupted while they are moved and
   restarted on another node. After the groups are moved, the paused
   node is idle, and the remaining node or nodes handle all client
   requests.

5. Use Whistler Datacenter Server Setup to upgrade the paused node
   from Windows 2000. For information about running Setup, see
   sections earlier in this text file series.

   Setup detects the earlier version of clustering on the paused node
   and automatically installs clustering for Whistler Datacenter
   Server. The node automatically rejoins the cluster at the end of
   the upgrade process, but is still paused and does not handle any
   cluster-related work.

6. To verify that the node that was upgraded is fully functional,
   perform validation tests on it.

7. In Cluster Administrator, click the node that was paused, and then
   on the File menu, click Resume Node.

8. Repeat the preceding steps for any remaining node or nodes.
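
Note: The preparation steps above can also be carried out with the
Cluster.exe command-line tool instead of Cluster Administrator. The
following is a minimal sketch only; it assumes the node being upgraded
is named NODE1 and that it owns a group named "Disk Group 1" (repeat
the move command for each group that the node owns).

   rem Step 2: pause the node so it cannot take ownership of groups.
   cluster node NODE1 /pause

   rem Step 4: move each of the node's groups to another node.
   cluster group "Disk Group 1" /moveto:NODE2

   rem Step 7: after the upgrade and validation tests, resume the node.
   cluster node NODE1 /resume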

2.2   Restrictions on Rolling Upgrades
======================================================================

There are several basic restrictions on the rolling-upgrade process.
They involve the beginning of Phase 3, in which you operate a
mixed-version cluster: a cluster in which the nodes run different
versions of the operating system. For a mixed-version cluster to work,
the different versions of the software running on each node must be
able to communicate with one another. This requirement leads to the
following restrictions:

 * For a successful rolling upgrade, every resource that the cluster
   manages must be capable of a rolling upgrade. For more
   information, see "Resource Behavior During Rolling Upgrades"
   later in this text file.

 * During the mixed-version phase of a rolling upgrade, when the
   cluster nodes are running different versions of the operating
   system, do not change the settings of resources (for example, do
   not change the settings of a printer resource).

If the preceding restrictions cannot be met, do not perform a rolling
upgrade. For more information, see "Alternatives to Rolling Upgrades
from Windows 2000" later in this text file.

2.2.1   Operation of New Resource Types in Mixed-Version Clusters
----------------------------------------------------------------------

If a resource type that you add to the cluster is supported in one
version of the operating system but not in the other, operating a
mixed-version cluster becomes more complicated. For example, Cluster
service in Whistler (part of the Advanced Server and Datacenter Server
products) supports the Generic Script resource type. However, older
versions of Cluster service do not support it. A mixed-version
cluster can run a Generic Script resource on a node running Whistler
but not on a node running Windows 2000.

Cluster service transparently sets the possible owners of new resource 
types to prevent these resources from failing over to a Windows 2000 
node of a mixed-version cluster. In other words, when you view the 
possible owners of a new resource type, a Windows 2000 node will not 
be in the list, and you will not be able to add this node to the list. 
If you create such a resource during the mixed-version phase of a 
rolling upgrade, the resource groups containing those resources will 
not fail over to a Windows 2000 node.
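
To check the possible owners of a particular resource, you can use
Cluster Administrator or the Cluster.exe command-line tool. The
command below is only a sketch; the resource name "Nightly Script" is
a hypothetical Generic Script resource.

   rem List the nodes that can own the resource. During the
   rem mixed-version phase, a Windows 2000 node will not appear.
   cluster resource "Nightly Script" /listowners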

2.3   Resource Behavior During Rolling Upgrades
======================================================================

Although Cluster service supports rolling upgrades, not all
applications have seamless rolling-upgrade behavior. The following
table describes which resources will be supported during a rolling
upgrade. If you have a resource that is not fully supported during
rolling upgrades, see "Alternatives to Rolling Upgrades from 
Windows 2000" later in this text file.

RESOURCE           ROLLING UPGRADE NOTES
--------------     ------------------------------------------------
DHCP               Supported during rolling upgrades.
File Share         Supported during rolling upgrades.
IP Address         Supported during rolling upgrades.
Network Name       Supported during rolling upgrades.
NNTP               Supported during rolling upgrades.
Physical Disk      Supported during rolling upgrades.
Time Service       Supported during rolling upgrades.
SMTP               Supported during rolling upgrades.
WINS               Supported during rolling upgrades.
Print Spooler      The only Print Spooler resources supported
                   during a rolling upgrade are those on LPR ports
                   or standard monitor ports. See the following
                   section, "Upgrades that Include a Print Spooler
                   Resource."
IIS                Internet Information Server (IIS) 6.0 is not
                   supported during rolling upgrades. For more
                   information, see "Upgrades that Include an IIS
                   Resource" later in this text file.
MS DTC             Microsoft Distributed Transaction
                   Coordinator is not supported during a rolling
                   upgrade. However, you can perform a process
                   similar to rolling upgrades. See "Upgrades that
                   Include an MS DTC Resource" later in this text file.
MSMQ               Microsoft Message Queuing is not supported
                   during a rolling upgrade. To upgrade a cluster
                   that includes MSMQ, see "Upgrades that Include
                   an MSMQ Resource" later in this text file.
Other resource     See Readme.doc in the root
types              directory of the Whistler Datacenter Server CD.
                   Also see the product documentation that comes
                   with the application or resource.

2.3.1   Upgrades that Include a Print Spooler Resource
----------------------------------------------------------------------

If you want to perform a rolling upgrade of a cluster that has a
Print Spooler resource, you must consider two issues.

First, the Print Spooler resource supports upgrades (including
rolling upgrades and any other kind of upgrade) only for printers on
cluster-supported ports (LPR or Standard Monitor ports). For
information about what to do if your printer is not supported, see 
"Alternatives to Rolling Upgrades from Windows 2000" later in this 
text file series.

Second, when you operate a mixed-version cluster including a Print
Spooler resource, note the following:

 * Do not change printer settings in a mixed-version cluster with a
   Print Spooler resource.

 * If you add a new printer, when you install the drivers for that
   printer, be sure to install both the driver for Windows 2000 and
   the driver for Whistler on all nodes. 

 * If printing preferences or defaults are important, be sure to
   check them. Printing preferences in Whistler will not necessarily
   correspond to document defaults for the same printer in Windows
   2000. This can be affected by differences in the drivers for the
   two operating systems.

When the rolling upgrade is complete and both cluster nodes are
running the updated operating system, you can make any modifications
you choose to your printer configuration.

2.4   Alternatives to Rolling Upgrades from Windows 2000
======================================================================

Certain resources are not supported during rolling upgrades,
including: 

   * Internet Information Server (IIS)

   * Microsoft Distributed Transaction Coordinator (MS DTC)

   * Microsoft Message Queuing (MSMQ)

Special procedures, described below, must be followed when performing
an upgrade of a cluster that contains these resources. In addition to
the three resource types above, you might also have other resources
that are not supported during rolling upgrades. Be sure to read
Readme.doc in the root directory of the Whistler CD, as well as the
product documentation that comes with the application or resource.

2.4.1   Upgrades that Include an IIS Resource
----------------------------------------------------------------------

IIS 6.0 is not supported during rolling upgrades. With earlier
versions of IIS, you could configure an individual Web site to fail
over as a cluster resource. However, with IIS 6.0, the entire IIS
service must fail over, not individual Web sites. If you have
individual Web sites or the IIS service configured as a cluster
resource, you must use the following procedure to upgrade to Whistler.

>>> To upgrade from Windows 2000 on a cluster that includes an IIS 
resource:

1. Remove any individual Web sites that you have configured as
   cluster resources from their cluster group. You can no longer
   designate a single site as a cluster resource.  

2. If you have the IIS service configured as a cluster resource, take
   this resource offline. To take the resource offline, follow the
   procedures described in "Upgrades for Other Non-Supported
   Resources" later in this text file.

3. Perform a rolling upgrade, as described in the procedure "To 
   perform a rolling upgrade" earlier in this text file. 

4. Once you have completed the upgrade, you can bring the IIS service
   back online.

Important: With IIS 6.0, you can only configure the IIS service as a 
cluster resource. You cannot configure individual Web sites as cluster 
resources.

2.4.2   Upgrades that Include an MS DTC Resource
----------------------------------------------------------------------

Microsoft Distributed Transaction Coordinator (MS DTC) is not
supported during rolling upgrades. However, you can perform a process
similar to a rolling upgrade.

>>> To upgrade from Windows 2000 on a cluster that includes an MS DTC 
resource:

1. Take the MS DTC resource offline by using Cluster Administrator
   and clicking the Resources folder. In the details pane, click the
   MS DTC resource, and then on the File menu, click Take Offline.

   Caution: Taking a resource offline causes all resources that depend 
   on that resource to be taken offline.

2. Configure the MS DTC resource so that the only allowable owner
   is the node it is currently on by using Cluster Administrator
   and clicking the Resources folder. In the details
   pane, click the MS DTC resource. On the File menu, click
   Properties. On the General tab, next to Possible owners, click
   Modify. Specify Node 2 as an Available node, and if necessary,
   remove Node 1 from the Available nodes list.

3. Upgrade a node that does not contain the MS DTC resource from 
   Windows 2000 to Whistler. For general information about Setup, 
   review the sections earlier in this text file series.

4. Move the MS DTC resource to the upgraded node, following the
   procedure described in step 1.

5. Configure the MS DTC resource so that the only allowable owner
   is the upgraded node, following the procedure described in
   step 2.

6. Upgrade the remaining nodes from Windows 2000 to Whistler.  

7. Configure the allowable owners for the MS DTC resource as 
   appropriate for your configuration.

8. Manually restart all dependent services, and then bring the MS DTC
   resource back online by using Cluster Administrator
   and clicking the Resources folder. In the details pane, click
   the MS DTC resource, and then on the File menu, click Bring Online.
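
Note: The offline, owner, move, and online steps in this procedure can
also be performed with the Cluster.exe command-line tool. The
following is a minimal sketch; the resource name "MSDTC", the group
name "DTC Group", and the node names NODE1 and NODE2 are placeholders
for the names used in your cluster.

   rem Step 1: take the MS DTC resource offline.
   cluster resource "MSDTC" /offline

   rem Step 2: make the current node (NODE2) the only possible owner.
   cluster resource "MSDTC" /removeowner:NODE1
   cluster resource "MSDTC" /listowners

   rem Steps 4 and 5: after upgrading NODE1, make it a possible owner
   rem again, move the resource's group to it, and then remove NODE2.
   cluster resource "MSDTC" /addowner:NODE1
   cluster group "DTC Group" /moveto:NODE1
   cluster resource "MSDTC" /removeowner:NODE2

   rem Step 8: bring the resource back online after the upgrades.
   cluster resource "MSDTC" /online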

2.4.3   Upgrades That Include an MSMQ Resource
----------------------------------------------------------------------

Microsoft Message Queuing (MSMQ) does not support rolling upgrades. 
The MSMQ resource is dependent on the MS DTC resource, so be sure to
follow the steps outlined in the preceding section "Upgrades that
Include an MS DTC Resource." 

>>> To upgrade from Windows 2000 on a cluster that includes an MSMQ 
resource:

1. Upgrade the operating system of the nodes to Whistler.  

2. Click Start, point to Programs, point to Administrative Tools, and
   then click Configure Your Server.

3. In Configure Your Server, click Finish Setup, and then click
   Configure Message Queuing Cluster Resources.

4. Follow the instructions that appear in the Configure Message
   Queuing Cluster Resources Wizard.

2.4.4   Upgrades for Other Non-Supported Resources 
----------------------------------------------------------------------

If your cluster has other resources that are not supported during a
rolling upgrade and are not described above, take those resources
offline before performing the rolling upgrade.

>>> To take a resource offline and perform a rolling upgrade:

1. Confirm that your systems are running Windows 2000.

2. Using the information in "Resource Behavior During Rolling
   Upgrades" earlier in this text files, list the resources
   in your cluster that are not supported during rolling upgrades.

3. In Cluster Administrator, click the Resources folder.

4. In the right pane, click the resource you want.

5. On the File menu, click Take Offline.

6. Repeat the preceding steps until you have taken offline all
   resources that do not support rolling upgrades.

7. Perform a rolling upgrade, as described in the procedure "To 
   perform a rolling upgrade" earlier in this text file series.

8. For each resource that you listed in step 2, follow the
   product's instructions for installing or reconfiguring the
   application so that it will run with Whistler.
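
Note: Steps 3 through 5 can also be performed with the Cluster.exe
command-line tool. The following is a minimal sketch; the resource
name "My Application" is a placeholder for each resource that you
listed in step 2.

   rem List every resource, its group, its owning node, and its state.
   cluster resource

   rem Step 5: take an unsupported resource offline before the upgrade.
   cluster resource "My Application" /offline

   rem After the upgrade and reconfiguration, bring it back online.
   cluster resource "My Application" /online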


======================================================================
3.0   Installation on Cluster Nodes
======================================================================

The following sections provide important information about how to
prepare for cluster installation, begin hardware installation for a
cluster, and start Setup on the first cluster node.

3.1   Planning and Preparing for Cluster Installation
======================================================================

Before carrying out cluster installation, you will need to plan 
hardware and network details.

Caution: Make sure that Whistler Datacenter Server and Cluster service
are installed and running on one node before starting the operating
system on another node. If the operating system is started on multiple
nodes before Cluster service is running on one node, the cluster
storage could be corrupted. Once Cluster service is running properly
on one node, the other nodes can be installed and configured
simultaneously. Each node of your cluster must be running Whistler
Datacenter Server.

In your planning, review the following items:

* Cluster hardware and drivers.

  Check that your hardware, including your cluster storage and 
  other cluster hardware, is compatible with Whistler Datacenter 
  Server. To check this, see the Hardware Compatibility List (HCL) on 
  the Whistler CD, in the Support folder, in Hcl.txt. For the most 
  up-to-date list of supported hardware, see the Hardware 
  Compatibility List by visiting the Microsoft Web site at: 

  http://www.microsoft.com/

  You must have a separate PCI storage host adapter (SCSI or Fibre
  Channel) for the shared disks. This is in addition to the boot disk
  adapter.

  Also check that you have the drivers you need in order to use the
  cluster storage hardware with Whistler Datacenter Server. (Drivers 
  are available from your hardware manufacturer.)

  Review the manufacturer's instructions carefully before you begin
  installing cluster hardware. Otherwise, the cluster storage could 
  be corrupted. 

  To simplify configuration and eliminate potential compatibility
  problems, consider using identical hardware for all nodes.

* Network adapters on the cluster nodes.

  In your planning, decide what kind of communication each network
  adapter will carry.

  Note: To reduce the risk of having a single point of failure,
  plan to have two or more network adapters in each cluster node,
  and to connect each adapter to a physically separate network. The
  adapters on a given node must connect to networks using different
  subnet masks.

The following table shows recommended ways of connecting network
adapters:

ADAPTERS   
PER NODE   RECOMMENDED USE
--------   --------------------------------------------------------
2          One private network (node-to-node only), plus
           one mixed network (node-to-node plus client-to-cluster).
3          Two private networks (node-to-node), plus
           one public network (client-to-cluster).
           With this configuration, the adapters using the private
           network must use static IP addresses (not DHCP).
           or
           One private network (node-to-node), plus
           one public network (client-to-cluster), plus
           one mixed network (node-to-node plus client-to-cluster).

The following list provides more details about the types of
communication that an adapter can carry:

 * Only node-to-node communication (private network).

   This implies that the server has one or more additional adapters to 
   carry other communication.

   For node-to-node communication, you will connect the network
   adapter to a private network used exclusively within the
   cluster. Note that if the private network uses a single hub or
   network switch, that piece of equipment becomes a potential
   point of failure in your cluster.

   The nodes of a cluster must be on the same subnet, but you can use
   virtual LAN (VLAN) switches on the interconnects between two
   nodes. If you use a VLAN, the point-to-point, round-trip latency
   must be less than 1/2 second and the link between two nodes must
   appear as a single point-to-point connection from the perspective 
   of the operating system. To avoid single points of failure, use 
   independent VLAN hardware for the different paths between the 
   nodes.

   If your nodes use multiple private (node-to-node) networks, the
   adapters for those networks must use static IP addresses (not
   DHCP).

 * Only client-to-cluster communication (public network).

   This implies that the server has one or more additional adapters to 
   carry other communication.

 * Both node-to-node and client-to-cluster communication (mixed
   network).

   If you have only one network adapter per node, it must
   carry both these kinds of communication. If you have multiple
   network adapters per node, a network adapter that carries both
   kinds of communication can provide backup for other network
   adapters.

 * Communication unrelated to the cluster.

   If a clustered node also provides services unrelated to the 
   cluster, and there are enough adapters in the cluster node, you 
   might want to use one adapter for carrying communication unrelated 
   to the cluster.

Consider choosing a name for each connection that describes its 
purpose. The name will make it easier to identify the connection 
whenever you are configuring the server.

* Cluster IP address.

  Obtain a static IP address for the cluster itself. You cannot use
  DHCP for this address.

* IP addressing for cluster nodes.

  Determine how to handle the IP addressing for the cluster nodes. 
  Each network adapter on each node will need IP addressing. You can 
  provide IP addressing through DHCP, or you can assign each network 
  adapter a static IP address. If you use static IP addresses, the 
  addresses for each linked pair of network adapters (linked 
  node-to-node) should be on the same subnet.

  Note: If you use DHCP for the cluster nodes, it can act as a
  single point of failure. That is, if you set up your cluster nodes 
  so that they depend on a DHCP server for their IP addresses, 
  temporary failure of the DHCP server can mean temporary 
  unavailability of the cluster nodes. When deciding whether to use 
  DHCP, evaluate ways to ensure availability of DHCP services, and 
  consider the possibility of using long leases for the cluster nodes. 
  This will help ensure that they always have a valid IP address.

* Cluster name.

  Determine or obtain an appropriate name for the cluster. This is 
  the name administrators will use for connections to the cluster. 
  (The actual applications running on the cluster will typically have
  different network names.) The cluster name must be different from 
  the domain name, from all computer names on the domain, and from 
  other cluster names on the domain.

* Computer accounts and domain assignment for cluster nodes.

  Make sure that the cluster nodes all have computer accounts in the
  same domain. Cluster nodes cannot be in a workgroup.

* Operator user account for installing and configuring the Cluster
  service.

  To install and configure Cluster service, you must log on to 
  each node with an account that has administrative privileges on 
  those nodes.

* Cluster service user account.

  Create or obtain a Cluster service user account. This is the user
  name and password under which Cluster service will run. You
  will need to supply this user name and password during cluster
  installation.

  The Cluster service user account should be a new account. The
  account must be a domain account; it cannot be a local account. The
  account also must have local administrative privileges on all of the
  cluster nodes. Be sure to keep the password from expiring on the
  account (follow your organization's policies for password renewal).
  For example commands that create such an account, see the sketch
  after this list.

* Volume for important cluster configuration information (checkpoint 
  and log files).

  Plan to set aside a volume on your cluster storage for holding
  important cluster configuration information. This
  information makes up the quorum resource of the cluster, needed 
  when a cluster node stops functioning. The quorum resource provides 
  node-independent storage of crucial data needed by the cluster.

  The recommended minimum size for the volume is 500 MB. You should 
  use a different volume for the quorum resource than you use for user 
  data.

* List of storage devices or disks attached to the first server on 
  which you will install clustering.

  Unless the first server on which you will install clustering has 
  relatively few storage devices or disks attached to it, you should 
  make a list that identifies the ones intended for cluster storage. 
  This makes it easy to choose storage devices or disks correctly 
  during cluster configuration.

  Note: When planning and carrying out disk configuration for the 
  cluster disks, configure them as basic disks with all partitions 
  formatted as NTFS. Do not configure them as dynamic disks, and do 
  not use Encrypting File System, volume mount points, spanned volumes 
  (volume sets), or Remote Storage on the cluster disks.
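
For the Cluster service user account described above, you can create
the account ahead of time from the command line (or with the Active
Directory administrative tools). The commands below are a minimal
sketch; the account name ClusterSvc and the domain name MYDOMAIN are
placeholders, and the second command must be run on each cluster node.

   rem Create the domain account that Cluster service will run under.
   rem The * causes the command to prompt for a password.
   net user ClusterSvc * /add /domain

   rem Give the account local administrative privileges on this node.
   net localgroup Administrators MYDOMAIN\ClusterSvc /add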

The following section describes the physical installation of the
cluster storage.

3.2   Beginning the Installation of the Cluster Hardware
======================================================================

The steps you carry out when first physically connecting and
installing the cluster hardware are crucial. Be sure to follow the
hardware manufacturer's instructions for these initial steps.

Important: Carefully review your network cables after connecting them. 
Make sure no cables are crossed by mistake (for example, private 
network connected to public).

Caution: When you first attach your cluster hardware (the shared bus 
and cluster storage), be sure to work only from the firmware 
configuration screens on the cluster nodes (a node is a server in a 
cluster). On a 32-bit computer, use the BIOS configuration screens. On 
a 64-bit computer, use the Extensible Firmware Interface (EFI) 
configuration screens. The instructions from your manufacturer will 
describe whether these configuration screens are displayed 
automatically or whether you must, after turning on the computer, 
press specific keys to open them. Follow the manufacturer's 
instructions for completing the BIOS or EFI configuration process. 
Remain in the BIOS or EFI, and do not allow the operating system to 
start during this initial installation phase.

3.3   Completing the Installation
======================================================================

After the BIOS or EFI configuration is completed, start the operating
system on one cluster node only and carry out the installation of
Cluster service. Before starting the operating system on another node,
make sure that Whistler Datacenter Server and Cluster service are
installed and running on that first node. If the operating system is
started on multiple nodes before Cluster service is running on one
node, the cluster storage could be corrupted.

Before allowing the operating system to start, while still in the BIOS
or EFI configuration screens, ensure that you can scan the bus and see
the drives from all cluster nodes.

3.4   Installation on the First Cluster Node
======================================================================

It is important that you work on one node (never two nodes) when you
exit the BIOS or EFI configuration screens and allow the operating
system to start for the first time.

Caution: Make sure that Whistler Datacenter Server and Cluster service 
are installed and running on one node before starting the operating 
system on another node. If the operating system is started on multiple 
nodes before Cluster service is running on one node, the cluster 
storage could be corrupted.

3.4.1   Completing the Installation on the First Cluster Node
----------------------------------------------------------------------

If you have not already installed Whistler Datacenter Server on the
first cluster node, install it now. For information about decisions
you must make, such as decisions about licensing and about the
components to install, see the sections earlier in this text file
series.

When Whistler Datacenter Server is installed, use the following
procedure to obtain specific information about how to complete the
installation of the cluster.

>>> To obtain additional information about how to install and 
configure Cluster service:

1. With Whistler Datacenter Server running on one cluster node, click
   Start, and then click Help and Support.

2. Click Enterprise Technologies, and then click Windows Clustering.

3. Click Server Clusters.

4. Click Checklists: Creating Server Clusters, and then click
   Checklist: Creating a server cluster.

5. Use the checklist to guide you through the process of completing
   the installation of your server cluster.