Setting Up a Unix Host to Use FC Storage
Requirements for setting up a host
Before you can set up a host to use Unity storage, the following storage system and network requirements must be met.
SAN requirements
For a host to connect to FC LUNs or VMware VMFS and Block VVol datastores on the Unity system, the host must be in a SAN environment with the storage system, and zoned so that the host and the storage system are visible to each other over the SAN. For a multi-pathing environment, each Unity FC LUN for the host must have two paths associated with it. These two paths should be on different switches to ensure high availability.
Path management SAN requirements
When implementing a highly-available SAN between a host and the Unity system, keep in mind that:
- A LUN or VMware VMFS datastore is visible to both SPs.
- You can configure multiple paths for a LUN. These paths should be associated with separate physical ports on the same SP.
- Each LUN must present the same LUN ID to all hosts.
Note: Directly attaching a host to a storage system is supported if the host connects to both SPs and has the required multipath software.
Storage system requirements
- Install and configure the system using the Initial Configuration wizard.
- Use Unisphere or the CLI to configure NAS servers or interfaces, or Fibre Channel (FC) LUNs, on the storage system.
Note: On an HP-UX host, the initiator will not discover the FC storage if it does not detect a LUN from the storage system assigned to host LUN ID 0. We recommend that you create a LUN and give the HP-UX host access to it before any other host. The first LUN that you assign to a host is automatically assigned host LUN ID 0.
Using multi-path management software on the host
Multi-path management software manages the connections (paths) between the host and the storage system and can reroute I/O to a surviving path should one of the paths fail. The following types of multi-path management software are available for a host connected to a storage system:
- EMC PowerPath software on an HP-UX, Linux, or Solaris host
- Native multipath software on a Citrix XenServer, HP-UX 11i, Linux, or Solaris host
For compatibility and interoperability information, refer to the Unity Support Matrix on the support website.
Setting up a system for multi-path management software
For a system to operate with hosts running multi-path management software, each LUN on the system should be associated with two paths.
Installing PowerPath
- On the host or virtual machine, download the latest PowerPath version from the PowerPath software downloads section on the Online Support website.
- Install PowerPath using a Custom installation and the Celerra option, as described in the appropriate PowerPath installation and administration guide for the host’s or virtual machine’s operating system.
This guide is available on Online Support. If the host or virtual machine is running the most recent version and a patch exists for this version, install it, as described in the readme file that accompanies the patch.
- When the installation is complete, reboot the host or virtual machine.
- When the host or virtual machine is back up, verify that the PowerPath service has started.
Installing native multipath software
Whether you need to install multipath software depends on the host’s operating system.
Citrix XenServer
By default, XenServer uses Linux native multipathing (DM-MP) as its multipath handler. This handler is packaged with the Citrix XenServer operating system software.
Linux
To use Linux native multipath software, you must install the Linux multipath tools package as described in Installing or updating the Linux multipath tools package.
HP-UX 11i
Native multipath failover is packaged with the HP-UX operating system software.
Solaris
Sun’s native path management software is Sun StorEdge™ Traffic Manager (STMS).
For Solaris 10 — STMS is integrated into the Solaris operating system patches you install. For information on installing patches, refer to the Sun website.
Installing or updating the Linux multipath tools package
To use Linux native multipath failover software, the Linux multipath tools package must be installed on the host. This package is installed by default on SuSE SLES 10 or higher, but is not installed by default on Red Hat.
If you need to install the multipath tools package, install the package from the appropriate website below.
For SuSE:
The multipath tools package is included with SuSE SLES 9 SP3 and you can install it with YaST or RPM.
For Red Hat:
The multipath tools package is included with Red Hat RHEL4 U3 or RHEL5, and you can install it with the Package Manager. If an update is available, follow the installation instructions on the http://www.redhat.com website.
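Which package to query differs by distribution. As an illustrative sketch (the helper name and the common package names multipath-tools for SuSE and device-mapper-multipath for Red Hat are our assumptions, not values from this guide), the standard ID field of /etc/os-release can drive the check:

```shell
# Pick the likely native multipath package name for the running distribution.
# Assumed names: multipath-tools (SuSE), device-mapper-multipath (Red Hat).
multipath_package() {
    # $1: contents of /etc/os-release; its ID= line identifies the distro
    case "$(printf '%s\n' "$1" | sed -n 's/^ID=//p' | tr -d '"')" in
        sles|suse|opensuse*) echo multipath-tools ;;
        rhel|centos|fedora)  echo device-mapper-multipath ;;
        *)                   echo unknown ;;
    esac
}

# Example: check whether the package is installed (rpm works on both families):
# rpm -q "$(multipath_package "$(cat /etc/os-release)")"
```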
What's next?
Do one of the following:
- To set up an AIX host to use storage, refer to AIX host — Setting up for FC storage.
- To set up a Citrix XenServer host to use storage, refer to Citrix XenServer host — Setting up for FC storage.
- To set up an HP-UX host to use storage, refer to HP-UX host — Setting up for FC storage.
- To set up a Linux host to use storage, refer to Linux host — Setting up for FC storage.
- To set up a Solaris host to use storage, refer to Solaris host — Setting up for FC storage.
AIX host — Setting up for FC storage
To set up an AIX host to use LUNs over Fibre Channel, perform these tasks:
Install AIX software
- Log in to the AIX host using an account with administrator privileges.
- Download the AIX ODM Definitions software package to the /tmp directory on the AIX host as follows:
- Navigate to AIX ODM Definitions on the software downloads section on the Support tab of the Online Support website.
- Choose the version of the EMC ODM Definitions for the version of AIX software running on the host, and save the software to the /tmp directory on the host.
- Start the System Management Interface Tool to install the software:
smit installp
- In the /tmp directory, uncompress and untar the EMC AIX fileset for the AIX version running on the host:
uncompress EMC.AIX.x.x.x.x.tar.Z
tar -xvf EMC.AIX.x.x.x.x.tar
- In the Install and Update Software menu, select Install and Update from ALL Available Software and enter /tmp as the path to the software.
- Select SOFTWARE to install.
- After making any changes to the displayed values, press Enter.
- Scroll to the bottom of the window to see the Installation Summary, and verify that the message “SUCCESS” appears.
- Reboot the AIX host to have the changes take effect.
Configure LUNs as AIX disk drives
- Remove any drives that are identified as "Other FC SCSI Disk Drive" by the system by running the following command.
lsdev -Cc disk | grep "Other FC SCSI Disk Drive" | awk '{print $1}' | xargs -n1 rmdev -dl
- When applicable, uninstall any existing CLARiiON ODM file sets.
installp -u EMC.CLARiiON.*
- Use the following commands to download the AIX ODM package version 5.3.x or 6.0.x from the FTP server at ftp.emc.com.
Note: IBM AIX Native MPIO for Unity requires a different ODM package. Contact your service provider for more information.
- Access the FTP server by issuing the following command:
ftp ftp.emc.com
- Log in with a user name of anonymous and use your email address as a password.
- Access the directory that contains the ODM files:
cd /pub/elab/aix/ODM_DEFINITIONS
- Download the ODM package
get EMC.AIX.5.3.x.x.tar.Z
or
get EMC.AIX.6.0.x.x.tar.Z
- Prepare the files for installation.
- Move the ODM package into the user install directory.
cd /usr/sys/inst.images
- Uncompress the files.
uncompress EMC.AIX.5.3.x.x.tar.Z
uncompress EMC.AIX.6.0.x.x.tar.Z
- Open, or untar, the files.
tar -xvf EMC.AIX.5.3.x.x.tar
tar -xvf EMC.AIX.6.0.x.x.tar
- Create or update the TOC file.
inutoc
- Install the files.
- PowerPath:
installp -ac -gX -d . EMC.CLARiiON.aix.rte
installp -ac -gX -d . EMC.CLARiiON.fcp.rte
- MPIO:
installp -ac -gX -d . EMC.CLARiiON.aix.rte
installp -ac -gX -d . EMC.CLARiiON.fcp.MPIO.rte
Note: You can also install the files using the AIX smitty command.
Scan and verify LUNs
This task explains how to scan the system for LUNs using AIX, PowerPath, or MPIO.
- Use AIX to scan for drives using the following command:
cfgmgr
- Verify that all FC drives have been configured properly, and display any unrecognized drives.
lsdev -Cc disk
PowerPath output example:
hdisk1 Available EMC CLARiiON FCP VRAID Disk
hdisk2 Available EMC CLARiiON FCP VRAID Disk
MPIO output example:
hdisk1 Available EMC CLARiiON FCP MPIO VRAID Disk
hdisk2 Available EMC CLARiiON FCP MPIO VRAID Disk
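To confirm that the ODM definitions took effect, the `lsdev -Cc disk` output can be checked for leftover unrecognized drives. A minimal sketch (the helper name is ours, and the "Other" sample line is constructed by analogy with the phrase quoted earlier in this section):

```shell
# Count drives still reported as unrecognized in `lsdev -Cc disk` output;
# 0 means all FC drives picked up the EMC ODM definitions.
count_unrecognized() {
    printf '%s\n' "$1" | grep -c 'Other FC SCSI Disk Drive'
}

# One leftover unrecognized drive would be counted like this:
count_unrecognized 'hdisk1 Available Other FC SCSI Disk Drive
hdisk2 Available EMC CLARiiON FCP VRAID Disk'   # → 1
```

A nonzero count means the rmdev cleanup and cfgmgr rescan should be repeated.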
Prepare the LUNs to receive data
If you do not want to use a LUN as a raw disk or raw volume, then before AIX can send data to the LUN, you must either partition the LUN or create a database file system on it. For information on how to perform these tasks, refer to the AIX operating system documentation.
Citrix XenServer host — Setting up for FC storage
To set up a Citrix XenServer host to use LUNs over Fibre Channel, perform these tasks:
Configure the FC target
- Open the XenCenter console.
- Click New Storage at the top of the console.
- In the New Storage dialog box, under Virtual disk storage, select Hardware HBA.
- Under Name, enter a descriptive name for the LUN (Storage Repository).
- Click Next.
- Select a LUN, and click Finish.
The host scans the target to see if it has any XenServer Storage Repositories (SRs) on it already, and if any exist you are asked if you want to attach to an existing SR or create a new SR.
Configure the FC target for multipathing
If you enable multipathing while connected to a storage repository, XenServer may not configure multipathing successfully. If you already created the storage repository and want to configure multipathing, put all hosts in the pool into Maintenance Mode before configuring multipathing and then configure multipathing on all hosts in the pool. This ensures that any running virtual machines that have LUNs in the affected storage repository are migrated before the changes are made.
- In XenCenter enable the multipath handler:
- On the host’s Properties dialog box, select the Multipathing tab.
- On the Multipathing tab, select Enable multipathing on this server.
- Verify that multipathing is enabled by checking the storage resource’s general properties.
HP-UX host — Setting up for FC storage
To set up an HP-UX host to use LUNs over Fibre Channel, perform these tasks:
Download and install the HP-UX FC HBA software
- On the HP-UX host, open a web browser and download the initiator software from the HP-UX website.
- Install the initiator software using the information on the site or that you downloaded from the site.
Make the storage processors available to the host
Verify that each host initiator (HBA port) sees only the storage processors (targets) to which it is connected:
ioscan -fnC disk
insf -e
ioscan -NfC disk (for HP-UX 11i v3 only)
Verify that native multipath failover sees all paths to the LUNs
- Rescan for the LUNs:
ioscan -NfC disk
insf -e
- View the LUNs available to the host:
ioscan -NfnC disk
- Verify that all paths to the storage system are CLAIMED:
ioscan -NkfnC lunpath
Prepare the LUNs to receive data
- Make the LUN visible to HP-UX.
- Create a volume group on the LUN.
Linux host — Setting up for FC storage
To set up a Linux host to use LUNs over Fibre Channel, perform these tasks:
Scan the storage system for LUNs
Execute the Linux scan LUNs command.
# lsscsi |egrep -i dgc
[13:0:2:0] disk DGC LUNZ 4200 /dev/sdj
[13:0:4:0] disk DGC LUNZ 4200 /dev/sdo
[13:0:5:0] disk DGC LUNZ 4200 /dev/sdv
[13:0:6:0] disk DGC LUNZ 4200 /dev/sdz
[14:0:2:0] disk DGC LUNZ 4200 /dev/sdm
[14:0:4:0] disk DGC LUNZ 4200 /dev/sdu
[14:0:5:0] disk DGC LUNZ 4200 /dev/sdx
[14:0:6:0] disk DGC LUNZ 4200 /dev/sdy
[15:0:2:0] disk DGC LUNZ 4200 /dev/sdac
[15:0:4:0] disk DGC LUNZ 4200 /dev/sdag
………
Note: The first column in the output shows [Host:Bus:Target:LUN] of each SCSI device, with the last value representing the LUN number.
- In Unisphere, grant LUN access to the Linux host.
Note: Ensure that a LUN with LUN ID 0 is present on the Unity system. See Modify Host LUN IDs for information on manually changing LUN IDs.
- On the Linux server, run the SCSI bus scan command with the -r option:
rescan-scsi-bus.sh -a -r
- On the Linux server, rerun the lsscsi |egrep -i dgc command to verify the LUN IDs show up appropriately on the Linux host.
# lsscsi |egrep -i dgc
[13:0:2:0] disk DGC VRAID 4200 /dev/sdbl
[13:0:2:1] disk DGC VRAID 4200 /dev/sdcf
[13:0:2:2] disk DGC VRAID 4200 /dev/sdcg
[13:0:4:0] disk DGC VRAID 4200 /dev/sdad
[13:0:4:1] disk DGC VRAID 4200 /dev/sdch
[13:0:4:2] disk DGC VRAID 4200 /dev/sdci
[13:0:5:0] disk DGC VRAID 4200 /dev/sdbj
[13:0:5:1] disk DGC VRAID 4200 /dev/sdcj
[13:0:5:2] disk DGC VRAID 4200 /dev/sdck
………
- If LUNZ continues to display, rerun the rescan command using the --forcerescan option.
rescan-scsi-bus.sh --forcerescan
If the issue persists and LUNZ still displays, a Linux reboot may be required in order for Linux to recognize the LUNs. Refer to the following Linux knowledgebase article for more information: https://www.suse.com/support/kb/doc/?id=7009660
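The LUNZ check in the steps above can be scripted. A sketch (the helper name is ours) that tallies DGC entries from `lsscsi` output, using the device names shown earlier in this section:

```shell
# Tally DGC entries in `lsscsi` output: VRAID entries are bound LUNs,
# LUNZ entries are placeholders that should disappear after the rescan.
lun_summary() {
    printf '%s\n' "$1" | awk '
        /DGC/ && /LUNZ/  { lunz++ }
        /DGC/ && /VRAID/ { vraid++ }
        END { printf "VRAID=%d LUNZ=%d\n", vraid, lunz }'
}

lun_summary '[13:0:2:0] disk DGC LUNZ 4200 /dev/sdj
[13:0:2:1] disk DGC VRAID 4200 /dev/sdcf'   # → VRAID=1 LUNZ=1
```

A nonzero LUNZ count after the forced rescan is the signal that a reboot may be required.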
Set up the Linux host to use the LUN
- Find the LUN ID:
- In Unisphere, select .
- On the LUN, select Edit.
- On the Properties window, select to determine the LUN ID.
- On the host, partition the LUN.
- Create a file system on the partition.
- Create a mount directory for the file system.
- Mount the file system.
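The partition, file system, and mount steps above can be sketched as a dry run. The device name /dev/sdX, the ext4 file system, and the mount point are illustrative placeholders, not values from this guide; nothing here is executed against a device:

```shell
# Dry-run sketch: print the commands that would partition a LUN, create a
# file system on it, and mount it. Substitute the real device and mount point.
prepare_lun_plan() {
    dev="$1"; mnt="$2"
    cat <<EOF
parted -s $dev mklabel gpt mkpart primary 0% 100%
mkfs.ext4 ${dev}1
mkdir -p $mnt
mount ${dev}1 $mnt
EOF
}

prepare_lun_plan /dev/sdX /mnt/unity_lun
```

For a persistent mount, the file system would also need an /etc/fstab entry, preferably keyed by UUID rather than device name since device names can change across reboots.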
Solaris host — Setting up for FC storage
To set up a Solaris host to use LUNs over Fibre Channel, perform these tasks:
Configure Sun StorEdge Traffic Manager (STMS)
- Enable STMS by editing the following configuration file:
Solaris 10 — Do one of the following:
- Edit the /kernel/drv/fp.conf file by changing the mpxio-disable option from yes to no.
or
- Execute the following command:
stmsboot -e
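After the edit, the relevant line in /kernel/drv/fp.conf reads as follows (standard Solaris driver .conf syntax, shown here for reference):

```
mpxio-disable="no";
```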
- We recommend that you enable the STMS auto-restore feature to restore LUNs to their default SP after a failure has been repaired. In Solaris 10, auto-restore is enabled by default.
- If you want to install STMS offline over NFS, share the root file system of the target host in a way that allows root access over NFS to the installing host. For example, use a command such as the following on target_host to share its root file system so that installer_host has root access:
share -F nfs -d 'root on target_host' -o ro,rw=installer_host,root=installer_host /
- For the best performance and failover protection, we recommend that you set the load balancing policy to round robin:
load-balance="round-robin"
Prepare the LUN to receive data
- Partition the LUN.
- Create and mount a file system on the partition.
What's next?
You are now ready to either migrate data to the LUN or have the host start using the LUN. To migrate data to the LUN, go to Migrating FC or iSCSI Data to the Storage System.