Setting Up a Unix Host to Use iSCSI Storage
Requirements for setting up a host
Before you can set up a host to use Unity storage, the following storage system and network requirements must be met.
Network requirements
For a host to connect to LUNs on an iSCSI interface, the host must be in the same network environment as the iSCSI interface. To achieve best performance, the host should be on a local subnet with each iSCSI interface that provides storage for it. In a multipath environment, each physical interface must have two IP addresses assigned, one on each SP. The interfaces should be on separate subnets.
|
Note:
The Linux iSCSI driver, which is part of the Linux operating system and which you configure so that the host iSCSI initiators can access the iSCSI storage, does not distinguish between NICs on the same subnet. As a result, to achieve load balancing, an iSCSI interface connected to a Linux host must have each NIC configured on a different subnet.
|
To achieve maximum throughput, connect the iSCSI interface and the hosts for which it provides storage to their own private network, dedicated to that traffic. When choosing the network, consider network performance.
Path management network requirements
|
Note:
Path management software is not supported for a Windows 7 or Mac OS host connected to a Unity system.
|
When implementing a highly-available network between a host and your system, keep in mind that:
- A LUN is visible to both SPs
- You can configure up to 8 IPs per physical interface. If more than one interface is configured on a physical interface, each interface must be configured on a separate VLAN.
- Network switches may be on separate subnets.
|
Note:
Directly attaching a host to a Unity system is supported if the host connects to both SPs and has the required multipath software.
|
The following figure shows a highly-available iSCSI network configuration for hosts accessing a storage resource (iSCSI LUNs). Switch A and Switch B are on separate subnets. Host A and Host B can each access the storage resource through separate NICs. If the storage resource is owned by SP A, the hosts can access the storage resource through the paths to the interfaces on SP A. Should SP A fail, the system transfers ownership of the resource to SP B and the hosts can access the storage resource through the paths to the interfaces on SP B.

Storage system requirements
- Install and configure the system using the Initial Configuration wizard.
- Use Unisphere or the CLI to configure iSCSI interfaces and create iSCSI LUNs on the storage system.
|
Note:
On an HP-UX host, the iSCSI initiator will not discover the iSCSI storage if it does not detect a LUN from the storage system assigned to host LUN ID 0. We recommend that you create a unique target, create a LUN on this interface, and give it access to the HP-UX host. The first LUN that you assign to a host is automatically assigned host LUN ID 0.
|
Using multi-path management software on the host
Multi-path management software manages the connections (paths) between the host and the storage system, providing continued access should one of the paths fail. The following types of multi-path management software are available for a host connected to a storage system:
- EMC PowerPath software on an HP-UX, Linux, or Solaris host
- Native multipath software on a Citrix XenServer, HP-UX 11i, Linux, or Solaris host
For compatibility and interoperability information, refer to the Unity Support Matrix on the support website.
Setting up your system for multi-path management software
For your system to operate with hosts running multi-path management software, two iSCSI IPs are required. These IPs should be on separate physical interfaces on separate SPs.
Verify the configuration in Unisphere. For details on how to configure iSCSI interfaces, refer to topics about iSCSI interfaces in the Unisphere online help.
|
Note:
For highest availability, use two network interfaces on the iSCSI interface. The network interfaces should be on separate subnets. You can view the network interfaces for an iSCSI interface within Unisphere.
|
Installing PowerPath
- On the host or virtual machine, download the latest PowerPath version from the PowerPath software downloads section on the Online Support website.
- Install PowerPath as described in the appropriate PowerPath installation and administration guide for the host’s or virtual machine’s operating system.
This guide is available on Online Support. If the host or virtual machine is running the most recent version and a patch exists for this version, install it, as described in the readme file that accompanies the patch.
- When the installation is complete, reboot the host or virtual machine.
- When the host or virtual machine is back up, verify that the PowerPath service has started.
Installing native multipath software
Whether you need to install multipath software depends on the host’s operating system.
Citrix XenServer
By default, XenServer uses the Linux native multipathing (DM-MP) as its multipath handler. This handler is packaged with the Citrix XenServer operating system software.
Linux
To use Linux native multipath software, you must install the Linux multipath tools package as described in Installing or updating the Linux multipath tools package.
HP-UX 11i
Native multipath failover is packaged with the HP-UX operating system software.
Solaris
Sun’s native path management software is Sun StorEdge™ Traffic Manager (STMS).
For Solaris 10 — STMS is integrated into the Solaris operating system patches you install. For information on installing patches, refer to the Sun website.
Installing or updating the Linux multipath tools package
To use Linux native multipath failover software, the Linux multipath tools package must be installed on the host. This package is installed by default on SuSE SLES 10 or higher, but is not installed by default on Red Hat.
If you need to install the multipath tools package, install the package from the appropriate website below.
For SuSE:
The multipath tools package is included with SuSE SLES 9 SP3 and you can install it with YaST or RPM.
For Red Hat:
The multipath tools package is included with Red Hat RHEL4 U3 or RHEL5, and you can install it with the Package Manager. If an update is available, follow the instructions for installing it on the http://www.redhat.com website.
What's next?
Do one of the following:
- To set up an AIX host to use storage, refer to AIX host — Setting up for iSCSI storage.
- To set up a Citrix XenServer host to use storage, refer to Citrix XenServer host — Setting up for iSCSI storage.
- To set up an HP-UX host to use storage, refer to HP-UX host — Setting up for iSCSI storage.
- To set up a Linux host to use storage, refer to Linux host — Setting up for iSCSI storage.
- To set up a Solaris host to use storage, refer to Solaris host — Setting up for iSCSI storage.
AIX host — Setting up for iSCSI storage
To set up an AIX host to use iSCSI storage, perform these tasks:
Install AIX software
- Log in to the AIX host using an account with administrator privileges.
- Download the AIX ODM Definitions software package to the /tmp directory on the AIX host as follows:
- Navigate to AIX ODM Definitions on the software downloads section on the Support tab of the Online Support website.
- Choose the version of the EMC ODM Definitions for the version of AIX software running on the host, and save the software to the /tmp directory on the host.
- Start the System Management Interface Tool to install the software:
smit installp
- In the /tmp directory, uncompress and untar the EMC AIX fileset for the AIX version running on the host:
uncompress EMC.AIX.x.x.x.x.tar.Z
tar -xvf EMC.AIX.x.x.x.x.tar
- In the Install and Update Software menu, select Install and Update from ALL Available Software and enter /tmp as the path to the software.
- Select SOFTWARE to install.
- After making any changes to the displayed values, press Enter.
- Scroll to the bottom of the window to see the Installation Summary, and verify that the message “SUCCESS” appears.
- Reboot the AIX host to have the changes take effect.
Configure the AIX iSCSI initiator
- On the storage system, from the iSCSI Interfaces page in Unisphere, determine the IQN and the IP address of the storage system iSCSI interface (target) to which you want the host initiator to connect.
- On the AIX host, start the System Management Interface Tool:
smit
- Using a text editor, open the file /etc/iscsi/targets.
- For each iSCSI interface to be accessed by this initiator, add a line in the format:
{portal} {port} {target_iqn}
where:
- {portal} = IP address of the network portal
- {port} = number of the TCP listening port (default is 3260)
- {target_iqn} = formal iSCSI name of the target
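For example, an entry for a single iSCSI interface might look like the following; the portal address and IQN shown here are hypothetical placeholders, so substitute the values you recorded from Unisphere:

```
10.14.111.222 3260 iqn.1992-04.com.emc:cx.apm00123456789.a0
```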
Configure LUNs as AIX disk drives
- Remove any drives that are identified as "Other FC SCSI Disk Drive" by the system by running the following command.
lsdev -Cc disk | grep "Other FC SCSI Disk Drive" | awk '{print $1}' | xargs -n1 rmdev -dl
- When applicable, uninstall any existing CLARiiON ODM file sets.
installp -u EMC.CLARiiON.*
- Use the following commands to download the AIX ODM package version 5.3.x or 6.0.x from the FTP server at ftp.emc.com.
Note: IBM AIX Native MPIO for Unity requires a different ODM package. Contact your service provider for more information.
- Access the FTP server by issuing the following command:
ftp ftp.emc.com
- Log in with a user name of anonymous and use your email address as a password.
- Access the directory that contains the ODM files:
cd /pub/elab/aix/ODM_DEFINITIONS
- Download the ODM package
get EMC.AIX.5.3.x.x.tar.Z
or
get EMC.AIX.6.0.x.x.tar.Z
- Prepare the files for installation.
- Move the ODM package into the user install directory.
cd /usr/sys/inst.images
- Uncompress the files.
uncompress EMC.AIX.5.3.x.x.tar.Z
uncompress EMC.AIX.6.0.x.x.tar.Z
- Open, or untar, the files.
tar -xvf EMC.AIX.5.3.x.x.tar
tar -xvf EMC.AIX.6.0.x.x.tar
- Create or update the TOC file.
inutoc
- Install the files.
- PowerPath:
installp -ac -gX -d . EMC.CLARiiON.aix.rte
installp -ac -gX -d . EMC.CLARiiON.fcp.rte
- MPIO:
installp -ac -gX -d . EMC.CLARiiON.aix.rte
installp -ac -gX -d . EMC.CLARiiON.fcp.MPIO.rte
Note: You can also install the files using the AIX smitty command.
Prepare the LUNs to receive data
If you do not want to use a LUN as a raw disk or raw volume, then before AIX can send data to the LUN, you must either partition the LUN or create a file system on it. For information on how to perform these tasks, refer to the AIX operating system documentation.
Citrix XenServer host — Setting up for iSCSI storage
To set up a Citrix XenServer host to use iSCSI storage, perform these tasks:
Configure the iSCSI software initiator
- On the storage system, from the iSCSI Interfaces page in Unisphere, determine the IP address of the system interface (target) to which you want the host initiator to connect.
- Open the XenCenter console.
- Click New Storage at the top of the console.
- In the New Storage dialog box, under Virtual disk storage, select iSCSI.
- Under Name, enter a descriptive name for the virtual disk (Storage Repository).
- To use optional CHAP authentication:
- Check Use CHAP.
- Enter the CHAP username and password.
- Click Discover IQNs.
- Click Discover LUNs.
- Once the IQN and LUN fields are populated, click Finish.
The host scans the target to see if it has any XenServer Storage Repositories (SRs) on it already, and if any exist you are asked if you want to attach to an existing SR or create a new SR.
Configure the iSCSI software initiator for multipathing
If you enable multipathing while connected to a storage repository, XenServer may not configure multipathing successfully. If you already created the storage repository and want to configure multipathing, put all hosts in the pool into Maintenance Mode before configuring multipathing and then configure multipathing on all hosts in the pool. This ensures that any running virtual machines that have LUNs in the affected storage repository are migrated before the changes are made.
- In XenCenter enable the multipath handler:
- On the host’s Properties dialog box, select the Multipathing tab.
- On the Multipathing tab, select Enable multipathing on this server.
- Verify that multipathing is enabled by clicking the storage resource’s Storage general properties.
HP-UX host — Setting up for iSCSI storage
To set up an HP-UX host to use iSCSI storage, perform these tasks:
Download and install the HP-UX iSCSI initiator software
- On the HP-UX host, open a web browser and download the iSCSI initiator software from the HP-UX website.
- Install the initiator software using the instructions on the site or in the documentation that you downloaded from the site.
Configure HP-UX access to an iSCSI interface (target)
To configure access to an iSCSI interface:
- Log into the HP-UX host as superuser (root).
- Add the path for the iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
- Verify the iSCSI initiator name:
iscsiutil -l
The iSCSI software initiator configures a default initiator name in an iSCSI Qualified Name (IQN) format.
For example:
iqn.1986-03.com.hp:hpfcs214.2000853943
To change the default iSCSI initiator name or reconfigure the name to an IEEE EUI-64 (EUI) format, continue to the next step; otherwise skip to step 5.
- Configure the default iSCSI initiator name:
iscsiutil [iscsi-device-file] -i -N iscsi-initiator-name
Note: For more information on IQN and EUI formats, refer to the HP-UX iSCSI software initiator guide.
where:
- iscsi-device-file is the iSCSI device path, /dev/iscsi, and is optional if you include the -i or -N switches in the command.
- -i configures the iSCSI initiator information.
- -N is the initiator name. When preceded by the -i switch, it requires the iSCSI initiator name. The first 256 characters of the name string are stored in the iSCSI persistent information.
- iscsi-initiator-name is the initiator name you have chosen, in IQN or EUI format.
- Verify the new iSCSI initiator name:
iscsiutil -l
- For each iSCSI target device you will statically identify, store the target device information in the kernel registry, adding one or more discovery targets:
iscsiutil [/dev/iscsi] -a -I ip-address/hostname [-P tcp-port] [-M portal-grp-tag]
where
- -a adds a discovery target address into iSCSI persistent information. You can add discovery target addresses only with this option.
- -I requires the IP address or hostname of the discovery target address.
- ip-address/hostname is the IP address or host name component of the target network portal.
- -P tcp-port is the listening TCP port component of the discovery target network portal (optional). The default iSCSI TCP port number is 3260.
- -M portal-grp-tag is the target portal group tag (optional). The default target portal group tag for discovery targets is 1.
For example:
iscsiutil -a -I 192.1.1.110
or, if you specify the hostname,
iscsiutil -a -I target.hp.com
If the iSCSI TCP port used by the discovery target is different than the default iSCSI port of 3260, you must specify the TCP port used by the discovery target, for example,
iscsiutil -a -I 192.1.1.110 -P 5001
or
iscsiutil -a -I target.hp.com -P 5001
- Verify the discovery targets that you have configured:
iscsiutil -p -D
- To discover the operational target devices:
/usr/sbin/ioscan -H 225
ioscan -NfC disk (for HP-UX 11i v3 only)
- To create the device files for the targets:
/usr/sbin/insf -H 225
- To display operational targets:
iscsiutil -p -O
Make the storage processors available to the host
Verify that each NIC sees only the storage processors (targets) to which it is connected:
ioscan -fnC disk
insf -e
ioscan -NfC disk (for HP-UX 11i v3 only)
Verify that native multipath failover sees all paths to the LUNs
- Rescan for the LUNs:
ioscan -NfC disk
insf -e
- View the LUNs available to the host:
ioscan -NfnC disk
- Verify that all paths to the storage system are CLAIMED:
ioscan -NkfnC lunpath
Prepare the LUNs to receive data
- Make the LUN visible to HP-UX.
- Create a volume group on the LUN.
Linux host — Setting up for iSCSI storage
To set up a Linux host to use iSCSI storage, perform these tasks:
Configure Linux iSCSI initiator software
|
Note:
The Linux iSCSI driver gives the same name to all network interface cards (NICs) in a host. This name identifies the host, not the individual NICs. This means that if multiple NICs from the same host are connected to an iSCSI interface on the same subnet, then only one NIC is actually used. The other NICs are in standby mode. The host uses one of the other NICs only if the first NIC fails.
|
Each host connected to an iSCSI storage system must have a unique iSCSI initiator name for its initiators (NICs). To determine a host’s iSCSI initiator name for its NICs, use cat /etc/iscsi/initiatorname.iscsi for open-iscsi drivers. If multiple hosts connected to the iSCSI interface have the same iSCSI initiator name, contact your Linux provider for help with making the names unique.
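As a quick illustration, the initiator name can be extracted from that file with sed. The sketch below works on a sample copy of the file so it is harmless to run; the IQN shown is a made-up example, and on a real host you would read /etc/iscsi/initiatorname.iscsi itself:

```shell
# Build a sample copy of /etc/iscsi/initiatorname.iscsi (hypothetical IQN);
# on a real host, point sed at the actual file instead.
cat > ./initiatorname.iscsi.example <<'EOF'
## DO NOT EDIT OR REMOVE THIS FILE!
InitiatorName=iqn.1994-05.com.redhat:0123456789ab
EOF

# Strip the key and print only the IQN value.
sed -n 's/^InitiatorName=//p' ./initiatorname.iscsi.example
```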
To configure the Linux open-iscsi driver:
|
Note:
The EMC Host Connectivity Guide for Linux on the EMC Online Support website provides the latest information about configuring the open-iscsi driver.
|
- On the storage system, from the iSCSI Interfaces page in Unisphere, determine the IP address of the storage system iSCSI interface (target) to which you want the host initiators to connect.
- For any Linux initiators connected to the iSCSI interface with CHAP authentication enabled, stop the iSCSI service on the Linux host.
- Using a text editor, such as vi, open the /etc/iscsi/iscsi.conf file.
- Uncomment (remove the # symbol) before the recommended variable settings in the iSCSI driver configuration file as listed in the table below:
Table 1. Open-iscsi driver recommended settings

Variable name                                 Default setting   Recommended setting
node.startup                                  manual            auto
node.session.iscsi.InitialR2T                 No                Yes
node.session.iscsi.ImmediateData              Yes               No
node.session.timeo.replacement_timeout        120               120
node.conn[0].timeo.timeo.noop_out_interval    10                larger in congested networks
node.conn[0].timeo.timeo.noop_out_timeout     15

Note: In congested networks you may increase node.session.timeo.replacement_timeout to 600. However, this time must be greater than the combined node.conn[0].timeo.timeo.noop_out_interval and node.conn[0].timeo.timeo.noop_out_time times. The node.conn[0].timeo.timeo.noop_out_interval value should not exceed node.session.timeo.replacement_timeout.
- To start the iSCSI service automatically on reboot and powerup, set the run level to 345 for the iSCSI service.
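The recommended edits from Table 1 can also be scripted. The sketch below rewrites a local copy of the configuration file so it can be tried safely; the path and the shipped defaults are illustrative, and on most distributions the real file is /etc/iscsi/iscsid.conf or /etc/iscsi/iscsi.conf:

```shell
# Work on a local example file, not the live configuration.
conf=./iscsid.conf.example

# Illustrative shipped defaults (see the "Default setting" column above).
cat > "$conf" <<'EOF'
node.startup = manual
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.timeo.replacement_timeout = 120
EOF

# Switch to the recommended values from Table 1.
sed -e 's/^node.startup = manual/node.startup = auto/' \
    -e 's/^node.session.iscsi.InitialR2T = No/node.session.iscsi.InitialR2T = Yes/' \
    -e 's/^node.session.iscsi.ImmediateData = Yes/node.session.iscsi.ImmediateData = No/' \
    "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"

# Show the result.
grep '^node' "$conf"
```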
- Discover and log in to the host to which you want to connect with the iscsiadm command for Red Hat 5 or later or YaST for SuSE 10 or later.
You need to perform a discovery on only a single IP address because the storage system also returns its other iSCSI target, if it is configured for a second iSCSI interface.
- Configure optional CHAP authentication on the open-iscsi driver initiator:
For Red Hat 5 or later
Use the iscsiadm command to do the following:
For optional initiator CHAP:
- Enable CHAP as the authentication method.
- Set the username for the initiator to the initiator’s IQN, which you can find with the iscsiadm -m node command.
- Set the secret (password) for the initiator to the same secret that you entered for the host initiator on the storage system.
For optional mutual CHAP
- Set the username (username_in) to the initiator’s IQN, which you can find with the iscsiadm -m node command.
- Set the secret (password_in) for the target to the same secret that you entered for the iSCSI interface.
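On Red Hat, the iscsiadm settings above correspond to entries like the following in the open-iscsi configuration file. The IQN and secrets shown are hypothetical placeholders; use your initiator's actual IQN and the secrets you configured on the storage system:

```
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:0123456789ab
node.session.auth.password = example_host_secret
# For optional mutual CHAP:
node.session.auth.username_in = iqn.1994-05.com.redhat:0123456789ab
node.session.auth.password_in = example_target_secret
```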
For SuSE 10 or later, use YaST to do the following for the open-iscsi driver initiator:
For optional initiator CHAP:
- Enable incoming authentication.
- Set the initiator CHAP username to the initiator’s IQN, which you can find with the iscsiadm -m node command.
- Set the initiator CHAP password (secret) to the same secret that you entered for the host initiator on the storage system.
For mutual CHAP:
- Enable outgoing authentication (mutual CHAP).
- Set the mutual CHAP username to the initiator’s IQN, which you can find with the iscsiadm -m node command.
- Set the initiator password (secret) for the target to the same secret that you entered for the iSCSI interface.
- Find the driver parameter models you want to use, and configure them as shown in the examples in the configuration file.
- Restart the iSCSI service.
Set up the Linux host to use the LUN
- Find the LUN ID:
- In Unisphere, select .
- On the LUN, select Edit.
- On the Properties window, select to determine the LUN ID.
- On the host, partition the LUN.
- Create a file system on the partition.
- Create a mount directory for the file system.
- Mount the file system.
Solaris host — Setting up for iSCSI storage
To set up a Solaris host to use iSCSI storage, perform these tasks:
Configure Sun StorEdge Traffic Manager (STMS)
- Enable STMS by editing the following configuration file:
Solaris 10 — Do one of the following:
- Edit the /kernel/drv/fp.conf file by changing the mpxio-disable option from yes to no.
or
- Execute the following command:
stmsboot -e
- We recommend that you enable the STMS auto-restore feature to restore LUNs to their default SP after a failure has been repaired. In Solaris 10, auto-restore is enabled by default.
- If you want to install STMS offline over NFS, share the root file system of the target host in a way that allows root access over NFS to the installing host. You can use a command such as the following on target_host to share the root file system on target_host so that installer_host has root access:
share -F nfs -d 'root on target_host' -o ro,rw=installer_host,root=installer_host /
- For the best performance and failover protection, we recommend that you set the load balancing policy to round robin:
load-balance="round-robin"
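On Solaris 10 this policy is typically set in the /kernel/drv/scsi_vhci.conf file; the excerpt below is an illustrative sketch, so consult the Solaris documentation before editing the file on a live system:

```
# /kernel/drv/scsi_vhci.conf (illustrative excerpt)
load-balance="round-robin";
auto-failback="enable";
```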
Configure Solaris access to an iSCSI interface (target)
To configure access to an iSCSI interface:
- Log into the Solaris system as superuser (root).
- Configure the target device to be discovered using SendTargets dynamic discovery.
Example:
iscsiadm modify discovery-address 10.14.111.222:3260
Note: If you do not want the host to see specific targets, use the static discovery method as described in the Solaris server documentation.
- Enable the SendTargets discovery method.
Examples:
iscsiadm modify discovery --sendtargets enable
or
iscsiadm modify discovery -t enable
- Create the iSCSI device links for the local system.
For example:
devfsadm -i iscsi
- If you want Solaris to log in to the target more than once (multiple paths), use:
iscsiadm modify target-param -c <logins> <target_iqn>
where logins is the number of logins and target_iqn is the IQN of the iSCSI interface (target).
Note: You can determine the IQN of the iSCSI interface from Unisphere on the iSCSI Interfaces page.
Prepare the LUN to receive data
- Partition the LUN.
- Create and mount a file system on the partition.
What's next?
You are now ready to either migrate data to the LUN or have the host start using the LUN. To migrate data to the LUN, go to Migrating FC or iSCSI Data to the Storage System.
iSCSI session troubleshooting
If the session cannot be established, or you get unexpected results from the session, follow this procedure:
- Use ping with the IP address to verify connectivity from the host to the target’s IP address.
Using the IP address avoids name resolution issues.
Note: You can find the IP address for the target on the iSCSI Interfaces page in Unisphere.
Some switches intentionally drop ping packets or lower their priority during times of high workload. If the ping testing fails when network traffic is heavy, verify the switch settings to ensure the ping testing is valid.
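Because switches may drop or deprioritize ICMP under load, probing the iSCSI TCP port directly is often a more reliable check than ping. The sketch below is a hedged example: the 127.0.0.1 address is a placeholder so the script is harmless to run, and on a real host you would substitute the iSCSI interface IP from Unisphere:

```shell
# Placeholder target portal; replace with the iSCSI interface IP address.
target=127.0.0.1
port=3260

# Attempt a plain TCP connection to the iSCSI port (bash /dev/tcp probe).
if timeout 2 bash -c "exec 3<>/dev/tcp/$target/$port" 2>/dev/null; then
    echo "port $port reachable on $target"
else
    echo "port $port not reachable on $target"
fi
```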
- Check the host routing configuration in Unisphere.
- On the host, verify that the iSCSI initiator service is started.
Note: The iSCSI service on the iSCSI interface starts when the system is powered up.
- In the Microsoft iSCSI Initiator, verify the following for the target portal:
- IP address(es) or DNS name of the storage system iSCSI interface with the host’s LUNs.
Note: For a host running PowerPath or Windows native failover, the target portal has two IP addresses.
- Port is 3260, which is the default communications port for iSCSI traffic.
- Verify that the iSCSI qualified names (IQN) for the initiators and the iSCSI interface name for the target are legal, globally unique, iSCSI names.
Note: An IQN must be a globally unique identifier of as many as 223 ASCII characters.
For a Linux host initiator — You can find this IQN with the iscsiadm -m node command, which lists the IP address and associated iqn for each iSCSI initiator.
For a Solaris host initiator — You can find this IQN with the iscsi list initiator-node command.
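A quick format sanity check on an IQN can be scripted; the sketch below checks the basic iqn.yyyy-mm.reversed-domain shape and the 223-character limit, not full compliance with the iSCSI naming rules, and the sample name is hypothetical:

```shell
# Hypothetical IQN to check; substitute the name reported by your initiator.
iqn="iqn.1992-04.com.emc:cx.apm00123456789.a0"

# Basic shape: "iqn.", a yyyy-mm date, a reversed domain, optional suffix;
# also enforce the 223-character maximum for an IQN.
if printf '%s\n' "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.*)?$' \
   && [ "${#iqn}" -le 223 ]; then
    echo "IQN format looks valid"
else
    echo "IQN format invalid"
fi
```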
- If you are using optional CHAP authentication, ensure that the following two secrets are identical by resetting them to the same value:
- The secret for the host initiator in the host’s iSCSI software.
- The secret for the iSCSI interface on the storage system.
- If you are using optional mutual CHAP authentication, ensure that the following two secrets are identical by resetting them to the same value:
- The secret for the host initiator in the host’s iSCSI software.
- The secret for the iSCSI interface on the storage system. You can find this secret in the CHAP section of the Access Settings page in Unisphere.