The easiest way for me to describe how ViPR does placement is to walk you through a simple block provisioning use case. To provision a block volume, a provisioning request is made to ViPR that contains a virtual pool, a virtual array, a host, and the size of the LUN. When such a request is received, ViPR uses a number of policies to identify the right physical array and array pool to service it. But even before a provisioning request is made, ViPR Controller has a list of Array Pools that match the definition of the Virtual Pool, i.e., a set of attributes that describe the desired characteristics of the storage to be provisioned. The Virtual Array definition further restricts the Array Pools that can be considered for the provisioning request. Thus, when ViPR receives a request to provision a block volume, it already has a list of candidate Array Pools. ViPR then proceeds to eliminate Array Pools that don't have enough physical space or are oversubscribed (in the case of thin provisioning). The Array Pools left in the list are then examined further to select the best candidate: the Array Pool that has the most available capacity, is least subscribed (for thin pools only), and has the fewest volumes provisioned.
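To make the selection logic concrete, here is a minimal sketch of the filter-then-rank step in Python. The data structure, field names, and the oversubscription limit are all illustrative assumptions, not ViPR's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ArrayPool:
    name: str
    free_capacity_gb: float   # physical space still available
    subscribed_pct: float     # thin-provisioning subscription level
    volume_count: int         # volumes already provisioned
    thin: bool

# Hypothetical oversubscription ceiling for thin pools.
MAX_SUBSCRIPTION_PCT = 300.0

def select_pool(candidates, requested_gb):
    """Eliminate pools that cannot hold the volume, then rank the rest."""
    viable = [
        p for p in candidates
        if p.free_capacity_gb >= requested_gb
        and (not p.thin or p.subscribed_pct < MAX_SUBSCRIPTION_PCT)
    ]
    if not viable:
        return None
    # Prefer the most free capacity, then the lowest subscription,
    # then the fewest volumes already provisioned.
    return min(
        viable,
        key=lambda p: (-p.free_capacity_gb, p.subscribed_pct, p.volume_count),
    )
```

The two phases mirror the description above: a hard filter removes pools that are full or oversubscribed, and a ranking over the survivors picks the single best pool.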
The second step, which depends on the type of array, is finding the right front-end ports and engines to complete the provisioning process. The goal of this selection process is that, over time, port allocation is balanced across all directors and engines. The port selection takes into account the maximum number of paths set in the Virtual Pool. For example, on a VMAX array with the number of paths set to two, two ports are selected such that each is connected to a different VSAN and they are on different directors on different engines. Similarly, on a VNX, ViPR allocates two front-end ports per VNX Storage Processor on separate VSANs, creating an active and a passive path.
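The VMAX-style port selection described above can be sketched as follows. This is a simplified illustration under assumed data structures (the real selection also tracks historical allocations to balance load over time, which is omitted here):

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Port:
    name: str
    vsan: str       # VSAN the port is connected to
    director: str   # front-end director hosting the port
    engine: str     # engine containing that director

def select_ports(ports, max_paths):
    """Pick `max_paths` ports that span distinct VSANs, directors, and engines."""
    for combo in combinations(ports, max_paths):
        vsans = {p.vsan for p in combo}
        directors = {p.director for p in combo}
        engines = {p.engine for p in combo}
        # All three attributes must be unique across the chosen ports.
        if len(vsans) == len(directors) == len(engines) == max_paths:
            return list(combo)
    return None
```

With `max_paths` set to two, as in the VMAX example, the function returns two ports on different VSANs, different directors, and different engines, or `None` if no such pair exists.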
In the upcoming ViPR 1.1 and ViPR 2.0 releases, we will continue to enhance our placement algorithms. For example, we are planning to add bandwidth-based placement, i.e., using port usage information when selecting the front-end ports. We are also planning to add more flexibility around defining the number of paths from the storage to the host, such as a minimum number of paths and paths per initiator.