I/O Virtualization to Optimize Data Center Operations
Virtualization is gaining momentum in the IT industry and has become the sweet spot of server and data center consolidation by reducing capital investment. It brings a greater level of agility to the data center through practices such as clustering, partitioning, and workload management. I/O virtualization augments data centers by acting as an abstraction layer: a thin layer that sits between the servers accessing interface cards and the actual cards themselves.
The Quality of Service (QoS) and N_Port ID Virtualization (NPIV) features of I/O virtualization ensure that critical applications receive a guaranteed level of performance even while cards are shared across multiple servers. However, CIOs need to understand that I/O virtualization does not allocate extra bandwidth to servers; it ensures that all of the available bandwidth can actually be used.
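As a concrete illustration of the NPIV feature mentioned above, Linux exposes virtual N_Port creation on a Fibre Channel HBA through sysfs. The sketch below is illustrative only: the host number, WWPN, and WWNN are made-up values, and the write only succeeds on an NPIV-capable HBA.

```shell
# A minimal sketch of NPIV on Linux: creating a virtual N_Port on an
# FC HBA via the fc_host sysfs class. All identifiers are hypothetical.
FC_HOST="host5"                                 # assumed FC host adapter
WWPN="50:01:43:80:24:2b:10:00"                  # made-up virtual port name
WWNN="50:01:43:80:24:2b:10:01"                  # made-up virtual node name
VPORT="/sys/class/fc_host/$FC_HOST/vport_create"

if [ -w "$VPORT" ]; then
    # The fabric then sees a second, independent port ID on the same HBA.
    echo "$WWPN:$WWNN" > "$VPORT"
else
    echo "no NPIV-capable FC host at $VPORT on this machine"
fi
```

Each virtual port gets its own fabric login and zoning, which is what lets QoS and access control follow the workload rather than the physical card.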
Understand Which Servers Are Suitable For Your I/O Virtualization
Enterprises should identify the right servers before they start sharing cards, since every server competes for bandwidth. With the shared-card method (for example, SR-IOV), the main concern is that one server may starve the others by consuming their I/O resources. A single server therefore cannot be allowed to monopolize the bandwidth, or the other servers will not be sustainable on the network.
Instead, organizations can identify their high-demand servers and connect them individually, or give them dedicated I/O cards. Some I/O virtualization systems can also dynamically assign a specific card to a specific server. To get started safely with I/O virtualization, CIOs can use it as a depository for surplus cards, since most servers employ redundant network and storage connections. For instance, adding redundancy to a 10-server rack can require up to 20 extra cards, which could add up to $40,000.
When a network or storage path fails, all that is lost is a card, its cable connection, or an SFP connector on the switch. The server can then map in a spare card from the I/O virtualization pool and continue operating until the primary card is restored.
Ways to Share Cards Using I/O Virtualization in the Data Center
• Once I/O virtualization is in place and the process used to virtualize NICs and host bus adapters (HBAs) is understood, servers can share interface cards. When an expensive card is needed only at certain points of the day, servers can take turns using it individually.
• CIOs can use multi-port cards with I/O virtualization, addressing each port individually. This approach reduces costs and makes efficient use of bandwidth; because each server has its own port and its own bandwidth, no server's traffic contends with another's.
• The final approach is to select NICs that support Single Root I/O Virtualization (SR-IOV), which lets multiple hosts share a single card. This method is considered ideal for systems running under I/O virtualization. SR-IOV-enabled cards share bandwidth across multiple servers, and cards such as 10 Gigabit Ethernet (10GbE) and next-generation Fibre Channel over Ethernet (FCoE) adapters can be used. SR-IOV cards allow different servers to draw on the card at their respective peak times.
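To make the SR-IOV option above more concrete, on Linux an SR-IOV-capable card exposes its virtual functions (VFs) through sysfs. The sketch below is a minimal, hedged example: the PCI address and VF count are assumptions for illustration, and the write only works on a host with a capable card.

```shell
# A minimal sketch of enabling SR-IOV virtual functions on Linux via sysfs.
# The PCI address and VF count below are hypothetical.
PF="0000:03:00.0"                                # physical function (assumed address)
VFS_PATH="/sys/bus/pci/devices/$PF/sriov_numvfs"
TOTAL_PATH="/sys/bus/pci/devices/$PF/sriov_totalvfs"
NUM_VFS=4                                        # virtual functions to expose

if [ -w "$VFS_PATH" ]; then
    # The card advertises its VF limit; requests must not exceed it.
    echo "card supports up to $(cat "$TOTAL_PATH") VFs"
    echo "$NUM_VFS" > "$VFS_PATH"                # each VF appears as its own PCIe device
else
    echo "no SR-IOV-capable device at $PF on this machine"
fi
```

Each VF shows up as an independent PCIe device that can be handed to a different server or virtual machine, which is exactly the shared-card model described above.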
How to Connect NICs with Servers
Once the cards to be shared are selected, the next step is to connect the servers to the data center's I/O virtualization system. Ethernet, InfiniBand, or PCIe can be used, as follows:
• Since the cards being shared are PCIe cards, PCIe seems at first to be the perfect fit for connecting them.
• InfiniBand is designed to be networked, and although it cannot transport PCIe traffic natively, it offers bandwidth and performance on par with PCIe. Its adoption, however, has been slow outside of back-end interconnects.
• Ethernet technology is omnipresent and designed to be networkable, and future card-sharing systems can build on its features.
Connection criteria play an important role in choosing among these methods, and each should be examined beforehand. It is up to each organization to choose the method that best improves performance and network scalability. A PCIe implementation can be the least expensive, with Ethernet as the alternative for connecting the servers.
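Before committing to one of the connection methods above, it helps to survey what the host can already see. The sketch below uses standard `lspci` on Linux; the device address is hypothetical, and the output naturally varies by machine.

```shell
# Survey the PCIe bus before choosing a connection method.
# Count Ethernet and Fibre Channel controllers visible to the host.
ADAPTERS=$(lspci 2>/dev/null | grep -c -i -E 'ethernet|fibre channel' || true)
echo "network/storage adapters found: ${ADAPTERS:-0}"

# For a candidate card (hypothetical address), check whether it advertises
# the SR-IOV capability in its extended PCIe configuration space.
if lspci -s 03:00.0 -vv 2>/dev/null | grep -q -i 'single root'; then
    echo "03:00.0 advertises SR-IOV"
else
    echo "03:00.0 does not report SR-IOV (or device absent)"
fi
```

A quick inventory like this shows whether the existing cards can participate in a shared-card design at all, or whether new SR-IOV-capable adapters belong in the budget.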