IT Hardware Selection - One Vendor or Many?
Choosing the right hardware for a data center requires careful examination of the available infrastructure approaches. An organization's size, technical requirements, in-house expertise and budget determine whether it should choose a simpler but more expensive converged (pre-packaged) system that is easy to set up and roll out, or a more flexible approach involving multiple vendors. A converged infrastructure is a vendor-defined set of products delivered as a unit to run a workload; it usually arrives as a single product built to the customer's order. Beyond the organization's requirements, building a configuration from scratch calls for an IT team with solid integration knowledge, particularly for testing.
Pre-Packaged or Different Components
Buying components from different vendors and blending them yourself is generally less expensive, saving on CapEx compared with buying a converged product. Purchasing from multiple vendors also helps avoid vendor lock-in, and each component can be selected on the basis of its own requirements and viability, provided some level of system integration can be done in-house. However, this approach increases installation time.
On the other hand, buying a pre-structured package reduces the time needed to bring the hardware up while easing the IT management burden. Be aware, though, that operating a converged infrastructure will cost your organization both time and money over the years of operation. The pre-packaged strategy reaches its logical extreme in the containerized data center, where the vendor delivers and installs large amounts of pre-tested gear; this is attractive precisely when your IT team lacks hardware integration skills. The plug-and-play approach thus looks promising, whether it is an entire data center in a container or a single converged infrastructure unit. It also guarantees that the components will work together, are fully accounted for, and come with a basic setup guide for users.
However, with converged infrastructure, enterprises need to examine tuning and optimization carefully: the tuning the vendor applies may not suit the organization and may deviate from its requirements. Furthermore, converged systems are relatively inflexible in configuration, especially in drive-to-system ratios and LAN ports per server.
Currently, converged infrastructures are losing their appeal because of the high degree of standardization in the industry, driven by commodity off-the-shelf (COTS) hardware. Emerging hardware is more like Lego: the different elements of the system are highly interchangeable.
Large cloud service providers, with their financial muscle, can specify the configurations in their containers from scratch. Small and medium-sized IT organizations purchase far fewer systems, so they have much less leverage to dictate configurations. Converged hardware usually carries a significant premium which, viewed as an integration cost, can be a fair price compared to performing the integration in-house or paying a third-party integrator to assemble the unit.
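The premium-versus-in-house trade-off above can be sketched with rough arithmetic. Every figure below (component cost, markup, labor rate, hours) is an illustrative assumption for the sake of the comparison, not real vendor pricing:

```python
# Illustrative sketch: converged-infrastructure premium vs. the cost of
# integrating the same components in-house. All numbers are assumptions.

def converged_premium(component_cost: float, premium_pct: float) -> float:
    """Extra cost of buying pre-integrated, as a markup over components."""
    return component_cost * premium_pct

def in_house_cost(rate: float, build_hours: float,
                  test_hours: float, tooling: float) -> float:
    """Rough labor-plus-tooling cost of doing the integration yourself."""
    return rate * (build_hours + test_hours) + tooling

components = 250_000  # assumed component cost, USD
premium = converged_premium(components, 0.20)            # assumed 20% markup
diy = in_house_cost(rate=120,                            # assumed USD/hour
                    build_hours=300, test_hours=150,
                    tooling=10_000)

print(f"Converged premium:     ${premium:,.0f}")
print(f"In-house integration:  ${diy:,.0f}")
# When the premium lands near (or below) the in-house figure, the
# pre-integrated unit is a fair trade, as the article argues.
```

With these particular assumptions the premium comes out lower than the in-house labor bill, which is the article's point: the markup is only unfair if you ignore what integration would cost you anyway.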
Depending solely on the traditional vendors' products is not entirely advisable, as the industry is shifting to less costly servers from the same manufacturers that build gear for cloud providers. This gear sells in the millions of units each year, matches the quality of anything else on the market, and is available at low prices. Another challenge to the converged approach is the software-defined movement. With white-box switches and bare-metal switches and storage, SDN will resolve many of the integration issues in the market, though it must still adhere to inter-communication standards. Modest issues like these are what keep the more expensive converged products alive.
Plainly, if you want to avoid the toil of integration, a converged infrastructure sounds more appealing. This holds especially true when building clusters for ruggedized needs such as the military or the oil and gas industry. Usually, though, the real issue is the value and price of pre-integration.
Original design manufacturers (ODMs), used as a baseline here, add value through the pre-integration and testing process. Value-added resellers and integrators arguably provide value to purchasers as well, so the extra cost is offset by convenience, expertise and process quality. Fitting a cluster of drives into carriers and bringing a box up is a tough job, and not every IT team has the skills and tools to debug and test the units. ODMs offer pre-integration services too, structured around their own converged infrastructures, and they tend to be flexible on items such as drives and add-in cards. This means an organization is not locked into a vendor the way it might be with a traditional single-vendor converged system.
Strikingly, blade servers went down this path of pre-integration years ago, aiming to address the same issues, but their inflexibility and high cost prevented wide acceptance. Converged products are a better option than blades, yet similar issues remain. Ultimately, pre-integrated systems are most promising for companies with little hardware integration skill, and for companies that want to work with a single vendor.
Given that value propositions are changing and CEOs still treat cost as a vital consideration, ODM components will shine in the market for now, though ODM converged systems could change that picture.
Vendor lock-in is a pervasive issue, especially for data center servers. Self-contained servers arrive with the CPU and supporting chipset, memory, storage and NICs, letting them communicate with the data center and the outside world. The mainframe is highly proprietary, with specialized connectors for expansion such as supplementary storage. With a custom unit, by contrast, the only standardization needed is at the NIC level.
Servers communicate using standardized cables and plugs. The problem with self-contained servers is their architecture, which limits availability: if any part of the server fails, the whole unit needs repair or replacement. Hence, data centers spend a good amount of money on high availability for this hardware.
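The arithmetic behind that high-availability spend is simple: redundant replicas drive the chance of a total outage down geometrically. A minimal sketch, assuming independent failures and an illustrative 1% per-server failure probability (not a measured figure):

```python
# Why data centers pay for redundancy: the probability that ALL n
# independent replicas are down at once is p ** n. The 1% figure below
# is an illustrative assumption, not real reliability data.

def pool_unavailability(p_fail: float, replicas: int) -> float:
    """Probability that every replica in the pool has failed,
    assuming failures are independent."""
    return p_fail ** replicas

p = 0.01  # assumed per-server failure probability
for n in (1, 2, 3):
    print(f"{n} replica(s): {pool_unavailability(p, n):.6%} chance of total outage")
```

Going from one server to two cuts the outage probability from 1% to 0.01% under these assumptions, which is why pooled, redundant designs (discussed below for SANs and commodity clouds) justify their cost despite buying "extra" hardware.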
Deconstructing data center servers raises its own proprietary conundrum. As storage moves into a separate environment, storage area networks (SANs) create a shared pool of highly available storage, but this flexibility comes with a sizeable price tag. Very few SANs are compatible with other vendors' kit, adding to the vendor lock-in problem. Blade computing also addresses the proprietary nature of the self-contained data center server: data centers can purchase separate server, storage and network blades and assemble them into flexible servers.
There is a shortcoming to this approach. You need to know exactly what you are building and how, or errors in configuration and chassis engineering can create major hot spots and cooling failures, compounding problems. Each chassis is a design proprietary to its vendor, so even if the chassis servers fall short of requirements, staying with the current vendor tends to be much cheaper than adding new servers from another.
When data centers opt for commodity equipment pooled with resources on a cloud platform, single points of failure are removed. This approach works well for service providers and IT shops with lighter workloads, and when little hardware tuning is required to achieve good performance.
However, when enterprises prefer converged infrastructure servers, an engineered system of components is pre-configured to provide high performance. Examples from well-known data center vendors include VCE Vblock, Cisco UCS, Dell Active Systems, IBM PureFlex and HP ConvergedSystem.
Emerging IT systems should be able to build fabrics to networks and storage using other vendors' components, such as switches, rather than being locked into the engineered-system vendor.