Mellanox to Assist Cambridge University with Its OpenStack Implementation
SUNNYVALE, CA: Mellanox Technologies, a provider of end-to-end interconnect solutions for data center servers and storage systems, announced that the University of Cambridge (UOC) has selected Mellanox's end-to-end Ethernet interconnect solution, which includes Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs, and LinkX cables, for its OpenStack-based scientific research cloud.
Data has become the main focus of the High Performance Computing (HPC) industry. Firms across the world are harnessing the capabilities of Big Data and the Internet of Things (IoT) in combination with HPC to create competitive advantage and opportunities in the research and development arena.
Mellanox’s cutting-edge networking solutions will enable UOC to converge HPC and cloud through high-speed cloud networks with 25/50/100Gb/s throughput. UOC’s Research Computing group concentrates on domains such as HPC and high-performance data analytics, along with web services, IaaS, and cloud-based storage models for researchers.
As it transforms the way it delivers Research Computing Services, UOC is adopting an agile cloud service model based on OpenStack. Mellanox's technology enables the university to bring traditional HPC interconnects, built on Mellanox InfiniBand solutions, closer to high-performance networking based on Mellanox RDMA-capable Ethernet, and to drive convergence between the two.
“The new generation of analytics-based research, with access to unprecedented volumes of data, coupled with the need to provide quick and secure access to computing and storage resources to the research community in and beyond UOC, has fueled the momentum behind an OpenStack cloud tailor-made for scientific research,” said Chloe Jian Ma, Senior Director of Cloud Market Development, Mellanox. “Mellanox's Ethernet interconnect solution enables UOC to deploy a high-throughput, low-latency cloud network fabric and leverage advanced offload and acceleration capabilities such as SR-IOV, RDMA, and VXLAN offload that mitigate virtualization penalties.”