'Smart Data Acceleration (SDA) Research Program' to Crack Data Center Issues

By CIOReview | Thursday, September 24, 2015

SAN LEANDRO, CA: Data centers today are constantly fraught with issues related to performance management, capacity planning, energy consumption, system failure monitoring, legacy reporting systems and more. As a solution to these issues, Rambus announces the ‘Smart Data Acceleration (SDA) Research Program’, which explores new architectures for servers and data centers optimized for rack-level Big Data computation. By moving computation closer to the data, the program aims to deliver significant improvements in performance and power efficiency.

New patterns of data access have pushed traditional server and data center architectures out of balance, resulting in low CPU utilization due to resource shortages. The imbalance between processor cores and memory capacity causes servers to run out of memory earlier than expected. Consequently, the CPU must bridge a large latency and bandwidth gap to access large data sets from the disk subsystem, leading to decreased system performance and power efficiency, and increased Total Cost of Ownership (TCO).

“Modern servers are out of balance with today’s needs – data centers are under stress due to escalating demands of real-time access to large amounts of information driven by Big Data and new applications,” says Laura Stark, Senior Vice President and General Manager, Emerging Solutions, Rambus.

The Rambus SDA Research Program is positioned as a key investment toward next-generation data centers, targeting use cases that include real-time risk analytics, ad serving, neural imaging, transcoding and genome mapping.

The program investigates system architectures built from software, firmware, FPGAs and memory. The platform also serves as a testing environment for new methods of optimizing and accelerating data analytics on extremely large data sets.

“This research platform focuses on architectures that offload computing closer to very large data sets at multiple points in the memory and storage hierarchy,” adds Stark.