IBM Steals The Show at GPU Technology Conference
SAN JOSE, CA: At the GPU Technology Conference, hosted by chip maker Nvidia in San Jose, enterprise technology firm IBM is showing off a GPU-accelerated machine for data clustering. IBM claims that injecting GPU technology into data analytics can overcome major existing barriers. According to Nvidia, using GPUs radically reduces the time needed to generate insights and makes it possible to perform difficult analytics in a cost-efficient manner.
Speaking about IBM’s product, Sumit Gupta, a product management executive at Nvidia, explains that the machine deploys a computational technique called segmentation, or clustering, which identifies non-obvious patterns in data by analyzing hundreds of different dimensions. Retailers, for instance, can use it to group their customers into segments with similar behavior, create customized products, and run more effective targeted marketing programs.
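To make the segmentation idea concrete, here is a minimal, single-machine k-means clustering sketch in pure Python. It is only an illustration of the general technique; the two-feature customer data and cluster count are hypothetical examples, not IBM's actual Hadoop-scale workload.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group points into k clusters by repeatedly assigning each point
    to its nearest centroid and recomputing each centroid as the mean
    of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign the point to the nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # recompute the centroid as the mean of its members
                centroids[i] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, clusters

# Hypothetical customers described by (visits per month, average spend);
# the two behavioral segments separate cleanly into two clusters.
customers = [(2, 20), (3, 25), (2, 22), (20, 300), (22, 310), (21, 305)]
centroids, clusters = kmeans(customers, k=2)
```

A production system like the one IBM demonstrated would run a distributed variant of such an algorithm (e.g., via Mahout on Hadoop) over far more dimensions and records, with the distance computations being the part that benefits from GPU acceleration.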
Lee Bell of the Inquirer reports that IBM's exhibit, a GPU-accelerated machine for data clustering, uses the open-source software frameworks Hadoop and Mahout. IBM claims it allows retailers, entertainment websites, and internet companies to make precise, timely suggestions for new products and services.
"IBM is demonstrating the use of GPU accelerators on a distributed computing system, required for such an enormous data set, for clustering using Hadoop. With GPU accelerators working alongside IBM Power CPUs, the demo runs eight times faster than with a Power system without GPUs," Gupta adds.
At the same conference, Nvidia and IBM announced the development of an interconnect, NVLink, that will be integrated into future graphics processing units. It is designed to ensure faster data flow between the CPU and GPU.
Nvidia's next generation of GPUs, due in 2016, is based on the Pascal GPU architecture and comes in at one-third the size of the standard boards used today. Its three key new features are Stacked Dynamic Random Access Memory (DRAM), Unified Memory, and NVLink.
Stacked DRAM chips access data from memory more quickly, boosting throughput and efficiency and allowing the building of more compact GPUs. Stacked DRAM will provide several times greater bandwidth, more than twice the memory capacity, and improved energy efficiency.
Unified Memory makes it quicker and easier to take advantage of what both GPUs and CPUs do best: it gives the CPU access to the GPU's memory, removing the need to allocate resources between the two. NVLink allows data to flow at more than 80 GB per second, compared with the 16 GB per second available now.
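As a back-of-the-envelope illustration of what that bandwidth difference means, the following sketch compares transfer times at the two rates quoted above. The 64 GB data-set size is a hypothetical example chosen for the arithmetic, not a figure from the article.

```python
# Transfer rates cited in the article, in GB per second
PCIE_GBPS = 16    # bandwidth available now
NVLINK_GBPS = 80  # bandwidth promised by NVLink

data_gb = 64  # hypothetical data-set size in GB

pcie_seconds = data_gb / PCIE_GBPS      # 64 / 16 = 4.0 seconds
nvlink_seconds = data_gb / NVLINK_GBPS  # 64 / 80 = 0.8 seconds

# The ratio of the two rates gives the transfer-time speedup: 80 / 16 = 5x
speedup = NVLINK_GBPS / PCIE_GBPS
```

For workloads that repeatedly shuttle large data sets between CPU and GPU memory, that five-fold reduction in transfer time is where an interconnect upgrade pays off.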