Organizations are embracing data as a powerful tool to enhance operational efficiency and drive business growth. By integrating big data effectively and using analytics to extract insights, they can shape an exciting future of new business models and services. Big data is omnipresent, with a myriad of benefits, from fraud analytics in the government sector and sales strategies in the retail industry to patient-centric care in healthcare. Even as data drives business disruption, it is exploding, with information pouring in from different digital platforms. According to IDC, the global datasphere (the amount of data we create, capture, replicate, and consume) is expected to grow to 175 zettabytes by 2025. Meanwhile, a 2018 Gartner Group study shows that data growth has far outstripped compute growth, resulting in an imbalance in system architectures. Computer architectures have changed little over the past 50 years, and no one could have predicted then the vast data sets organizations employ today. The archaic model of moving mountains of data in and out of CPUs is no longer feasible.
Current hardware systems struggle to handle cache-unfriendly applications that require access to massive amounts of data, such as sparse math operations, data analytics, and graph algorithms. Big data analytics, for instance, needs access to extensive data with no predictable patterns and has radically different computational resource needs, owing to the irregular memory access patterns these applications exhibit and the way remote memory accesses are performed. The cluster technology solutions available in today's marketplace focus on synthesizing or reducing data to fit within the conventional cache design.
This ‘fast data’ approach of considering sample data instead of the whole data set introduces bias and fails to represent the complete truth.
Founded by industry veterans with deep expertise, New York-based Lucata Corporation recognized these data-intensive challenges and the shortcomings of conventional computers. The company approached the problem through a new lens: why move data when we don't have to? Turning the existing computing architecture upside down, Lucata has developed a groundbreaking, scalable architecture that takes computing to the data instead of moving data across the network. "Our architecture accesses data through narrow memory channels and migrating threads, eliminating eighty percent of inter-node communication, increasing efficiency to handle massive data sets, and providing unlimited in-memory capability through a single shared memory system," says Michael Maulick, President and CEO of Lucata Corporation. The company's design enables querying the entire dataset without traversing each data point, thereby removing the compromises of 'fast data.' It facilitates breakthroughs in the analysis of large datasets, such as big data applications, by reducing the extensive pre-processing needed to make complex queries possible on conventional clusters. This yields orders-of-magnitude gains in performance and scalability over today's cluster and GPU technology.
To explain how the architecture works, Maulick sketches a real-world analogy for conventional computers.
Consider an e-retailer with numerous warehouses across the U.S. but only one shipping location. To find a product, the e-retailer has to send a request to every warehouse and wait for responses. Once the warehouse with the product is identified, the whole rack of products is sent to the central shipping location, much as required data is moved to the processor along with the unnecessary data used to fill cache lines. Lucata's architecture simplifies this highly inefficient process: the e-retailer can identify the warehouse holding the required product and ship it directly from there instead of first routing it through the central shipping location.
Revolutionary Innovation Coupled with Practical Programming
Powering Lucata’s innovative solutions is its multi-threading technology, coupled with a shared memory single system image design that does not rely on caches or buses. This scalable system consists of nodes and memory in a single shared pool with unlimited in-memory capability, unlike conventional computers. With Lucata’s migratory thread architecture, reading a memory location on a different node causes the program context to move to the node containing the data (at the locale of the reference) through a narrow memory channel instead of sending a read across the network. This proves to be a very efficient approach when more than one reference occurs at a locale.
Lucata's threads improve overall network utilization because they never stall for long periods waiting on remote reads. With one-way network traffic, the design also simplifies the network and reduces wasted bytes by eliminating round-trip read and response messages, in contrast to cluster technology's four-way packet communication, which heavily consumes network bandwidth and slows execution. Remote writes can be performed directly or via migrations, under programmer (compiler) control, eliminating the need for caches and cache coherency. Data is still imported into the system for cleansing but is not arbitrarily reduced, delivering potentially less skewed results. This streamlining frees data scientists to explore insights from a wider range of algorithms.
Along with this revolutionary technology, Lucata wanted to provide a software ecosystem that is easy to program, and chose a scalable shared-memory single system. The environment, combined with the migratory thread technology, overcomes the performance-bound cache-coherency cost and complexity overheads that limited previous generations. Though a new computer architecture, Lucata's system runs on industry-standard Linux with a set of libraries and a compiler that adds only three commands to C++. Unlike GPU programming with its endless instructions, everything is handled by the hardware, and only the additional C++ commands are needed. Lucata also provides an API to tap performance-optimized parallel C++ or OpenMP.
Lucata's architecture is ideal for applications such as threat intelligence, personalized medicine, fraud detection, and machine learning across multiple industries. Its scalability also opens a possible easy road toward advanced future technologies such as quantum computing. Lucata provides fraud detection solutions to finance industry clients, helping them identify fraudulent transactions. Enabled by graph-type technology, the solutions weed out these transactions within a few milliseconds, a feat the company says no other system in today's marketplace can match. Currently, Lucata serves the U.S. intelligence community, harnessing the power of data to determine unknown relationships. These solutions can, for instance, enable blockchain monitoring: consider massive amounts of money being deposited to a blockchain ID. By monitoring the transactions, the Lucata system can determine whether an illicit drug group is involved and identify characteristics that help the intelligence community go after them. "We empower our clients to operate on the whole data set, without introducing any bias, to find those unknown relationships, increasing operational efficiencies, process optimization, and predictive analytics," remarks Maulick.
The Culmination of Years of Extensive Research
Lucata's architecture is the result of an extraordinary vision and relentless effort by its nationally recognized founders, Dr. Peter Kogge, Dr. Jay Brockman, and Dr. Ed Upchurch. To tackle these data-intensive challenges, they brought world-renowned computer architects on board. Leveraging this deep expertise in the computing world, Lucata succeeded in combining otherwise disconnected technologies into a novel approach with myriad benefits, something many industry leaders had failed to achieve. "Most of the time there are elements that already exist, and a new catalyst can put multiple innovations together and commercialize them. And that's what we did. We knew the pain points through our years of experience, and together we were able to put the pieces together and complete the puzzle," remarks Maulick. With innovation at its core, Lucata's expert team has been developing an exascale-capable computing architecture designed specifically to tackle the big data applications that are choking today's supercomputers. The company holds numerous patents, with more pending, for proprietary technologies explicitly designed to address data-intensive, real-time big data analytics.
With unprecedented capabilities and ingenious solutions, Lucata has carved a unique position for itself in the computing architecture space. For the future, the company plans to introduce the latest version of its product (which is ten times faster) to the commercial world, especially the finance industry, by January 2021. Without a doubt, Lucata is poised to reinvent the world of computing.
Intel Joins Forces with Lucata
“Intel has partnered with Lucata to use our Stratix-10 FPGAs as the critical computing silicon enabling Lucata’s unique system architecture. Intel has moved beyond pure CPUs to build a portfolio of XPUs enabling heterogeneous computing architectures to solve a new class of problems. Lucata has achieved a hardware solution that scales to data and problem set sizes that were impractical to tackle previously. Several businesses and agencies that have tested the prototypes are eager to put the equipment to use. We are proud of the partnership with Lucata and the benefits this will bring to the industry,” says Greg Ernst, EVP of PSG, Intel.