OpenCAPI Blog: Post 1

Datacentric Architectures

Molex/Nallatech Leverages OpenCAPI for 200GBytes/s of Hyperconverged NVMe Storage Bandwidth
By Allan Cantle

Over the last decade, the computing industry has managed to deliver application performance improvements and better energy efficiency for its customers by embracing parallelism, co-processor acceleration and techniques that bypass and unburden the CPU. These approaches have worked on the premise of maintaining the CPU centric nature of the server while effectively adding data centric enhancements.

To maintain this rate of incremental improvement, the industry is now embracing many more system level enhancements to the fundamental computing architecture, and the CPU is becoming an important member of a fundamentally data centric architecture rather than sitting at its heart. With this architectural shift, the network fabric is becoming the critical piece at the center, as evidenced by the plethora of new fabric standards including Omni-Path, NVLink, OpenCAPI, Gen-Z, CCIX and Infinity Fabric, to name a few. Each of these fabrics claims to solve either a piece of, or all of, the communication requirements of future data centric architectures.

OpenCAPI is enjoying the early mover advantage as an excellent open standard conduit, both metaphorically and physically, in facilitating this data centric industry shift. This becomes even more important when you realize that the industry cannot leave behind CPU centric legacy software that will need to continue running for many decades to come.

It is critical to understand that OpenCAPI is singularly focused on being the best coherent, low latency and high bandwidth (25GBytes/s Tx & 25GBytes/s Rx) interconnect for the hyperconvergence of data centric architectural pieces within a node. Consequently, it is looking for a complementary fabric to support the ingress and egress of data to and from the node; this will be a topic for a later blog. OpenCAPI based hyperconverged solutions must also become more programmable, in a similar vein to those developed earlier on CAPI such as the CAPI SNAP (Storage Networking & Acceleration Programming) framework.
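
To make that programmability concrete, the sketch below shows the general shape of a host-side offload flow of the kind such a framework aims to provide. It is a minimal illustration only: every function name, the "nvme_scan" action and the stubbed behaviour are assumptions invented for this example, not the real CAPI SNAP or OpenCAPI API.

```c
/*
 * Minimal sketch of a host-side offload flow in the spirit of a SNAP-like
 * framework. All names below (accel_attach_action, accel_run, "nvme_scan")
 * are hypothetical, and the "accelerator" is stubbed in software so the
 * example compiles and runs; this is not the real SNAP or OpenCAPI API.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct job_desc {
    const void *src;    /* host buffer the action would read coherently  */
    void       *dst;    /* host buffer the action would write coherently */
    uint64_t    bytes;  /* payload size                                  */
};

/* Stub "framework": a real FPGA action would walk these virtual addresses
 * itself over the coherent link; here we simply copy in software.        */
static int dummy_action;
static void *accel_attach_action(const char *name) { (void)name; return &dummy_action; }
static int accel_run(void *action, struct job_desc *job)
{
    (void)action;
    memcpy(job->dst, job->src, job->bytes);
    return 0;
}

int main(void)
{
    size_t bytes = 1 << 20;                   /* 1 MiB example payload */
    char *src = malloc(bytes), *dst = malloc(bytes);
    memset(src, 0xA5, bytes);

    void *action = accel_attach_action("nvme_scan");  /* hypothetical action */

    /* The descriptor carries ordinary virtual addresses: with a coherent,
     * low latency link there is no pinned DMA buffer or staging copy.    */
    struct job_desc job = { .src = src, .dst = dst, .bytes = bytes };
    if (accel_run(action, &job) != 0)
        fprintf(stderr, "offload failed\n");
    else
        printf("offload complete: %zu bytes\n", bytes);

    free(src);
    free(dst);
    return 0;
}
```

The point of the pattern, which OpenCAPI's coherence makes practical, is that the accelerator operates directly on application memory rather than on staged copies.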

Nallatech is a pioneer of data centric computing using FPGAs, where computational functions are built around flowing data streams. It has 24 years of experience in successfully helping customers to migrate and deploy data centric heterogeneous architectures featuring FPGA technology. OpenCAPI was designed to leverage the strengths of FPGA architectures and minimize the impact of their weaknesses. Figure 1 shows a block diagram illustrating Nallatech's perspective on how the OpenCAPI bus sits at the heart of enabling the true emergence of data centric architectures.

Figure 1: OpenCAPI enabling Data Centric Architectures through a Hyperconverged & Disaggregatable Architecture

Critical to this industry transformation is the open collaboration of all the industry's experts with their differing skillsets. This openness, especially at the interface level, will help to ensure that the best ideas win out and that everyone can innovate around these new standards to deliver the best solutions to the industry's customer base, including the essential software infrastructure stacks that will make this technology easily accessible to application developers.

Building on Nallatech's data centric heritage, Molex & Nallatech bring decades of experience in tackling complex data centric problems, ranging from high performance data analytics (HPDA) applications such as video analytics & AI to classical memory bound HPC problems like seismic migration algorithms. These new system level solutions, based around OpenCAPI, will deliver over 5x performance gains at power levels that realistically begin to approach the DOE's 20MW Exascale target.

Additionally, Nallatech will leverage OpenCAPI to ensure that valuable memory resources can be effectively shared with the CPU without breaking the essential support for the legacy CPU centric code base.

Come by the OpenCAPI, Molex & Nallatech booths (#1587-#1589, #1263 & #1362), where we will be showcasing how our Sawmill FSA (Flash Storage Accelerator) development platform brings up to 200GBytes/s of hyperconverged accelerated storage bandwidth to the Google/Rackspace Zaius/Barreleye-G2 POWER9 OCP platform. The Sawmill FSA is designed to natively support the benefits of OpenCAPI, providing the lowest possible latency and highest bandwidth to NVMe storage together with the added benefits of OpenCAPI Flash functionality and near-storage FPGA acceleration. HPDA applications such as graph analytics, in-memory databases and bioinformatics are expected to benefit greatly from this platform.
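
For a sense of scale, the headline figure is consistent with aggregating both directions of several OpenCAPI links at the 25GBytes/s per direction quoted above. The link count in the sketch below is an assumption for illustration, not a published Sawmill FSA specification.

```c
/* Back-of-the-envelope check: assumes the 200GBytes/s headline aggregates
 * both directions of several 25GBytes/s-per-direction OpenCAPI links.
 * The link count is an illustrative assumption, not a Sawmill FSA spec.  */
#include <stdio.h>

int main(void)
{
    const double tx_gbytes_per_s = 25.0;  /* per-link transmit bandwidth */
    const double rx_gbytes_per_s = 25.0;  /* per-link receive bandwidth  */
    const int    links           = 4;     /* assumed number of links     */

    double aggregate = links * (tx_gbytes_per_s + rx_gbytes_per_s);
    printf("aggregate bandwidth: %.0f GBytes/s\n", aggregate);  /* prints 200 */
    return 0;
}
```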
