About FPGASolutions

So far FPGASolutions has created 26 blog entries.

Nallatech Launches 250 Series of NVMe Storage Acceleration Solutions

250 Series FPGA Products

CAMARILLO, California – March 19, 2018 – Nallatech, a Molex company and a leading supplier of high-performance FPGA solutions, announces the availability of the 250 family of Accelerated Storage Solutions featuring Xilinx UltraScale+ FPGA and MPSoC technology.

“FPGAs are being deployed across a range of on-premise storage platforms and cloud infrastructure to achieve a step-change in application performance and energy-efficiency,” said Craig Petrie, vice president business development of FPGA solutions at Nallatech. “Our collaboration with Xilinx has delivered a family of innovative storage products capable of accelerating common functions such as erasure coding, deduplication, encryption and compression.  These products adhere to PCIe and U.2 form factors allowing them to be easily integrated into data center infrastructure.”

“We’re pleased that Xilinx UltraScale+ FPGAs and MPSoCs are at the core of Nallatech’s new family of accelerated storage products,” said Manish Muthal, vice president of data center business at Xilinx. “Packaging disruptive technology in this way allows customers to easily and rapidly deploy Xilinx solutions, and to take advantage of the dramatic benefits of Xilinx technology in a cost-effective manner.”

The Nallatech 250 series comprises three core products:

250S+ — A fully-programmable NIC-sized near-storage accelerator featuring a Xilinx Kintex UltraScale+ FPGA. This PCIe Gen 4-capable accelerator card can be added to PCIe or CAPI-enabled server platforms, introducing an energy-efficient acceleration capability for applications including database acceleration, in-line compression/encryption, checkpoint restarting and burst buffer caching. The 250S+ is available in a choice of two configurations. The first provides up to four M.2 NVMe SSDs coupled on-card to the Xilinx FPGA. The second offers an innovative break-out option using OCuLink cabling to allow the 250S+ to be part of a massively scaled storage array.

250-U2 — Adhering to the U.2 form factor, this fully-programmable accelerator features a Xilinx Kintex UltraScale+ FPGA and local DDR4 SDRAM memory. This energy-efficient, flexible compute node is intended to be deployed within conventional U.2 NVMe storage arrays (approximately 1:8 ratio) allowing FPGA-accelerated instances of erasure coding, deduplication and compression to boost overall system performance. The 250-U2 is available as a fully-programmable device for customers preferring to develop and deploy their own application codes.

250-SoC — The 250-SoC enables the creation of remote, disaggregated storage or Ethernet Just-a-Bunch-of-Flash (EJBOF) which dramatically reduces the storage cost, footprint and power within data centers. A Xilinx Zynq UltraScale+ MPSoC device featuring both FPGA fabric and 64-bit ARM processors coordinates data transfer between two 100GbE network ports, onboard DDR4 memory and a PCIe Gen 4 host interface. Optional OCuLink ports allow the NIC-sized accelerator to be part of a massively scaled storage array. The 250-SoC is available either fully-programmable or as a pre-programmed solution featuring Xilinx’s NVMe-over-Fabric IP. This optimized design implements the NVM Express-over-Fabrics protocol offload and RDMA NIC protocol. This turnkey solution provides reliable transport of NVMe frames with low latency, high throughput, and massive scalability to remote hosts.

Please visit www.nallatech.com/storage for additional information.

About Nallatech
Nallatech, a Molex company, is a leading supplier of accelerated computing solutions. Nallatech has deployed several of the world’s largest FPGA hybrid computer clusters and is focused on delivering scalable solutions that deliver high performance per watt, per dollar. www.nallatech.com

About Molex
Molex brings together innovation and technology to deliver electronic solutions to customers worldwide. With a presence in more than 40 countries, Molex offers a full suite of solutions and services for many markets, including data communications, consumer electronics, industrial, automotive, commercial vehicle and medical. www.molex.com

Published 2018-03-19

High Frequency Trading – Get the competitive edge with Nallatech FPGAs

Nallatech’s Craig Petrie explains how financial trading can gain the competitive edge with the latest Intel Stratix 10 FPGA technology.

View All Nallatech FPGA Cards

FPGA Accelerated Compute Node
FACN

FPGA Accelerated Compute Node – with up to (4) 520s

520 – with Stratix 10 FPGA
520

Compute Accelerator Card
w/Stratix 10 FPGA

510T - Compute Accelerator with Arria 10 FPGA
510T

 Nallatech 510T
w/(2) Arria 10  FPGAs

385A - Network Accelerator with Arria 10 FPGA
385A

 Nallatech 385A – w/Arria10 / GX1150 FPGA

Published 2018-02-27

OPERA Project – Improving computational energy efficiency through low power consumption systems

OPERA Project – LOw Power Heterogeneous Architecture for NExt generation of SmaRt infrastructure and Platforms in Industrial and Societal Applications. The OPERA project is co-funded by the European Union’s HORIZON 2020 Framework Programme for Research and Innovation. It is developing a new generation of low-power-consumption systems that improve computational energy efficiency through heterogeneous architectures, distributing the workload according to the applications and server technology.


Published 2018-04-11

Any Rate, Any Format: Accelerating Kafka Producers with FPGAs

Nallatech Whitepaper – Accelerating Kafka Producers with FPGAs

Introduction – Accelerate Kafka Producers with FPGAs

Apache Kafka is at the heart of the emerging universal streaming data pipeline. Kafka has seen many high-profile adoptions as the streaming platform of choice, and is in use at LinkedIn, Netflix, Uber and ING. At LinkedIn, approximately two trillion messages per day pass through Kafka. According to TechRepublic.com, six of the top 10 travel companies, seven of the top 10 global banks, eight of the top 10 insurance companies and nine of the top 10 telecom companies have adopted Kafka as the central platform for managing streaming data. At the 2017 New York Kafka Summit, Confluent reported that over one third of the Fortune 500 have deployed Kafka.

Basic Kafka System

Kafka has three essential components – producers, brokers and consumers. Producers publish data to topics on brokers and consumers subscribe to topics. Figure 1 shows a basic Kafka system.

Figure 1 – Basic Kafka System

One of the many advantages of the Kafka architecture is the decoupling of producers and consumers. Producers and consumers can run at wildly different data rates and yet have no effect on each other. The other key advantage of Kafka is its small size: at just over 90,000 lines of code, Kafka clusters can be implemented on much more modest hardware than Spark Streaming, which requires a full Spark node.
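The producer/broker/consumer decoupling described above can be sketched with a toy in-memory model. This is an illustration of the concept only, not the Kafka API; the class, topic and consumer names are invented:

```python
from collections import defaultdict

class ToyBroker:
    """Minimal in-memory stand-in for a Kafka broker: topics are append-only logs."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic -> append-only message log
        self.offsets = defaultdict(int)   # (consumer, topic) -> next offset to read

    def publish(self, topic, message):
        """Producer side: append a message to a topic's log."""
        self.topics[topic].append(message)

    def poll(self, consumer_id, topic, max_records=10):
        """Consumer side: each consumer advances its own offset at its own pace."""
        key = (consumer_id, topic)
        start = self.offsets[key]
        records = self.topics[topic][start:start + max_records]
        self.offsets[key] += len(records)
        return records

broker = ToyBroker()
# A fast producer publishes 100 messages; a slow consumer drains 10 at a time.
for i in range(100):
    broker.publish("sensor-data", {"seq": i})
first_batch = broker.poll("consumer-a", "sensor-data")
print(len(first_batch), first_batch[0]["seq"])   # 10 0
```

Because each consumer tracks its own offset against the log, the producer's rate never blocks on the consumer's, which is the decoupling property the whitepaper relies on.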

Accelerating Kafka Producers

Data ingest into big data systems ranges from simple to complex. In figure 2, data source 1 may be packet captures of network traffic, data source 2 could be complex geospatial images from a constellation of satellites, while data source 3 is industrial IoT maintenance data from a windmill farm in West Texas.

Figure 2 : Streaming Data Ingest Acceleration with Intel FPGAs

The variability in data formats and data rates makes the problem difficult to scale. Being able to adapt in real time to bursts in traffic and new formats is often costly, requiring the provisioning of additional NICs and processors. Figure 3 shows a typical processor-based architecture used in most Kafka clusters.

Figure 3 : Typical Ingest Pathway

Data rate variability makes the system in figure 3 difficult to plan. In many cases, the maximum bandwidth must be estimated and then provisioned, leaving 50% or more of the processors and NICs idle, waiting for increases in data rates.

Moving to an Intel FPGA-based solution, the same maximum bandwidth must still be estimated, but the simplified system in figure 4 draws much less power while idle and requires a considerably smaller footprint overall. The FPGA-based system also eliminates the flow control and load-balance management needed in a processor-based system, because the Intel FPGA approach is deterministic regardless of data rate or data format.

Intel FPGAs are streaming, parallel accelerators that attach directly to copper and optical wires. Unlike traditional GPUs and CPUs, Intel FPGAs can move data in any format from wire to memory in nanoseconds without the need for a Network Interface Card (NIC).

This acceleration of ingest can result in 40X lower latency for data ingest to the Kafka producer. It also provides the option of simultaneous real-time processing of the inflowing data, such as machine learning, image recognition, pattern matching, filtering, compression and encryption. Ingested data can therefore be accelerated and enriched to speed time to data acquisition and data analysis.

Use Case One:
Inline Extract & Transformation 

The most basic use case for FPGA ingest into a Kafka producer is shown in figure 4. Even in this basic case, the FPGA provides low latency and determinism even at extremely variable rates. The ability to extract and transform the data with OpenCL allows this use case to handle 10s to 100s of data types.

Figure 4 Inline, Low Latency, Deterministic, Extraction & Transformation

Use Case Two:
Inline Encryption & Decryption

Encryption is extremely expensive in processor cycles, but well understood on Intel FPGAs, which provide a low-latency, deterministic result independent of the data rate. On processors, variable data rates can flood processor resources, causing a bottleneck and/or dropped packets.

Figure 5 Inline, Low Latency, Deterministic, Encryption or Decryption

Use Case Three:
Inline Compression & Decompression

FPGAs are extremely efficient at compression and decompression. In this use case the FPGA is used to compress or decompress data before it is passed to the Kafka system.

Figure 6 Inline, Low Latency, Deterministic Compression or Decompression

Use Case Four:
Information Theory with Encrypted/Decrypted &
Compressed/Decompressed Streams

Shannon’s law is being applied to more streaming use cases to determine whether a stream is encrypted. It calculates the entropy of a packet, looking for randomness versus structured bytes. Many, but not all, encrypted byte streams look similar to structured data. Figure 7 shows a possible flow: calculate the entropy, attempt to decrypt, then decompress before publishing to a Kafka topic. Even when the decryption and/or decompression cannot be done successfully, sorting encrypted from decrypted streams has many applications in industries handling personally identifiable information, such as finance and health care.
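The entropy test above reduces to a short calculation. The sketch below shows the per-byte Shannon entropy in plain Python, independent of any FPGA implementation; the byte strings are invented examples:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: near 8.0 for encrypted/random data, lower for structured bytes."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

structured = b"AAAA-BBBB-AAAA-BBBB" * 64     # repetitive, structured payload
random_like = bytes(range(256)) * 4          # every byte value equally likely
print(round(shannon_entropy(structured), 2))   # low (~1.47): structured
print(round(shannon_entropy(random_like), 2))  # 8.0: indistinguishable from encrypted
```

A streaming pipeline would compute this per packet and branch on a threshold, with the caveat noted above that compressed data also scores high.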

Use Case Five:
Enriched Topic Routing

Figure 8 Enriched Topic Routing of PCAPs for Cyber Analytics

Kafka’s flexible topic architecture allows ingested data to be placed into many topics. This flexibility means incoming data can be routed/switched using machine learning and pattern matching. Take figure 8 above, which shows raw network packets being captured (PCAPs). As the packets are captured, complex pattern matching using PCRE expressions can route them to the appropriate topics. This allows Kafka consumers to subscribe to enriched topics and bypass a cleaning stage. For many cyber analytics applications, this processing realizes a 1000X improvement in cyber operations per watt, based on research published by DOE Sandia & Lewis Rhodes Labs.
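A minimal sketch of such pattern-based topic routing, using Python's `re` module in place of a hardware PCRE engine (the patterns and topic names here are invented for illustration, not taken from the whitepaper):

```python
import re

# Ordered (pattern, topic) rules; first match wins.
ROUTES = [
    (re.compile(rb"^(GET|POST) "),        "http-traffic"),
    (re.compile(rb"\x16\x03[\x01-\x04]"), "tls-handshakes"),  # TLS record header bytes
    (re.compile(rb"SSH-2\.0"),            "ssh-sessions"),
]

def route_packet(payload: bytes) -> str:
    """Return the Kafka topic an incoming packet should be published to."""
    for pattern, topic in ROUTES:
        if pattern.search(payload):
            return topic
    return "unclassified"   # fall-through topic for a later cleaning stage

print(route_packet(b"GET /index.html HTTP/1.1"))   # http-traffic
print(route_packet(b"SSH-2.0-OpenSSH_8.9"))        # ssh-sessions
```

On the FPGA the same rule table is evaluated in parallel at line rate; the point of the sketch is only the routing decision itself.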

Nallatech 385A Cloudera/Intel Example

The Nallatech 385A provides two network ports supporting up to 40GbE each. This NIC-sized card can replace an existing NIC/CPU combination to significantly accelerate existing Kafka networks and reduce power.

This has been verified by Cloudera and Intel to accelerate Kafka to Spark streaming, whilst performing data enrichment on the FPGA (Figure 9).

Figure 9 Enriched data using 385A

In the above demonstration, engine noise signatures were chosen as the input data stream. They are ingested via a UDP offload engine and placed into the card’s OpenCL environment. OpenCL code running on the card performs real-time formatting of the incoming data stream, then performs an FFT and feature extraction, and classifies the signal as “normal” or “abnormal” by comparison with known engine signatures. This extra bit of data, along with the FFT of the engine signals, is DMA’d into Kafka for further processing.

This example also highlights the flexibility of OpenCL-generated libraries, which can be applied to incoming streaming data. This offers the end user immense latitude to include very application-specific forms of data enrichment or data filtering.

520N: 100GbE with Stratix 10

The Nallatech 520N’s four network ports support an array of serial I/O protocols operating at 10/25/40/100GbE. With a total throughput of up to 400Gb/sec, the 520N is capable of enriching high volumes of data prior to offloading to a Kafka framework.

Figure 10 Enriched data using 520N

The 520N is populated with the powerful Stratix 10 FPGA, offering unparalleled performance. Combining high throughput, a large amount of compute and programmability using OpenCL, it is possible to perform complex data enrichment on streaming data within a single device.

More Information and How to Evaluate

Nallatech, along with Intel PSG, are experts in Kafka acceleration. Nallatech has current and planned products to accelerate Apache Kafka using Arria 10 and Stratix 10 FPGAs. Please contact Nallatech to discuss your needs and develop an accelerated solution.


Published 2017-11-14

FPGA Acceleration of Convolutional Neural Networks

Nallatech Whitepaper – FPGA Accelerated CNN

Introduction – CNN – Convolutional Neural Network

Convolutional Neural Networks (CNNs) have been shown to be extremely effective at complex image recognition problems. This white paper discusses how these networks can be accelerated using FPGA accelerator products from Nallatech, programmed using the Intel OpenCL Software Development Kit. This paper then describes how image categorization performance can be significantly improved by reducing computation precision. Each reduction in precision allows the FPGA accelerator to process increasingly more images per second.

Caffe Integration

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center and by community contributors.

The Caffe framework uses an XML interface to describe the different processing layers required for a particular CNN. By implementing different combinations of layers a user is able to quickly create a new network topology for their given requirements.

The most commonly used of these layers are:
• Convolution: The convolution layer convolves the input image with a set of learnable filters, each producing one feature map in the output image.
• Pooling: Max-pooling partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum value.
• Rectified-Linear: Given an input value x, the ReLU layer computes the output as x if x > 0 and negative_slope * x if x <= 0.
• InnerProduct/Fully Connected: The image is treated as a single vector, with each point contributing to each point of the new output vector.

By porting these four layers to the FPGA, the vast majority of forward-processing networks can be implemented on the FPGA using the Caffe framework.
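Two of the layers listed above, max-pooling and ReLU, are simple enough to pin down as reference implementations. Plain Python is used here purely to fix the arithmetic; the actual FPGA versions are OpenCL kernels:

```python
def relu(x, negative_slope=0.0):
    """Rectified-Linear: x if x > 0, else negative_slope * x (leaky variant)."""
    return x if x > 0 else negative_slope * x

def max_pool(image, size=2):
    """Max-pooling over non-overlapping size x size rectangles of the input."""
    h, w = len(image), len(image[0])
    return [[max(image[r + dr][c + dc]
                 for dr in range(size) for dc in range(size))
             for c in range(0, w, size)]
            for r in range(0, h, size)]

# 4x4 feature map reduced to 2x2: the maximum of each 2x2 block survives.
print(max_pool([[1, 2, 5, 0],
                [3, 4, 1, 1],
                [0, 0, 2, 2],
                [9, 0, 3, 3]]))   # [[4, 5], [9, 3]]
```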

Figure 1 : Example illustration of a typical CNN – Convolutional Neural Network

To access the accelerated FPGA version of the code, the user need only change the description of the CNN layer in the Caffe XML network description file to target the FPGA equivalent.

AlexNet

Figure 2 : AlexNet CNN – Convolutional Neural Network

AlexNet is a well-known and widely used network, with freely available trained datasets and benchmarks. This paper discusses an FPGA implementation targeted at the AlexNet CNN; however, the approach used here applies equally well to other networks.

Figure 2 illustrates the different network layers required by the AlexNet CNN. There are five convolution layers and three fully connected layers. These layers occupy > 99% of the processing time for this network. Three different filter sizes are used by the convolution layers: 11×11, 5×5 and 3×3. The computational time of each layer differs depending upon the number of filters applied, the size of the input images and the number of input and output features processed, so applying the same resources to every convolution layer would be inefficient. By increasing the resources applied to the more compute-intensive layers, each layer can be balanced to complete in the same amount of time. It is therefore possible to create a pipelined process that has several images in flight at any one time, maximizing the efficiency of the logic used, i.e. most processing elements are busy most of the time.

Table 1 : ImageNet layer computation requirements

Table 1 shows the computation required for each layer of the ImageNet network. From this table it can be seen that the 5×5 convolution layer requires more compute than the other layers, so more FPGA processing logic must be allocated to this layer for it to be balanced with the others.

The inner product layers have an n-to-n mapping, requiring a unique coefficient for each multiply-add. Inner product layers usually require significantly less compute than convolution layers and therefore less parallelization of logic. In this scenario it makes sense to move the inner product layers onto the host CPU, leaving the FPGA to focus on convolutions.

FPGA logic areas

FPGA devices have two processing regions: DSP and ALU logic. The DSP logic is dedicated logic for multiply or multiply-add operators, because implementing large (18×18-bit) floating-point multiplications in ALU logic is costly. Given how common multiplications are in DSP operations, FPGA vendors provide dedicated logic for this purpose. Intel has gone a step further and allows the DSP logic to be reconfigured to perform floating-point operations. To increase performance for CNN processing it is necessary to increase the number of multiplications that can be implemented in the FPGA. One approach is to decrease the bit accuracy.

Bit Accuracy

Most CNN implementations use floating-point precision for the different layer calculations. For a CPU or GPGPU implementation this is not an issue, as the floating-point IP is a fixed part of the chip architecture. For FPGAs, the logic elements are not fixed. The Arria 10 and Stratix 10 devices from Intel have embedded floating-point DSP blocks that can also be used for fixed-point multiplications; each DSP block can in fact be used as two separate 18×19-bit multiplications. By performing convolution using 18-bit fixed-point logic, the number of available operators doubles compared to single-precision floating point.

Figure 3 : Arria 10 floating point DSP configuration

Figure 3 : Arria 10 floating point DSP configuration

If reduced-precision floating-point processing is required, it is possible to use half precision. This requires additional logic from the FPGA fabric, but doubles the number of floating-point calculations possible, assuming the lower bit precision is still adequate.

One of the key advantages of the pipeline approach described in this white paper is the ability to vary the accuracy at different stages of the pipeline. Resources are therefore only used where necessary, increasing the efficiency of the design.

Figure 4 : Arria 10 fixed point DSP configuration

Depending upon the application’s tolerance, the bit precision of a CNN can be reduced further still. If the bit width of the multiplications can be reduced to 10 bits or fewer (20-bit output), the multiplication can be performed efficiently using just the FPGA ALU logic. This doubles the number of multiplications possible compared to using the FPGA DSP logic alone. Some networks may be tolerant of even lower bit precision; the FPGA can handle all precisions down to a single bit if necessary.

For the CNN layers used by AlexNet, it was ascertained that 10-bit coefficient data was the smallest width that could be used in a simple fixed-point implementation whilst maintaining less than a 1% error versus a single-precision floating-point operation.
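To make the fixed-point reduction concrete, the sketch below rounds coefficients to a signed 10-bit format. The layout (1 sign bit, 9 fractional bits) and the coefficient values are invented for illustration; the paper's 1% figure refers to the whole network's output, not per-coefficient error, though for these example values the rounding error also stays under 1%:

```python
def quantize_10bit(value, frac_bits=9):
    """Round to signed fixed point: 1 sign bit + 9 fractional bits (range ~[-1.0, 1.0))."""
    scale = 1 << frac_bits                     # 512 steps per unit
    q = max(-(1 << 9), min((1 << 9) - 1, round(value * scale)))
    return q / scale                           # back to a real value for comparison

coeffs = [0.731, -0.112, 0.059, -0.406]        # invented example filter coefficients
for c in coeffs:
    q = quantize_10bit(c)
    print(f"{c:+.3f} -> {q:+.6f}  rel. error {abs(q - c) / abs(c):.3%}")
```

The attraction on the FPGA is that a 10-bit multiply fits in ALU logic, so the quantization cost above buys a doubling of available multipliers.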

CNN convolution layers

Using a sliding window technique, it is possible to create convolution kernels that are extremely light on memory bandwidth.

Figure 5 : Sliding window for 3×3 convolution

Figure 5 illustrates how data is cached in FPGA memory allowing each pixel to be reused multiple times. The amount of data reuse is proportional to the size of the convolution kernel.
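A behavioural sketch of the line-buffer scheme in figure 5, in plain Python modelling what the FPGA pipeline does: each pixel is read from external memory exactly once, and two row buffers plus a 3×3 window register supply the nine taps for every output (the function name and test data are illustrative):

```python
def conv3x3_stream(image, kernel):
    """3x3 convolution fed one pixel per 'cycle', as an FPGA line buffer would be."""
    h, w = len(image), len(image[0])
    rows = [[0] * w for _ in range(2)]      # two line buffers (previous two rows)
    window = [[0] * 3 for _ in range(3)]    # 3x3 shift-register window
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for r in range(h):
        for c in range(w):
            pixel = image[r][c]             # the single external-memory read
            column = [rows[0][c], rows[1][c], pixel]
            rows[0][c], rows[1][c] = rows[1][c], pixel   # push pixel into the buffers
            for i in range(3):              # shift the window left by one column
                window[i] = window[i][1:] + [column[i]]
            if r >= 2 and c >= 2:           # window is full: emit one output pixel
                out[r - 2][c - 2] = sum(window[i][j] * kernel[i][j]
                                        for i in range(3) for j in range(3))
    return out

image = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # passes the centre pixel through
print(conv3x3_stream(image, identity))         # [[1, 0], [0, 1]]
```

Each pixel entering the window is reused for up to nine multiply-adds, which is exactly the bandwidth saving the sliding-window technique provides.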

As each input layer influences all output layers in a CNN convolution layer, it is possible to process multiple input layers simultaneously. This would increase the external memory bandwidth required for loading layers. To mitigate the increase, all data except coefficients is stored in local M20K memory on the FPGA device. The amount of on-chip memory on the device limits the number of CNN layers that can be implemented.

Figure 6 : OpenCL Global Memory Bandwidth (AlexNet)

Most CNN features will fit within a single M20K memory, and with thousands of M20Ks embedded in the FPGA fabric, the total memory bandwidth available for processing convolution features in parallel is on the order of tens of terabytes per second.

Figure 7 : Arria 10 GX1150 / Stratix 10 GX2800 resources

Depending upon the amount of M20K resource available, it is not always possible to fit a complete network on a single FPGA. In this situation, multiple FPGAs can be connected in series using high-speed serial interconnects, allowing the network pipeline to be extended until sufficient resource is available.
A key advantage of this approach is that it does not rely on batching to maximize performance; latency is therefore very low, which is important for latency-critical applications.

Figure 8 : Extending a CNN Network Over Multiple FPGAs

Balancing the layers so that each takes the same amount of time requires adjusting the number of parallel input layers implemented and the number of pixels processed in parallel.

Figure 9: Resources for 5×5 convolution layer of AlexNet

Figure 9 lists the resources required for the 5×5 convolution layer of AlexNet with 48 parallel kernels, for both a single-precision and a 16-bit fixed-point version on an Intel Arria 10 FPGA. The numbers include the OpenCL board logic, but illustrate the benefit lower precision has on resource usage.

Fully Connected Layer
Processing of a fully connected layer requires a unique coefficient for each element and therefore quickly becomes memory bound with increasing parallelism. The amount of parallelism required to keep pace with the convolution layers would quickly saturate the FPGA’s off-chip memory; it is therefore proposed that the inputs to this stage be either batched or pruned.

As the number of elements in an inner product layer is small, the amount of storage required for batching is small compared to that required by the convolution layers. Batching then allows the same coefficient to be used for each batched layer, reducing the external memory bandwidth.

Pruning works by studying the input data and ignoring values below a threshold. As fully connected layers are placed at the later stages of a CNN network, many possible features have already been eliminated. Therefore, pruning can significantly reduce the amount of work required.
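Threshold pruning for the fully connected stage can be sketched as follows. The threshold, activation values and function names are invented; a deployed design would tune the threshold against classification accuracy:

```python
def prune_inputs(activations, threshold=0.05):
    """Keep only (index, value) pairs whose magnitude exceeds the threshold."""
    return [(i, v) for i, v in enumerate(activations) if abs(v) > threshold]

def fc_output(weight_row, pruned):
    """One fully connected output, computed only over the surviving inputs."""
    return sum(weight_row[i] * v for i, v in pruned)

acts = [0.0, 0.31, 0.0, 0.02, 0.0, 0.87, 0.01, 0.0]
pruned = prune_inputs(acts)
print(f"{len(pruned)} of {len(acts)} multiplies remain")   # 2 of 8 multiplies remain
```

Because each surviving input needs one unique coefficient fetch, the reduction in multiplies translates directly into the external-memory bandwidth saving described above.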

Resource
The key resource driver of the network is the amount of on-chip M20K memory available to store the outputs of each layer. This is constant and independent of the amount of parallelism achieved. Extending the network over multiple FPGAs increases the total amount of M20K memory available and therefore the depth of CNN that can be processed.

Conclusion
The unique flexibility of the FPGA fabric allows the logic precision to be adjusted to the minimum that a particular network design requires. By limiting the bit precision of the CNN calculation the number of images that can be processed per second can be significantly increased, improving performance and reducing power.

The non-batching approach of the FPGA implementation allows single-frame latency for object recognition, which is ideal for situations where low latency is crucial, e.g. object avoidance.

Using this approach for AlexNet (single precision for layer 1, then using 16 bit fixed for remaining layers), each image can be processed in ~1.2 milliseconds with a single Arria 10 FPGA, or 0.58 milliseconds with two FPGAs in series.


Published 2018-04-05

Nallatech exhibiting at SuperComputing 17

Nallatech Showcases Next Generation FPGA Accelerators at Supercomputing 2017

Leaders in FPGA Acceleration

Visit booth 1362 for Machine Learning and Kafka Data Ingest case studies using the latest generation of FPGA accelerators and tools

LISLE, IL – November 13, 2017 – Nallatech, a Molex company, will showcase FPGA solutions for high-performance computing (HPC), low latency network acceleration and data analytics at the Supercomputing 2017 (SC17) Conference and Exhibition, November 13-16 in Denver, Colorado.

FPGA Acceleration Card with Stratix 10 FPGA
“FPGAs are being deployed in volume across a range of on-premise platforms and cloud infrastructure to achieve a step-change in application performance and energy-efficiency above and beyond what can be achieved using conventional processor technologies,” said Craig Petrie, VP Business Development of FPGA Solutions at Nallatech. “We’re excited to be showcasing our new OpenCL-programmable ‘520’ product range featuring Intel Stratix-10 FPGAs. These server-qualified accelerator products have been engineered to cost-effectively solve demanding co-processing and real-time data ingest and enrichment applications.”

Nallatech will present two example applications featuring latest hardware and tools where FPGAs demonstrate significant value to customers:

Convolutional Neural Networks (CNN) – Object classification using a low profile Nallatech 385A™ PCIe accelerator card with a discrete Intel Arria 10 FPGA accelerator programmed using Intel’s OpenCL Software Development Kit. Built on the BVLC Caffe deep learning framework, an FPGA interface and IP accelerate processing intensive components of the algorithm. Nallatech IP is capable of processing an image through the AlexNet neural network in nine milliseconds. The Arria10-based 385A™ board has the capacity to process six CNN images in parallel allowing classification of 660 images per second.

KAFKA Ingest/Egress – Acceleration of KAFKA Producers using the advanced capabilities of Intel’s new Stratix-10 FPGA silicon and OpenCL Software Development Kit (SDK). This case study describes an analytic framework that provides up to 40 times increase in ingest performance enabling real-time data filtering and enrichment.

Additionally, Nallatech will display a range of leading-edge technologies at SC17 including:

520N™ Network Accelerator Card — A GPU/Phi-sized 16-lane PCIe Gen 3 card sporting four 100G network ports directly coupled to an Intel Stratix-10 FPGA. Four independent banks of DDR4 memory complete the balanced architecture capable of handling latency-critical 100G streaming applications.

520C™ Compute Acceleration Card – A GPU/Phi-sized 16-lane PCIe Gen 3 card, the OpenCL-programmable 520C™ features an Intel Stratix-10 FPGA designed to deliver ultimate performance per watt for compute-intensive HPC workloads.

About Nallatech:
Nallatech, a Molex company, is a leading supplier of accelerated computing solutions. Nallatech has deployed several of the world’s largest FPGA hybrid compute clusters, and is focused on delivering scalable solutions that deliver high performance per watt, per dollar. www.nallatech.com.

About Molex, LLC
Molex brings together innovation and technology to deliver electronic solutions to customers worldwide. With a presence in more than 40 countries, Molex offers a full suite of solutions and services for many markets, including data communications, consumer electronics, industrial, automotive, commercial vehicle and medical. For more information, please visit http://www.molex.com.

Published 2017-11-14

OpenCapi Blog: Post 1

Datacentric Architectures

Molex/Nallatech Leverages OpenCAPI for 200GBytes/s of Hyperconverged NVMe Storage Bandwidth
By Allan Cantle

Over the last decade the computing industry has managed to deliver application performance improvements and better energy-efficiency for its customers by embracing parallelism, co-processor type acceleration and techniques to bypass and unburden the CPU. These have worked on the premise of maintaining the CPU centric nature of the server while effectively adding data centric enhancements.

To maintain this rate of incremental improvement, the industry is now embracing many more system-level enhancements to the fundamental computing architecture, and the CPU is becoming an important member of a fundamentally data-centric architecture rather than sitting at its heart. With this architectural shift the network fabric is becoming the critical piece at the center, as evidenced by the plethora of new fabric standards including Omni-Path, NVLink, OpenCAPI, Gen-Z, CCIX and Infinity Fabric, to name a few. Each of these fabrics claims to solve part or all of the communication requirements of future data-centric architectures.

OpenCAPI is enjoying the early-mover advantage as an excellent open-standard conduit, both metaphorically and physically, for facilitating this data-centric industry shift. This becomes even more important when you realize that the industry cannot leave behind CPU-centric legacy software, which will need to continue running for many decades to come.

It is critical to understand that OpenCAPI is singularly focused on being the best coherent, low-latency, high-bandwidth (25 GB/s Tx and 25 GB/s Rx) interconnect for the hyperconvergence of data-centric architectural pieces within a node. Consequently, it needs a complementary fabric to support the ingress and egress of data to and from the node; this will be the topic of a later blog post. OpenCAPI-based hyperconverged solutions must also become more programmable, in a similar vein to frameworks developed earlier on CAPI such as CAPI SNAP (Storage, Networking & Acceleration Programming).
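As a back-of-the-envelope illustration of how the per-link figures above scale, the sketch below combines the stated 25 GB/s per direction into a full-duplex aggregate. The per-direction numbers come from this post; the four-link count is purely a hypothetical example, not a product specification.

```python
# Back-of-the-envelope OpenCAPI bandwidth arithmetic.
# The 25 GB/s Tx / 25 GB/s Rx figures are from the text above;
# the link count passed in is a hypothetical example.
TX_GBYTES_PER_S = 25
RX_GBYTES_PER_S = 25

def aggregate_bandwidth(num_links: int) -> int:
    """Full-duplex aggregate bandwidth (GB/s) across num_links OpenCAPI links."""
    return num_links * (TX_GBYTES_PER_S + RX_GBYTES_PER_S)

print(aggregate_bandwidth(1))  # 50  -> one link, both directions combined
print(aggregate_bandwidth(4))  # 200 -> four links reach the 200 GB/s scale
```

Under these assumptions, reaching the 200 GB/s scale discussed later in this post corresponds to four fully utilized full-duplex links.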

Nallatech is a pioneer of data-centric computing using FPGAs, where computational functions are built around flowing data streams. It has 24 years of experience in successfully helping customers migrate to and deploy data-centric heterogeneous architectures featuring FPGA technology. OpenCAPI was designed to leverage the strengths of FPGA architectures and minimize the impact of their weaknesses. Figure 1 shows a block diagram of Nallatech’s perspective on how the OpenCAPI bus is at the heart of enabling the true emergence of data-centric architectures.


Figure 1: OpenCAPI enabling data-centric architectures through a hyperconverged and disaggregatable architecture

Critical to this industry transformation is open collaboration among the industry’s experts with their differing skill sets. This openness, especially at the interface level, will help ensure that the best ideas win out and that everyone can innovate around these new standards to deliver the best solutions to the industry’s customer base, including the essential software infrastructure stacks that will make this technology easily accessible to application developers.

With Nallatech’s data-centric heritage, Molex and Nallatech bring decades of experience in tackling complex data-centric problems, from HPDA applications such as video analytics and AI to classical memory-bound HPC problems like seismic migration algorithms. These new system-level solutions, based around OpenCAPI, will deliver over 5x performance gains at power levels that realistically begin to approach the DOE’s 20 MW exascale target.

Additionally, Nallatech will leverage OpenCAPI to ensure that valuable memory resources can be shared effectively with the CPU without breaking essential support for the legacy CPU-centric code base.

Come by the OpenCAPI, Molex and Nallatech booths (#1587-#1589, #1263 and #1362), where we will showcase how our Sawmill FSA (Flash Storage Accelerator) development platform brings up to 200 GB/s of hyperconverged accelerated storage to the Google/Rackspace Zaius/Barreleye-G2 POWER9 OCP platform. The Sawmill FSA is designed to natively support the benefits of OpenCAPI by providing the lowest possible latency and highest bandwidth to NVMe storage, with the added benefits of OpenCAPI Flash functionality and near-storage FPGA acceleration. HPDA applications such as graph analytics, in-memory databases and bioinformatics are expected to benefit greatly from this platform.

OpenCapi Blog: Post 1 2017-11-13T06:32:06+00:00


Nallatech exhibiting at International SuperComputing 17

Nallatech, a Molex company, will showcase next generation OpenCL-programmable FPGA accelerator products for datacentre and cloud service applications at ISC17 being held in Frankfurt, Germany, June 19-23, 2017. The annual exhibition represents one of the largest gatherings of high performance computing (HPC) industry leaders and experts displaying the latest innovations.
 
ISC17 Announcement
Nallatech – ISC17 Booth C-1250 will feature hardware, software products plus design services for customers building scale out datacentres and cloud-based services leveraging FPGA technology.
“International Supercomputing is the perfect event for Nallatech to introduce the ‘520’ – our next-generation energy-efficient accelerator product featuring Intel Stratix 10 FPGAs,” said Craig Petrie, VP Business Development FPGA Solutions, Nallatech. “The OpenCL-programmable 520 delivers twice the core performance of previous-generation FPGAs with up to 70% lower power consumption. This unprecedented price-performance, coupled with Nallatech’s application expertise and extensive manufacturing capabilities, allows our customers to benchmark and deploy large-scale FPGA solutions with minimal cost and risk.”
Advancements in architecture and high-level programming tools are opening doors for new FPGA use cases. For more information on how Nallatech streamlines FPGA integration and supports customers in the transition from prototyping to production, please visit www.nallatech.com
About Nallatech
Nallatech is a leading supplier of FPGA accelerated computing solutions. Since 1993, Nallatech has provided hardware, software and design services to enable customers’ success in applications including high performance computing, network processing, and real-time embedded computing.

About ISC17
ISC17 High Performance exhibition features the largest collection of HPC vendors, universities, and research organizations annually assembled in Europe. Together, they represent the innovation, diversity and creativity that are the hallmarks of the global HPC community. Having them all available on the same exhibition floor presents a unique opportunity for users to survey the HPC landscape and for vendors to display their latest and greatest wares.
Nallatech exhibiting at International SuperComputing 17 2017-06-19T12:18:12+00:00

Nallatech Officially Joins Dell Technology Partner Program

Nallatech Joins the Dell Technology Partner Program

Nallatech and Dell partner to offer High-Performance Computing in the Datacenter

Nallatech FPGA Accelerator - Molex

CAMARILLO, CA – March 11, 2017 – Nallatech, a Molex company, announced that it has officially joined the Dell Technology Partner Program. This new partnership will help customers accelerate datacenter workloads more efficiently than ever before.

Nallatech will continue to integrate FPGA accelerators into Dell servers, and this new partnership will help persuade those on the fence who have not yet bought into this model of computing. FPGA experts and newcomers alike will be empowered to use this mainstream method of FPGA algorithm development and deployment in the datacenter. With this ready-to-use solution, FPGA programmers can focus entirely on developing their own massively parallel, compute-intensive applications while reducing power consumption and total cost of ownership.

Read more about Nallatech’s new partnership with Dell – Click here

The Dell Technology Partner Program
Nallatech is a Dell Technology Partner. The 385A and 510T FPGA OpenCL Accelerators are certified by Dell to run on Dell platforms that are specified in the above technical overview.

About Nallatech:
Nallatech, a Molex company, is a leading supplier of accelerated computing solutions. Nallatech has deployed several of the world’s largest FPGA hybrid compute clusters, and is focused on delivering scalable solutions that deliver high performance per watt, per dollar. www.nallatech.com.

About Molex, LLC
Molex brings together innovation and technology to deliver electronic solutions to customers worldwide. With a presence in more than 40 countries, Molex offers a full suite of solutions and services for many markets, including data communications, consumer electronics, industrial, automotive, commercial vehicle and medical. For more information, please visit http://www.molex.com.

Nallatech Officially Joins Dell Technology Partner Program 2017-03-22T13:49:36+00:00