NI LabVIEW Part 1: Building Distributed and Synchronized FPGA Applications with Multiple C Series Chassis

This blog series examines applications that require multiple C Series FPGA chassis. You may need multiple chassis because a high channel count demands more modules than can fit in a single chassis, or because your FPGA logic is too large for a single FPGA chip. If so, the generalized architecture presented here may provide ideas for how to achieve tight synchronization between FPGA code running on these different chassis.

As a quick overview, we'll be working towards the architecture shown below. Part 1 of this series reviews how to achieve timing synchronization between the FPGA chassis using National Instruments 9469 time sync modules. Part 2 looks at how to integrate synchronized data acquisition with an RT host application. Part 3 reviews how to achieve data sharing between FPGA chassis using NI 9853 CAN modules.

Diagram of distributed and synchronized FPGA applications
 
Our goal is simple:

Diagram of distributed and synchronized FPGA applications using C Series chassis

Unfortunately, National Instruments hasn’t released the 60-slot mega chassis yet, so we’ll strive to achieve similar function and capabilities using multiple chassis with timing synchronization and data sharing. 

Part 1 of 3: Distributed FPGA Chassis Time Synchronization Using NI 9469 C Series Module

The first step to building up this architecture is to synchronize the clocks between the different FPGA chassis. We’ll achieve this using the NI 9469 time sync module that was designed specifically for synchronization requirements like this. 

The NI 9469 synchronization module is essentially a high-speed digital I/O module that encapsulates the underlying digital signals to provide a simple API for synchronization functionality.

The NI 9469 synchronization module provides API for synchronization functionality

In general, you configure the 9469 module to operate as a node in one of several “Single Master, Multiple Slave” topologies. Every chassis that you use will have one of these 9469 modules. A few of these different topologies are shown here:

The NI 9469 module can be configured in a tree topology

The NI 9469 module can be configured in a star topology

The NI 9469 module can be configured in a daisy-chain topology

In our own application, it made sense to use the daisy-chain topology, but that choice will depend on your own requirements.

The master chassis in this configuration can send a trigger (digital pulse) to one or more slaves, which upon receiving the triggers can perform some action(s) synchronously.

The API for the 9469 module includes two invoke nodes that are used in your FPGA code.

For Sending Triggers from a Master:
NI 9469 module invoke node for sending triggers from a master
And for Receiving Triggers at a Slave or Master (Master can receive its own trigger – useful for synchronizing functions on both a master and slave):
NI 9469 module invoke node for receiving triggers
To generate a clock that is synchronized across all of your FPGA chassis, you can use a loop on the master chassis that is timed to send a trigger at a particular frequency. In the example below, we're sending a trigger every 400 ticks. For a 40 MHz FPGA clock, that equates to a 100 kHz trigger send frequency. The frequency at which we send triggers to all FPGA chassis becomes our base clock frequency. We've also added a trigger-enable register so that the host application can control when triggers are sent.

Generate a clock that is synchronized across all of your FPGA chassis 
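Since LabVIEW FPGA code is graphical, here is a rough Python sketch of the same arithmetic and the gated send loop. All names are hypothetical stand-ins; the real logic is a LabVIEW timed loop that calls the 9469's send-trigger invoke node.

```python
# Hypothetical sketch of the master's trigger-generation loop.
FPGA_CLOCK_HZ = 40_000_000   # 40 MHz FPGA base clock
TICKS_PER_TRIGGER = 400      # loop period in ticks

def trigger_frequency_hz(fpga_clock_hz: int, ticks_per_trigger: int) -> float:
    """Frequency at which the master sends sync triggers."""
    return fpga_clock_hz / ticks_per_trigger

def send_triggers(iterations: int, trigger_enable: bool) -> int:
    """Simulate the master loop; the trigger-enable register lets the
    host gate trigger generation. Returns the number of triggers sent."""
    sent = 0
    for _ in range(iterations):
        if trigger_enable:
            sent += 1  # stands in for the 9469 "Send Trigger" invoke node
    return sent

# 40 MHz / 400 ticks = 100 kHz base clock frequency
assert trigger_frequency_hz(FPGA_CLOCK_HZ, TICKS_PER_TRIGGER) == 100_000.0
```

With the enable register cleared, the loop still runs but no triggers are sent, which mirrors how the host controls trigger output.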
For receiving triggers, we can place the "Wait On Trigger" node in a while loop as well. This loop is dedicated solely to receiving triggers, each of which sets an occurrence. That base clock occurrence can then be divided into other rates for use throughout the FPGA application.

Divide the base clock occurrence into other rates for the FPGA application

An example of this is shown below. Here, the base clock is divided into multiple clock rates that are used throughout the FPGA application. The event divider VI counts the number of times it is called and outputs True once per interval, where the interval is wired into its input. A divider of 1 yields the base frequency, 100 kHz; a divider of 20 yields 5 kHz, and so on. If multiple loops need to run at a particular frequency, we can set multiple occurrences, as with the 5 kHz clock in this example.

The base clock is divided into multiple different clock rates that are used throughout the FPGA application 
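The event divider's behavior can be sketched as a small counter, shown here as a hypothetical Python class rather than the actual LabVIEW VI: it counts calls and fires once every `divider` calls.

```python
# Hypothetical stand-in for the event divider VI: counts the number of
# times it is called and outputs True once every `divider` calls.
class EventDivider:
    def __init__(self, divider: int):
        self.divider = divider
        self.count = 0

    def tick(self) -> bool:
        """Call once per base-clock occurrence."""
        self.count += 1
        if self.count >= self.divider:
            self.count = 0
            return True
        return False

# With a 100 kHz base clock, a divider of 20 yields a 5 kHz derived clock:
div20 = EventDivider(20)
fires = sum(div20.tick() for _ in range(100))  # 5 fires per 100 base ticks
```

A divider of 1 fires on every call, reproducing the base rate unchanged.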

In addition to driving different loop rates, we will also be incrementing a base clock register. This base clock register can then be used for time stamping channel values, which will be described in Part 2: Synchronized Data Acquisition Across Distributed FPGA Chassis.

Since each chassis increments its own copy of the base clock register off the same triggers from the master chassis, the base clock register values will always match across all chassis.
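The key property is that every chassis applies the identical increment to its own register on the identical trigger stream. A minimal sketch (hypothetical names; the real registers live on each FPGA target):

```python
# Hypothetical model: each chassis holds its own base clock register and
# increments it once per trigger received from the master.
class Chassis:
    def __init__(self):
        self.base_clock = 0  # base clock register

    def on_trigger(self):
        self.base_clock += 1

chassis = [Chassis() for _ in range(3)]  # e.g. one master, two slaves
for _ in range(1000):                    # 1000 triggers from the master
    for c in chassis:
        c.on_trigger()

# All copies of the register agree after any number of triggers.
assert all(c.base_clock == 1000 for c in chassis)
```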

On initialization of the FPGA chassis, it is helpful to sync up the base clock registers via a procedure on the RT host. In general, a host routine with the following steps will sync up the base clock registers on startup:

  1. Disable all triggers
  2. Reset base clock register to 0 on all chassis
  3. Enable “Wait on Trigger” loops on all slave chassis
  4. Enable “Send Triggers” loop on master chassis

That completes our review of the core components of timing synchronization between chassis. You can augment these components with additional fault handling:

  1. If a chassis is missing trigger pulses, make sure this is detected and that loops continue to execute in the absence of triggers from the master. In this scenario, you would need to fall back to the local clock.
  2. Use an additional trigger line to verify the base clock trigger line. For example, if the master sends one trigger on the second line for every 1,000 triggers on the base clock line, each slave can count base clock triggers and confirm it always receives exactly 1,000 between consecutive triggers on the second line.
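The second fault-handling idea can be sketched as a small counter that is reset by the verification line. This is a hypothetical Python illustration of the counting logic, not FPGA code:

```python
# Hypothetical verifier: counts base-clock triggers between triggers on a
# second verification line and flags a fault on any mismatch.
class TriggerVerifier:
    def __init__(self, expected: int = 1000):
        self.expected = expected  # base triggers expected per verify trigger
        self.count = 0
        self.fault = False

    def on_base_trigger(self):
        self.count += 1

    def on_verify_trigger(self):
        # A count other than `expected` means triggers were missed (or extra
        # pulses were seen) since the last verification trigger.
        if self.count != self.expected:
            self.fault = True
        self.count = 0

v = TriggerVerifier(1000)
for _ in range(1000):
    v.on_base_trigger()
v.on_verify_trigger()   # exactly 1000 base triggers: no fault
```

If even one base trigger is dropped in an interval, the next verification trigger exposes the mismatch, which is far cheaper than comparing full base clock registers across chassis.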

That wraps it up for Part 1. Stay tuned for Part 2: Synchronized Data Acquisition Across Distributed FPGA Chassis!

Learn more about DMC's FPGA programming expertise.
