This blog series will take a look at applications requiring multiple C Series FPGA chassis. You might need multiple chassis because a high channel count requires more modules than fit in a single chassis, or because you have more FPGA logic than can fit on a single FPGA chip. If your application also requires tight synchronization between FPGA code running on these different chassis, then the generalized architecture presented here could provide some ideas for how to accomplish those requirements.
As a quick overview of what we’ll discuss, we’ll be working toward the architecture shown below. Part 1 of this series will review how to achieve timing synchronization between the FPGA chassis using NI 9469 time sync modules. Part 2 will look at how to integrate synchronized data acquisition with an RT host application. Part 3 will review how to achieve data sharing directly between FPGA chassis using NI 9853 CAN modules.
Essentially, our goal is simple:
Unfortunately, National Instruments hasn’t released the 60-slot mega chassis yet, so we’ll strive to achieve similar function and capabilities using multiple chassis with timing synchronization and data sharing.
Part 1 of 3: Distributed FPGA Chassis Time Synchronization Using NI 9469 C Series Module
The first step to building up this architecture will be to synchronize the clocks between the different FPGA chassis. We’ll achieve this using the NI 9469 time sync module that was designed specifically for synchronization requirements like this.
The NI 9469 synchronization module is essentially a high-speed digital I/O module that encapsulates the underlying digital signals to provide a simple API for synchronization functionality.
In general, you configure the 9469 module to operate as a node in one of several “Single Master, Multiple Slave” topologies. Every chassis that you use will have one of these 9469 modules. A few of these different topologies are shown here:
In our application, it made sense to use the daisy-chain topology, but the right choice will depend on your own requirements.
The master chassis in this configuration can send a trigger (digital pulse) to one or more slaves, which, upon receiving the trigger, can perform some action(s) synchronously.
The API for the 9469 module includes two invoke nodes that are used in your FPGA code.
For Sending Triggers from a Master:
And for Receiving Triggers at a Slave or Master (the master can receive its own trigger, which is useful for synchronizing functions on both the master and slaves):
To generate a clock that is synchronized across all of your FPGA chassis, you can use a loop on the master chassis that is timed to send a trigger at a particular frequency. In the example below, we’re sending a trigger every 400 ticks. For a 40 MHz FPGA clock, that equates to a 100 kHz trigger send frequency. The frequency at which we send triggers to all FPGA chassis becomes our base clock frequency. We’ve also added a trigger enable register so that we can control when to send triggers from the host application.
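The relationship between the send loop’s tick interval and the resulting base clock frequency is simple division. As a quick sanity check (the function name here is just for illustration; the actual logic lives in the LabVIEW timed loop):

```python
FPGA_CLOCK_HZ = 40_000_000  # 40 MHz top-level FPGA clock, as in the example

def trigger_frequency_hz(ticks_between_triggers: int,
                         clock_hz: int = FPGA_CLOCK_HZ) -> float:
    """Frequency at which triggers are sent when the send loop waits a
    fixed number of FPGA clock ticks between triggers."""
    return clock_hz / ticks_between_triggers

print(trigger_frequency_hz(400))  # 400 ticks -> 100000.0 Hz (100 kHz)
```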
For receiving triggers, we can place the “Wait On Trigger” node in a while loop as well. This loop is dedicated solely to receiving triggers; each received trigger sets an occurrence. That base clock occurrence can then be divided down into other rates for use throughout the FPGA application.
An example of this is shown below. Here, the base clock is divided into multiple different clock rates that are used throughout the FPGA application. The event divider VI counts how many times it has been called and outputs true once per interval, where the interval is wired into its input. A divider of 1 yields the base frequency, 100 kHz; a divider of 20 yields 5 kHz, and so on. If multiple loops need to run at a particular frequency, we can set multiple occurrences, as with the 5 kHz clock in this example.
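The counting behavior of the event divider VI can be sketched as follows (the class and method names are made up for illustration; the real implementation is LabVIEW code driven by the base clock occurrence):

```python
class EventDivider:
    """Fires (returns True) once every `interval` calls, mirroring the
    event divider VI described above."""
    def __init__(self, interval: int):
        self.interval = interval
        self.count = 0

    def tick(self) -> bool:
        """Call once per base clock event; True means the divided clock fires."""
        self.count += 1
        if self.count >= self.interval:
            self.count = 0
            return True
        return False

# With a 100 kHz base clock: a divider of 1 fires on every trigger (100 kHz),
# while a divider of 20 fires on every 20th trigger (5 kHz).
div20 = EventDivider(20)
fires = sum(div20.tick() for _ in range(100_000))  # one second of base clock
print(fires)  # 5000 fires in one second -> 5 kHz
```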
In addition to driving different loop rates, we will also be incrementing a base clock register. This base clock register can then be used for time stamping channel values, which will be described in Part 2: Synchronized Data Acquisition Across Distributed FPGA Chassis.
Although each chassis has its own copy of the base clock register, every copy increments off the same triggers from the master chassis, so the base clock register values will always match across all chassis.
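The invariant described above can be sketched as a set of per-chassis counters driven by a shared trigger stream (a conceptual model only; on hardware each counter is a register on a separate FPGA):

```python
class ChassisClock:
    """Per-chassis base clock register, incremented once per received trigger."""
    def __init__(self):
        self.base_clock = 0

    def on_trigger(self):
        self.base_clock += 1

    def timestamp(self) -> int:
        """Current base clock value, usable for time stamping channel data."""
        return self.base_clock

# Every chassis receives the same triggers from the master, so the
# base clock registers stay in lockstep.
chassis = [ChassisClock() for _ in range(3)]
for _ in range(1000):          # 1000 master triggers
    for c in chassis:
        c.on_trigger()
print({c.timestamp() for c in chassis})  # {1000}: all registers agree
```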
On initialization of the FPGA chassis, it is helpful to sync up the base clock registers via a procedure on RT. In general, a routine in the host application that performs the following steps will sync up the base clock registers on startup:
- Disable all triggers
- Reset base clock register to 0 on all chassis
- Enable “Wait on Trigger” loops on all slave chassis
- Enable “Send Triggers” loop on master chassis
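The startup routine above might look like the following sketch. The `FpgaChassis` class and its attributes are hypothetical stand-ins for front-panel control reads/writes that the RT host would perform through FPGA references:

```python
class FpgaChassis:
    """Hypothetical stand-in for an FPGA reference with front-panel controls."""
    def __init__(self, is_master: bool):
        self.is_master = is_master
        self.base_clock = 0
        self.wait_loop_enabled = False
        self.send_triggers_enabled = False

def sync_base_clocks(chassis_list):
    """RT-host startup routine mirroring the four steps listed above."""
    for c in chassis_list:           # 1. Disable all triggers
        c.send_triggers_enabled = False
    for c in chassis_list:           # 2. Reset base clock registers to 0
        c.base_clock = 0
    for c in chassis_list:           # 3. Enable "Wait on Trigger" loops on slaves
        if not c.is_master:
            c.wait_loop_enabled = True
    for c in chassis_list:           # 4. Enable "Send Triggers" loop on the master
        if c.is_master:
            c.send_triggers_enabled = True

master = FpgaChassis(is_master=True)
slaves = [FpgaChassis(is_master=False) for _ in range(2)]
sync_base_clocks([master] + slaves)
```

The ordering matters: triggers must stop before the registers are zeroed, and the slave receive loops must be armed before the master starts sending, so that no chassis misses the first trigger.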
That completes our review of the core components of timing synchronization between chassis. You can augment these components with additional fault handling:
- If a chassis misses trigger pulses, make sure this is detected and that loops continue to execute in the absence of triggers from the master. In this scenario, the chassis would need to fall back to its local clock.
- Use an additional trigger line to verify the base clock trigger line. For example, if you send a trigger on the second trigger line once for every 1,000 triggers on your base clock trigger line, then you can count the base clock triggers and verify that exactly 1,000 arrive between consecutive triggers on the second line.
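The second-trigger-line check can be sketched as a counter that is validated each time a verification pulse arrives (the names are illustrative; the 1,000:1 ratio matches the example above):

```python
EXPECTED_PER_VERIFY = 1_000  # base clock triggers expected per verification trigger

class TriggerVerifier:
    """Counts base clock triggers and checks the count whenever a
    verification trigger arrives on the second line."""
    def __init__(self, expected: int = EXPECTED_PER_VERIFY):
        self.expected = expected
        self.count = 0
        self.faults = 0

    def on_base_trigger(self):
        self.count += 1

    def on_verify_trigger(self):
        if self.count != self.expected:
            self.faults += 1  # missed or extra base clock triggers this period
        self.count = 0        # start counting the next period

v = TriggerVerifier()
for _ in range(999):          # one base clock trigger was missed this period
    v.on_base_trigger()
v.on_verify_trigger()
print(v.faults)  # 1: the shortfall was detected
```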
That wraps it up for Part 1. Stay tuned for Part 2: Synchronized Data Acquisition Across Distributed FPGA Chassis!
Learn more about DMC's FPGA programming expertise.