Creating a Modular LabVIEW Application (Part 1 of 3): Creating an Expandable Data Format

One of LabVIEW’s strongest points is that it is a very “approachable” programming language. Since it is a graphical language, it is relatively easy for a “non-programmer” to get a quick, simple, stand-alone program up and running. Often, especially if you just need a quick data sample from something, that is all an engineer or technician requires.

However, there are also instances when a process is complex enough that not only do you need to create multiple VIs to complete a task, but you also need a complex, organized architecture to tie them all together. This is the first part of a three-part series of blog posts listing some helpful “lessons learned” for creating a robust, universal, modular architecture in LabVIEW.

For this series, I will focus on the example of a simple system: a serial RS-232 environmental condition monitor that gives you weather data (temperature, humidity, pressure, etc.) for your test, plus an NI PXI chassis with digital I/O and analog inputs to control your test and monitor the sensors. One common approach is to separate the data acquisition devices so that one VI controls a specific hardware device. Here, it might make sense to create one VI to control and sample the serial data, one VI to control and sample the chassis data, and one VI to serve as a master that coordinates the others and merges the data from both.

In this simple example, it might be easy enough to hardcode everything: what data comes from where and how it all fits together. However, once you have played this game enough times, you realize that to do it right and allow for future flexibility (i.e., what to do if you add a third instrument to provide more data somewhere down the road), you need to be very careful about how you design your architecture.

At DMC, we have spent years doing this very thing. The first important piece to figure out is what your data will look like and how to align all of it. Generally, you don’t want to hold it all in memory for an entire test, so finding a way to create data files (such as NI TDMS files) or log to a SQL database with a universal timestamp is crucial. This allows your master VI to go back to these data logs and run a merging routine on the data to produce your amalgamated test data set.
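The post does not show the merging routine itself, and in practice it would be a LabVIEW VI rather than text code. As a rough illustration of the idea, here is a minimal Python sketch that merges two time-stamped logs (with made-up device names and fields) into a single timeline keyed on the shared timestamp:

```python
# Illustrative sketch only: the device names, fields, and sample values
# are invented for this example, not taken from the original system.

serial_log = [  # rows as logged by a hypothetical serial-monitor VI
    (0.0, {"temp_C": 21.4}),
    (2.0, {"temp_C": 21.6}),
]
chassis_log = [  # rows as logged by a hypothetical PXI-chassis VI
    (0.5, {"analog_V": 1.02}),
    (1.5, {"analog_V": 1.07}),
]

def merge_logs(*logs):
    """Merge several (timestamp, fields) logs into one timeline,
    ordered by the universal timestamp they all share."""
    rows = [row for log in logs for row in log]
    return sorted(rows, key=lambda row: row[0])

merged = merge_logs(serial_log, chassis_log)
print([t for t, _ in merged])  # timestamps interleave: [0.0, 0.5, 1.5, 2.0]
```

The key design point is the same one the paragraph makes: because every device logs against one universal timestamp, the master routine never needs device-specific knowledge to interleave the records.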

Both the TDMS and SQL formats will force you to streamline your data into universal formats, instead of creating a data format so specific to your test that it can’t be added to or modified without major headaches. Either one also puts everything into 1-D time-stamped data channels that can be time-aligned and merged. One very useful tool for this, especially if you have several PCs, each with its own chassis, is a PXI-6653 hardware clock sync card. This card lets you precisely align the hardware sampling across NI instruments, so you know the data on PC 1 is in sync with the data on PC 2.
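Channels from different devices are rarely sampled at the same rate, so time-aligning 1-D channels usually means resampling them onto a common timebase before merging. As a sketch of one common approach (linear interpolation, written in Python for readability; the sample rates and values are invented):

```python
# Illustrative sketch: resample a 1-D time-stamped channel onto a common
# master timebase with linear interpolation, so channels logged at
# different rates can be merged sample-for-sample.

def resample(times, values, new_times):
    """Linearly interpolate (times, values) at each point in new_times.
    Assumes times is sorted and new_times lies within its range."""
    out = []
    i = 0
    for t in new_times:
        # advance to the segment [times[i], times[i+1]] containing t
        while times[i + 1] < t:
            i += 1
        t0, t1 = times[i], times[i + 1]
        v0, v1 = values[i], values[i + 1]
        frac = (t - t0) / (t1 - t0)
        out.append(v0 + frac * (v1 - v0))
    return out

# A humidity channel logged at 1 Hz, resampled onto a 2 Hz master timebase:
times = [0.0, 1.0, 2.0]
humidity = [40.0, 42.0, 44.0]
master = [0.0, 0.5, 1.0, 1.5, 2.0]
print(resample(times, humidity, master))  # [40.0, 41.0, 42.0, 43.0, 44.0]
```

Hardware clock synchronization, as with the PXI-6653 card mentioned above, is what makes this kind of software alignment trustworthy: interpolation can only be as accurate as the timestamps themselves.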

Thus, in summary, planning a methodology for data merging is a solid first step toward ensuring that your LabVIEW system will be easily manageable and will provide a solid base for future expandability. With a universal format for data storage, you will be able to add or remove pieces of your system as needs arise, and you won’t find yourself shoe-horned into a specific format that isn’t scalable and only works with one test setup.

See Also:

Learn more about DMC's LabVIEW programming services.
