DMC, Inc. (https://www.dmcinfo.com) - RSS feed for the DMC, Inc. Blog

DMC Quote Board - March 2024
https://www.dmcinfo.com/latest-thinking/blog/id/10572/dmc-quote-board--march-2024

Visitors to DMC may notice our ever-changing "Quote Board," documenting the best engineering jokes and employee one-liners of the moment.

Learn more about DMC's company culture and check out our open positions!

Jane Rogers - Wed, 13 Mar 2024 12:35:00 GMT
Early Adoption of Next.js App Router in Production: Our Thoughts
https://www.dmcinfo.com/latest-thinking/blog/id/10568/early-adoption-of-nextjs-app-router-in-production-our-thoughts

In May of 2023, DMC won a new project for a long-time partner of ours who specializes in separation technologies. The project consisted of a complete rewrite of their order management web portal. The old solution our partner was using was written a long time ago, and none of the current engineers at their company confidently knew how the code functioned. This made it difficult to modernize or upgrade the site. Long story short, DMC's task was to upgrade this slow, early-2000s web application:

Client's old website

To a faster, more modern one with a great focus on improving user experience. Some pages on the old platform took up to five minutes to load on a good day: 

Updated Client's website with DMC's new Interface

As with every major rewrite, an important early step in the process is to choose the most appropriate stack for the needs of the application. DMC's go-to stack for writing web apps consists of a React single-page application (SPA) front end, typically scaffolded with the create-react-app (CRA) tool, connected to a .NET web API backend.

However, this project coincided with CRA reaching the end of its life cycle, as well as some significant changes in the web development ecosystem. At the beginning of 2023, the community consensus seemed to shift from "client heavy" to "server rendered, client hydrated" applications. The React team was also starting to embrace this trend with the announcement of React 18, which shipped with features like React Server Components and Suspense that support this "server first" approach to web development.

All that to say, our partner’s web portal project was a great first opportunity for DMC to try out a new web development stack.

We had two options: either use Vite.js as a drop-in replacement for CRA, hook it up to a .NET backend, and keep our traditional development workflow, or opt for a full-stack framework like Next.js, which had just become the recommended way to scaffold a React application in the react.dev docs. After carefully analyzing the requirements for the client's web portal, it was obvious that the app would be built mostly around heavy routing, forms, and fetching and mutating data; therefore, a client-heavy solution (an SPA) seemed like overkill, and we landed on the second option.

Coincidentally, the Next.js team had just announced the release of their brand-new app router, which leverages the power of the new React 18 features mentioned above.

As you may have guessed from the title, DMC took this new app router for a ride in a production application three weeks after its release. 

Before we take a deep dive into what our team thought of this experience, here is what ended up being the entire stack for the client’s web portal rewrite. 

Client's Web Portal Rewrite

The rest of this article will be divided into the pros and cons of Next.js and whether we think it was a good idea to adopt the app router so soon after its launch. 

The Pros

1. Code Collocation

This is by far the main advantage of using a full-stack framework. Having your backend and frontend code living and running in the same environment removes a lot of the complexity that comes with the "traditional" SPA + JSON backend architecture, such as CORS configuration and keeping the server and the client in sync at all times.
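
To make this concrete, here is a minimal sketch of a Next.js route handler living in the same codebase and runtime as the UI that calls it (the /api/orders route, file path, and data are hypothetical, not the client's actual code):

```ts
// app/api/orders/route.ts (hypothetical): a backend endpoint colocated with the frontend.
import { NextResponse } from "next/server";

export async function GET() {
  // In a real app this would query a database or an internal service.
  const orders = [{ id: 1, status: "Shipped" }];
  return NextResponse.json(orders);
}
```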

2. Routing

Anyone who has written a React application knows that routing has always been a pain point of the library. Traditionally, we had to opt for a tool like React Router, which came with a lot of boilerplate and overhead. Let's look at the React Router setup for a simple three-route app; we can clearly see how cumbersome this might become in a much larger application.

Routing Library
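
For reference, a typical React Router v6 setup for three routes looks roughly like the sketch below (component names and paths are placeholders):

```tsx
// A minimal React Router v6 setup for three routes (component names are placeholders).
import { BrowserRouter, Routes, Route } from "react-router-dom";
import Home from "./pages/Home";
import Orders from "./pages/Orders";
import Account from "./pages/Account";

export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/orders" element={<Orders />} />
        <Route path="/account" element={<Account />} />
      </Routes>
    </BrowserRouter>
  );
}
```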

On the other hand, routes in Next.js are built automatically at build time using the folder structure within your app router. As we can see in this example, folders are used to define routes. A route is a single path of nested folders following the file-system hierarchy from the root folder down to a final leaf folder that includes a page.js(ts) file. 

File Structure
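
As a rough sketch (route and component names are hypothetical), the same three routes in the app router are expressed purely through the file system:

```tsx
// Equivalent routes in the app router are defined by the file system alone:
//   app/page.tsx            ->  "/"
//   app/orders/page.tsx     ->  "/orders"
//   app/account/page.tsx    ->  "/account"

// app/orders/page.tsx (hypothetical): the default export becomes the page for that route.
export default function OrdersPage() {
  return <h1>Orders</h1>;
}
```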

Besides reducing overhead and boilerplate code, this opinionated file structure significantly improves the developer experience. By looking at the URL, we can directly target the part of the codebase responsible for rendering the page, which makes troubleshooting and debugging much easier.

The file structure in the Next.js app router offers more than basic routing. It also provides special files like layout.js(ts) that give developers a way to share UI between multiple pages nested under the same folder segment (and therefore under the same route segment). On navigation, layouts preserve state and do not re-render. Here is an example from the Next.js docs that showcases this feature:

Example from the Next.js
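
Here is a minimal sketch of such a layout (the /orders segment and its contents are hypothetical):

```tsx
// app/orders/layout.tsx (hypothetical): UI shared by every page nested under /orders.
// The layout preserves its state and is not re-rendered when navigating between child pages.
import type { ReactNode } from "react";

export default function OrdersLayout({ children }: { children: ReactNode }) {
  return (
    <section>
      <nav>{/* navigation shared by all /orders pages */}</nav>
      {children}
    </section>
  );
}
```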

Similarly, Next.js makes it seamless to handle errors and loading states by leveraging the power of React’s new error boundary and suspense features. This is another Next.js feature that lifts away the complexity of manually handling loading and error states (via “useState” or any other state management mechanisms). Let’s take a look. 

The error.js file convention allows you to gracefully handle unexpected runtime errors in nested routes. Error.js automatically creates a React Error Boundary that wraps a nested child segment or page.js component. The React component exported from the error.js file is used as the fallback component. If an error is thrown within the error boundary, the error is contained, and the fallback component is rendered.

Error.js File Convention
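
A minimal error.tsx might look like the following sketch (the segment name and messages are placeholders); the error and reset props follow the documented convention:

```tsx
"use client"; // error files must be client components

// app/orders/error.tsx (hypothetical): fallback UI for errors thrown under /orders.
export default function OrdersError({
  error,
  reset,
}: {
  error: Error;
  reset: () => void;
}) {
  return (
    <div>
      <p>Something went wrong: {error.message}</p>
      <button onClick={() => reset()}>Try again</button>
    </div>
  );
}
```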

Similarly, the loading.js file follows the same convention by wrapping its segments in a React Suspense boundary that shows a fallback UI while those segments are waiting for the data they need to complete their rendering.
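
A matching loading.tsx sketch (again with a hypothetical segment name) is as simple as:

```tsx
// app/orders/loading.tsx (hypothetical): automatically shown via Suspense
// while the /orders page waits for the data it needs to render.
export default function OrdersLoading() {
  return <p>Loading orders…</p>;
}
```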

3. Data Fetching

Now, on to my favorite part: data fetching. Getting data to power your UI is a fundamental building block of modern web development, and, to be honest, it is an area that React has historically struggled with. Traditionally, React developers had to roll their own data fetching solutions using the controversial useEffect() hook (with all the problems that came with it, but we will leave that for another discussion) or opt for a third-party library like React Query, which means adding more external dependencies to your project.

With React 18, the core team announced a new way of fetching data: React Server Components (or RSCs). RSCs are no different from your traditional React components in the sense that they are simply functions that return JSX elements to be converted to HTML that is then displayed in your web browser. The uniqueness of RSCs comes from the fact that you can render them on the server. This is, in my opinion, very powerful because your UI is now closer to your data, which means it can integrate with typical server-side operations like input/output. In other words, React Server Components give you the ability to directly access your storage resources at the component level and use the data from those resources to populate the UI. Here is an example:

Async Function UI

Yes, it is that simple! (And yes, RSCs are allowed to be async since they are rendered on the server). 
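
For comparison, here is a similar sketch of an async server component fetching its own data (the database client and query are hypothetical, not the portal's actual code):

```tsx
// app/orders/page.tsx (hypothetical): an async React Server Component.
// It runs only on the server, so it can talk to a database or other I/O directly.
import { db } from "@/lib/db"; // hypothetical database client

type Order = { id: number; status: string };

export default async function OrdersPage() {
  const orders: Order[] = await db.query("SELECT id, status FROM orders"); // assumed API
  return (
    <ul>
      {orders.map((order) => (
        <li key={order.id}>
          #{order.id}: {order.status}
        </li>
      ))}
    </ul>
  );
}
```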

Next.js leverages the power of both traditional (or "client," if you will) components and the brand-new server components, allowing developers to use the same language and the same framework to write code on both the server and the client without too much context switching.

One thing worth noting here is that each environment (server/client) has its own set of abilities and constraints. As a result, there are certain operations (mainly I/O) that are better suited for the server, whereas interactivity (and any event/state-driven operations, really) should be left to the client. Luckily, Next.js is very flexible when it comes to crossing that boundary between the server and the client. By default, the root component for a given tree (the one exported by the page.tsx file) is a server component. That is typically a good place to fetch data or perform any server-side logic. The data is then passed down to children of that component, which can be either client or server components depending on how much interactivity is needed. By simply adding the "use client" directive at the top of a component file, you tell Next.js that the logic driving that component needs to be shipped to the browser.
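
Here is a small sketch of a client component opted in with the directive (the component itself is hypothetical):

```tsx
"use client"; // ship this component's logic to the browser

// components/AddToOrderButton.tsx (hypothetical): interactivity stays on the client.
import { useState } from "react";

export default function AddToOrderButton() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>Added {count} item(s)</button>
  );
}
```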

I think it is helpful to think of the flow of the code in your application as unidirectional. In other words, during a response, your application code flows in one direction: from the server to the client. Here is a good diagram that showcases this point:

Diagram of Code Flows from Server to Client

The advantages of the server-rendered-by-default approach are numerous. Your web application will now have significantly better SEO (search engine optimization) compared to an SPA, because a web crawler will see a fully fledged HTML page after sending a request to your app's URL rather than an empty shell that only gets completed after reaching the client. This also allows for a faster initial page load, as we are not waiting for the client to download all the JavaScript from the server and run it to render the UI. Speaking of which, server-side rendering significantly reduces the JavaScript bundle sizes shipped to the client, as part of the UI rendering is now done on the server.

The Cons:

1. A steep learning curve: 

For experienced React developers, Next.js and React Server Components introduce a new mental model that requires some time to get used to. In fact, the team on the client's project ran into several issues and bugs in the early stages of development simply because we were treating a Next.js application the same way we treat traditional single-page applications; however, once the server-first approach clicked and we got better at drawing the boundary between the server and the client, we started to see the benefits of Next.js's app router.

2. Limited resources and third-party library support: 

Adopting a new technology a few weeks after its release is always a challenge. Learning materials and tutorials were very hard to find, and best practices and paradigms around the framework were not fully established. Furthermore, many libraries were not compatible with Next's app router and RSCs, which made it difficult to set up things like authentication and styling; however, to be fair, this has gotten better with time, and it is much easier to start with Next.js today compared to 8 months ago.

3. Caching: 

I think this is really the only Next.js feature I dislike. By default, Next.js will try to cache as much as it can to enhance your app's performance. This is achieved by adding multiple layers of caching both on the server and on the client (I will not go into details here, as this is quite a complex topic). The problem with this approach is that it can introduce very subtle and hard-to-resolve bugs by causing stale data to appear throughout your UI, which leads to your server and client state going out of sync. To summarize, a new Next.js developer must take the time to read through the documentation and make sure they understand the ins and outs of the Next.js caching philosophy.
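
As an illustration, here are two documented ways to opt a route out of cached data (the file and URL are hypothetical):

```tsx
// app/orders/page.tsx (hypothetical): two documented ways to opt out of cached data.

// Per-route: mark the whole segment as dynamic so it renders on every request.
export const dynamic = "force-dynamic";

// Per-request: tell fetch() not to cache this particular call.
async function getOrders() {
  const res = await fetch("https://example.com/api/orders", { cache: "no-store" });
  return res.json();
}

export default async function OrdersPage() {
  const orders = await getOrders();
  return <pre>{JSON.stringify(orders, null, 2)}</pre>;
}
```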

Conclusion: 

With the client's project in the final user testing phase, I can confidently say that Next.js's app router was a great choice for this rewrite. It came with clear enhancements to both the end-user and developer experiences. Its opinionated file structure and code collocation made it very easy for our team to onboard engineers and later hand the code base over to the client for maintenance. I am glad that we took a chance on Next.js for such a big project because it gave us the opportunity to try out a modern approach to web development and expand the DMC Application Development toolbox.

Luckily, Next.js adoption keeps growing, and the ecosystem around it is rapidly maturing, which I am confident will eliminate most of the hurdles we experienced as early adopters. I'm excited to witness the future of Next.js and the entire React ecosystem in this new era of server-side rendered React.

Learn more about our Application Development expertise and contact us for your next project. 

Aziz Rmadi - Tue, 12 Mar 2024 08:00:00 GMT
Chicago Ski Trip 2024: Da WiSKInson CheeSKInson
https://www.dmcinfo.com/latest-thinking/blog/id/10573/chicago-ski-trip-2024-da-wiskinson-cheeskinson

Chicago winters can be brutal, but there is no better way to escape the hustle and bustle of the Windy City than with a day trip to Wisconsin to hit the slopes. DMC's Chicago office decided to shut down their laptops and grab their snow gear for a day of skiing at the first annual Chicago ski trip: Da WiSKInson CheeSKInson.

26 DMCers headed to Alpine Valley Resort in Walworth County, Wisconsin, about 90 minutes away from DMC Headquarters. While many went skiing, it was not the only activity.

“We had a few people snowboard, me included,” Aleks Konstantinovic, Systems Engineer, said. “There was nobody who sat out, we were all hitting the slopes!” 

Chicago DMCers Posing on the Slopes

While many DMCers had skied or snowboarded before, a handful of participants skied for the first time at the event.

“We had some first timers, and they ended up signing up for lessons at the resort,” Aleks said.

Group photo at the Restaurant

Alpine Valley Resort had a great assortment of slopes for skiers with different skill levels, according to Josh Wrobel, Systems Engineer.

“There were about six different lifts. Depending on which one you went up, there was a variety of terrain that you could ski on. Some of it was better suited for beginners, while other paths were more advanced," Josh said. "There were a few terrain parks where people were doing tricks. Some of the harder paths were more fun for me because they were longer, but I didn’t have too much of a preference. Going with friends made it really fun!”

Chicago DMCers Posing on the slopes

Selfie Time!

A particular moment of the day stood out for Josh.

“I got stuck on a chairlift for about 25 minutes with a few other DMCers. That was pretty memorable!”

Group Photo on the Slopes

Da WiSKInson CheeSKInson was a successful event that many hope will make a comeback next year.

“I think overall it’s just a fun trip and would be happy to do it again next year!” Josh said. 

Learn more about DMC's culture and check out our open positions!

Sofia Sandoval - Mon, 11 Mar 2024 19:45:00 GMT
How to Configure NextGen Archiving in WinCC OA to use a Microsoft SQL Server
https://www.dmcinfo.com/latest-thinking/blog/id/10548/how-to-configure-nextgen-archiving-in-wincc-oa-to-use-a-microsoft-sql-server

This two-part blog series is intended to be a step-by-step overview on how to set up and utilize an MS SQL Server and WinCC OA's NextGen Archive (NGA). Information for a general setup exists via the WinCC OA Documentation (see Further Reading/Links), but this walkthrough aims to be more detailed and explicit in the necessary steps.

  1. How To Create a Microsoft SQL Server Install for NextGen Archiving
  2. How to Configure NextGen Archiving in WinCC OA to use Microsoft SQL Server

Table of Contents:

  1. Notes/Prerequisites
  2. WinCC OA
    1. 2.1 Project Setup
    2. 2.2 NGA Configuration
    3. 2.3 Archive Group Configuration
    4. 2.4 DPE Archive Configuration
    5. 2.5 Data Retrieval
  3. Further Reading/References

1. Notes/Prerequisites

Required programs

This demo was implemented using:

  • WinCC OA 3.18 P006
  • Microsoft SQL Server 2022 Express
    • NOTE: Other versions of MS SQL may work with NGA, but it has not yet been verified by DMC.
  • Microsoft SQL Server Management Studio 18
  • Windows 11

Assumptions:

  • Proper licensing for NGA is configured.
  • The OS user has Windows administrator privileges.

2. WinCC OA

2.1 Project Setup

Back to Table of Contents

  1. Create a new project with NGA configured.
    1. When creating a new project, proceed with the project setup as normal.
    2. Under the “General Settings” step, ensure that the Use NextGenArchiver option is selected.
    3. For this demo, I’m creating a project titled NGA_Demo located in the “C:/WinCC_OA_Proj” directory.

Screenshot 1

If converting an existing project from HDB/RDB to NGA, then follow these steps in the “Converting existing project to NextGen Archiver Project” section: NGA Notes and Restrictions

2.2 NGA Configuration

Back to Table of Contents

It’s time to fire up OA and open GEDI.

  1. Create new back-end.
    1. Navigate to the “Database Engineering” window via “SysMgm/Database/Database Engineering”.
    2. Under the “Backend list”, click the + icon (Add new backend).
    3. Name the Backend a user-friendly name.
      1. I used the title MSSQLEXPRESS.
      2. NOTE: The Backend name does not need to match the server name, so use a name that makes most sense for your application.

Screenshot 2

Screenshot 3

2. Configure the MS SQL Backend “General Settings – Basic Configuration”. 

  1. Specify the following parameters:

    • Name: <User-friendly backend name>
    • Profile: MSSQL_nonRedundant
    • Database Connection: <host>/<SERVER NAME>
    • Database Username: <winccoaUsername from db.windows.config>

  1. NOTES:
    1. The Profile option cannot be changed later.
    2. If using a redundant server, use the MSSQL option for Profile.
    3. Don’t worry about specifying the Password; the field will clear when the initial configuration is saved.
  2. Click the Password field, enter the winccoaPassword from db.windows.config, and click OK.

Screenshot 4

Screenshot 5

3. Configure the MS SQL Backend “Extended Settings”.

  1. Specify the following parameters:

    • Database Control / Execution File: NGAMSSQLServerBackend
    • Database specific configuration / db.database: <dbName from db.windows.config>

Screenshot 6

4. Finish MS SQL Backend setup

  1. Select the Active option.
  2. Click Save.

Screenshot 7

2.3 Archive Group Configuration

Back to Table of Contents

Now that the database connection has been established, we can set up Archive Groups.

  1. Create new archive group
    1. Navigate to the “Runtime Engineering” window via “SysMgm/Database/Runtime Engineering”.
    2. Under the “Archive Groups”, click the + icon (Add a new group).
    3. Name the Archive Group.
    4. I used the title DEMO.
    5. Ensure the Active option is selected.
    6. Configure the “Storage Settings” section as desired.
    7. Click Save.

Screenshot 8

Screenshot 9

Screenshot 10

Screenshot 11

2. Verify SQL Archive Group Creation

  1. Open Microsoft SQL Server Management Studio.
  2. Right click dbo.archive_groups and select the Select Top 1000 Rows option.
    1. The new archive group should be visible in the “Results” section.

Screenshot 12

2.4 DPE Archive Configuration

Back to Table of Contents

Now that we’ve created an archive group, we can apply the archive group to a DPE so that its historical data can be logged and tracked.

  1. Insert _archive config
    1. Within PARA, right click the target DP or DPE and select the Insert config option.
    2. Select Archive settings.
    3. Click OK.

Screenshot 13

Screenshot 14

2. Select archive group

  1. Underneath the target DP/DPE, select the new _archive option.
  2. Select the desired archive group in the “Archive Group” drop-down.
  3. Ensure the Active option is selected.
  4. Click OK.

Screenshot 15

3. Verify SQL Archive Group Application

  1. Open Microsoft SQL Server Management Studio.
  2. Right click dbo.elements and select the Select Top 1000 Rows option.
    1. The newly configured DPE(s) should be visible in the “Results” section. 

Screenshot 16

2.5 Data Retrieval

Back to Table of Contents

Now that DPEs have been configured with archiving capabilities, we can retrieve their historical data.

  1. Verify SQL DPE Archiving.
    1. Open Microsoft SQL Server Management Studio.
    2. Right click dbo.event_<segment_id>_a and select Select Top 1000 Rows.
      1. The segment_id for each archive group can be found in the dbo.segments table.
    3. If values have been changed since archiving was configured, entries should be visible in the “Results” section.

Screenshot 17

2. Retrieve historical data.

  1. The WinCC OA functions dpGetPeriod() and dpQuery() can be used to retrieve historical data.
    1. The two examples below demonstrate a test panel running each of the two aforementioned functions and printing the results to the Log Viewer.
    2. The returned data should match what is viewed in the Microsoft SQL Server Management Studio tables.
  2. Be sure to understand your server’s backup and retention policy to determine what data and how much historical information can be accessed.

NOTE: For help writing SQL queries, use the SQL Panel found in SysMgm/Reports/SQL-Query

Screenshot 18

Screenshot 19

Screenshot 20

3. Further Reading/References

Learn more about our Manufacturing Automation and Intelligence expertise and contact us for your next project. 

Nick Leisle - Wed, 06 Mar 2024 16:46:00 GMT
Fun at DMC - Volume 21
https://www.dmcinfo.com/latest-thinking/blog/id/10571/fun-at-dmc--volume-21

Check out all the fun DMCers have had over the past month! 

Boston

The Boston office went axe throwing! 

Boston DMCers went Axe Throwing - Group Photo

Chicago

Chicago DMCers had happy hour at the office with the help of Drinkbot, the robot bartender!

Chicago DMCers getting drinks from DrinkBot!

The Chicago office also had pizza happy hour! 

Pizza Happy Hour at the Chicago Office!

Dallas

The Dallas office had fun at Puttshack! 

Dallas DMCers having fun at Puttshack!

Denver

The Denver office hosted the annual DMSki extravaganza! 

Group SKi Photo

Skiing in Denver

Denver also had their holiday party! 

Group Photo at the Denver Holiday Party

Houston

A few Houston DMCers went to practice their swing at Top Golf

Houston DMCers at Top Golf

They also went to Escape It Houston and did an escape room!

Houston DMCers having fun attempting an Escape Room

The Houston office also attempted to Skeet Shoot! 

Skeet Shooting attempt

Skeet Shoot Group Photo

San Diego 

The San Diego office went bowling and had a nice dinner afterwards with our founder Frank!

San Diego Group Photo with Frank at a bowling outing

St. Louis

The St. Louis office went to Sandbox VR and played VR video games!

St. Louis VR Gaming

ST. Louis VR Gaming

 Learn more about DMC's culture and explore our open positions

Greg Kimura - Tue, 05 Mar 2024 22:57:00 GMT
How to Configure a SuperTrak System
https://www.dmcinfo.com/latest-thinking/blog/id/10542/how-to-configure-a-supertrak-system

SuperTrak is a smart conveyance system that uses electromagnets to move pallets around a track with very fine precision. The nature of SuperTrak’s magnet-based controls offers several benefits over a more typical conveyance system by allowing each pallet to move independently of the rest of the pallets and providing real-time, precise location data for each pallet on the track.

In this blog we will walk through the basics of configuring a SuperTrak system.

Step 1: TrakMaster and SuperTrak Simulation

To begin working with SuperTrak, you will need to get the TrakMaster and SuperTrak Simulation tools from SuperTrak support.

To run the SuperTrak Simulation, simply run the program after installing. The simulation will start running and you should see this window.

Photo 1

After installing and running the TrakMaster software, you should see an option to connect to a system with the IP address used by the simulator (127.0.0.1). Click on the system and hit “Connect” to get to TrakMaster’s main window.

Photo 2

TrakMaster will ask you if you want to get started with a tutorial. For now, click “close” to continue to TrakMaster. 

Note: if you want more information on SuperTrak configuration after reading this blog, you can return to the default tutorial the next time you connect to the simulator with TrakMaster.

From TrakMaster, go to “Setup” > “Quick Start” to initialize your system to a common default loop. 

To provide a more thorough understanding of the process, we’re going to start our track from scratch.

Photo 3

Step 2: Define the Track Layout

In TrakMaster, expand the “Advanced” tab and go to system layout. Here you can define the layout of your track. 

For this example, we are using a 3-meter-long loop with wide curves, but you can add/remove additional track to fit your system using the Append/Insert/Remove buttons at the bottom of the screen. You can also select whether your track is a loop and determine the standard direction of flow.

Note: For loop tracks, the direction “Right” means that pallets will move to the right on the bottom half of the loop, a counterclockwise movement if you are looking at the track from above.

Photo 4

Next, hit the “Save” button at the top of the screen. You will get a prompt about what you want to save. Make sure at minimum “System Layout” is checked, but also feel free to select all and then hit OK. 

Next, you’ll be asked if you want to reset the system. Feel free to hit “Yes”.

Now if you go back to the “System Dashboard” window you should be able to see your track layout.

Photo 5

Step 3: Configure Track Parameters

Now that your track layout is configured, it is time to define the details of your track motion. 

Navigate to the “Global Parameters” section and look through the various details listed.

Photo 6

These parameters affect all sorts of things about the track but, most importantly, set the basic motion parameters for how pallets will move.

Note: We won’t go into every parameter listed but make sure you adjust the maximum velocity and acceleration for your system to an appropriate level. Also, set the pallet length and pallet shelf offset to match the tooling you are going to install on your pallets.

It is often a good idea to lower maximum velocity and acceleration during commissioning while the status of the track/system is still in flux.

Step 4: Define Regions

One of the most versatile tools that SuperTrak has is its regions. 

Regions are defined sections of track where the parameters of the track are changed.

To begin making regions, go to the “Global Parameters” section and navigate to the “Regions” tab. Here, double click on an empty region ID to begin configuring. 

Photo 7

In the above example I created a region with a larger pallet gap than the rest of the track. 

Regions like this can be useful if you are loading large objects onto your pallets that would collide if the pallet gap distance was not increased.

Take some time to think about what regions your system might need. A common region type defines an area where pallets must move more slowly so that something can interact with the pallet, such as a burst of air that cleans the pallet of any lingering dust.

Photo 8

Once you have created your regions you can go back to the “System Dashboard” to see them laid out on the track. 

From the “Regions” tab on the right, you can also adjust the positions of any regions to ensure they match up with the desired region locations for your system.

Photo 9

Note: Regions cannot overlap. If you need overlapping region effects, you will need to divide your regions into pieces and set them up independently of each other. 

Step 5: Setup Targets

Targets are the primary way SuperTrak controls which pallets go where. Typically, when a pallet needs to move from one location to another, the target it is located at is told to forward its pallet to the next target.

A target should be placed at each location where a pallet will need to stop. To create targets, go to the “Targets” tab under the “Global Parameters” section. Here, you can enter the section and position you want each target to be in.

Photo 10

Here I’ve created a few targets around my track and given them some basic descriptions to make it clear which target is which.

Notice that when you go back to the System dashboard, you will see your targets represented on the screen. 

Photo 11

Step 6: Simulate Your Track

Now that we have our track set up, we can start simulating the track. Simulation will not only allow us to verify that we’ve set up the system correctly, but will also allow us to begin running tests on our system to determine roughly how long it will take for pallets to cross our track.

Step 6-a: Change Track Settings for Simulation

There are a few settings we’ll need to adjust to get your track running in Simulation properly.
First, go to the “System Layout” page under the “Advanced” section. From there, set the “Enable Control” option to “TrakMaster”

Photo 12

Next, go to the “Sections” tab and set the “Load Target” for each section. 

The “Load Target” is the target ID that pallets on each section will head to on system startup. You can select “Copy to all Sections” to give all sections the same load target, or you can set it individually for each section.

Photo 13

Now go back to “Global Parameters” and set “Assign ID to new pallets” to true. This will cause SuperTrak to assign a number to each pallet to help you keep track of them as they move around the system. 

If your system uses an IR reader to assign pallet IDs you will want to change this later, but for simulation, you will want this feature. Also make sure the “Enabled Simulation” box is set to true.

Photo 14

Finally, in the “Advanced” section, go to “Simulation Configuration” and define the positions you will want vehicles to start in for Simulation. 

Photo 15

Here, I’ve just created two pallets in random spots of the track.

Step 6-b: Run your Simulation

To begin running your Simulation press the “Save” button at the top and confirm the parameters you want to save. 

Then, when prompted, specify that you do want to restart the system. If you go to the systems dashboard, you should see the pallets you defined earlier appear on the track.

Photo 16

Now, when you press the “Enable System” button at the top of the screen the pallets should be directed to whatever you set as your load target.

Photo 17

Now, you can select a pallet in the “Pallets” tab on the right and direct it to a target by giving it a target ID and pressing “Go”.

Photo 18

Note: Make sure you watch the pallets as they cross targets to ensure that they’re reacting appropriately.

Step 6-c: Auto Release Pallets

Now that you have your simulation moving pallets around the track, you can take advantage of another SuperTrak feature. This feature allows you to move your pallets around the track using simple automation.

Simply go to the “Targets” section in the left-hand tab and expand the “Auto-release” section. 

Photo 19

Note: Here you can set a target to auto release a pallet by selecting “After a minimum” and inputting a time in milliseconds that the target will wait before sending the pallet. 

On its own, though, that won’t do anything; you still need to tell the target where to send the pallet. 

Further down in the auto-release tab, set the “Use move configuration” option to “Local” and set the ID to 1.

Photo 20

Now go back to the “Global Parameters” section and under the “Move Configurations” tab, scroll down to the bottom where you should see options to create “Move Configurations” for each target. 

Photo 21

Create a configuration in Target 1 to move to Target 2. 

Now when your track is running, Target 1 will wait 400ms and then automatically forward its pallet to Target 2.

Redo this process so that each Target in your system forwards pallets to the next target and press the “Save” button. Now you should be able to go back to the “System Dashboard” and monitor your pallets as they continuously circle around the track.

Photo 22

This automatic releasing of pallets can be a useful tool for estimating the throughput of your system if you know roughly how long each process on your track will take. 

If you go to the “Statistics” page, you can even see useful information about the time spent heading to each target.

Photo 23

Step 7: Setup your System for PLC Control

While controlling your track through TrakMaster is useful for setting up and testing your configuration, typically, systems will ultimately want their SuperTrak system controlled by a PLC. 

SuperTrak has built-in functions for communicating with a variety of PLCs. For this blog we’re going to go through configuring the system to talk to a Rockwell PLC over Ethernet/IP, but much of the process for connecting to other PLCs is similar.

First, go to the “System Layout” page under “Advanced” and from “Enable Control” select either “Section Control” or “System Control”. These are both for controlling from an external system. The difference is whether you enable sections individually or all at once.

Photo 24

Now go to the “Control Interfaces” section. Here you can configure an EtherNet/IP interface. 

Modify one of the configurations by entering its IP settings. Note that these settings are not the IP address of your PLC, they are for the network card in the SuperTrak system that the PLC communicates with. This is also a different IP address than the one you connect to via TrakMaster.

Photo 25

Next, go to the “Data Layout” tab. Here you can set up what data gets sent to/from the PLC. 

Feel free to select “Defaults” > “Ethernet/IP” to use the recommended default values. But if your system uses a lot of targets or network IO, you can increase/decrease the amount of space used by each type of message.

Photo 26

Now select “Save”, restart your system, and you should be all set to start setting up your PLC to control the SuperTrak system!

Learn more about our Manufacturing Automation and Intelligence expertise and our open positions!

Jack Haskell - Sun, 18 Feb 2024 21:18:00 GMT
Intro to CI/CD Pipelines
https://www.dmcinfo.com/latest-thinking/blog/id/10560/intro-to-cicd-pipelines

GitLab defines Continuous Integration (CI)/Continuous Deployment (CD) as “an essential part of DevOps and any modern software development practice.” I prefer to describe it as one of the coolest and niftiest parts of application development. CI/CD pipelines allow you to automate processes that ensure the quality of your code and deployments.

You can perform static code analysis to verify your code doesn’t include major vulnerabilities. You can run a linter if you’re nitpicky like I am and want to ensure new changes are formatted correctly. You can run unit tests to be sure the new code you wrote did not introduce regressions. You can deploy your web app to the cloud. You can create an installer for your desktop app. I could go on but will instead just repeat: the coolest and niftiest parts of app dev.

I should briefly note that the full definition of CI/CD (like Wikipedia notes here) treats it more as an execution style: CI is the frequent merging of several small changes into a main branch, and CD is the production of software in short, efficient cycles so that it can be continuously delivered.

While frequent integration of code into the main branch and the delivery of new features are very important, any references to CI/CD in the remainder of this blog will refer to the concepts of automated code validation and automated code deployment (via CI/CD pipelines).

Into the Nitty Gritty

Here’s some clarifying information before I rave further about CI/CD pipelines:

Continuous Integration (CI) — focused on building and testing code as it is committed to the remote repository.

Continuous Deployment (CD) — focused on deploying and releasing software.

Why Should You Care?

CI/CD pipelines are super nifty but do require time to set up. Someone needs to write the code that can run automatically, right? If you’re on a project with a tight timeline, it can be tough to justify devoting extra time to configuring your web app so it can deploy automatically (especially when you could just deploy it using the Azure CLI yourself).

But what happens when you have multiple developers working on the project who also need to deploy their changes to the web app? What if you get pulled to another project and can’t manually deploy? What if you forget to run the unit tests before deploying and you deploy broken code to your site?

Writing a CI/CD pipeline is an investment into the long-term health and maintainability of your code base. Continuous integration can verify that new code is up to par (via static analysis, automated unit test execution, code coverage, and linting). If a developer commits code that can’t build or doesn’t pass unit tests, your pipeline will flag that there’s an issue.

Continuous deployment reduces the overhead of deploying your code base. It ensures that each deployment step is executed when new code is deployed (none are forgotten through human error) and that a single person doesn’t have the overhead of manually deploying the app each time code needs to be pushed.

Environments

Setting up a CI/CD pipeline can also make it much easier to support multiple environments for your application. In the past, I’ve set up ‘development’, ‘staging’, and ‘production’ branches in addition to ‘main’. I’ve then configured my pipeline to deploy to the appropriate environment when code is merged into its matching branch.

This makes it easy to determine which code is live for each branch or to generate diffs between environments. It also means that code reaches an environment only after a pull request into the matching branch, giving you the opportunity to review the code before it goes live.

An important note regarding production: be sure to configure appropriate permissions or gates for triggering your pipeline to production. Those gates could be rules on the pull requests that merge to a production branch or restricting permissions of the users who can trigger a branch that deploys to production.

A fun note: you can also use IaC to deploy the resources that host your application based on each environment, but that’s a topic for another time.

How to Configure CI/CD

Most source code repositories (e.g. GitLab, Azure DevOps, GitHub, Bitbucket) offer their own flavor of the YAML file required to configure a CI/CD pipeline. Their documentation will be the most helpful resource on the specific syntax required to make their YAML files run.
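
As a rough illustration, a minimal GitLab-flavored pipeline that builds, tests, and deploys per branch might look like the sketch below (job names, scripts, and the deploy command are placeholders, not an actual DMC pipeline):

```yaml
# .gitlab-ci.yml (sketch): stage names, scripts, and deploy commands are placeholders.
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  script:
    - npm ci
    - npm run build

unit-tests:
  stage: test
  script:
    - npm test

deploy-staging:
  stage: deploy
  environment: staging
  script:
    - ./deploy.sh staging        # placeholder deploy step
  rules:
    - if: $CI_COMMIT_BRANCH == "staging"

deploy-production:
  stage: deploy
  environment: production
  script:
    - ./deploy.sh production     # placeholder deploy step
  rules:
    - if: $CI_COMMIT_BRANCH == "production"
      when: manual               # manual gate before production goes live
```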

Overview

Writing a CI/CD pipeline is most likely worth your time if:

  • You have multiple developers working on your code base.
  • You want to automatically test code quality whenever a developer pushes to the remote.
  • You want increased repeatability and reliability of your deployment processes.
  • You want to reduce developer overhead when new code needs to be deployed.
  • You want to support multiple environments.

(It’s also just super cool and nifty).

Learn more about DMC's Application Development services and contact us for your next project. 

Gabby Martinez - Thu, 15 Feb 2024 09:08:00 GMT
Tools for Debugger Logging on Embedded Devices
https://www.dmcinfo.com/latest-thinking/blog/id/10557/tools-for-debugger-logging-on-embedded-devices

An essential part of the development process for embedded devices is debugger logging. With multiple threads running simultaneously on a resource-constrained microcontroller, it can be hard to know where to start debugging.

Setting debugger breakpoints has long been the standard approach to debugging firmware, as it allows engineers to view variable values and investigate the call stack; however, breakpoints interrupt the program flow, and, with multiple threads running at the same time, that can cause unexpected code behavior.

In cases where I have embedded devices in communication with each other or with an external PC application, I have found myself resorting to logging to debug code behavior. The tools that I have found helpful are RealTerm for capturing log entries and BareTail / Notepad++ for viewing log entries.

Prerequisites: The firmware, either custom or open-source libraries, should have been configured with debugging/logging output. If you need to add debugging output to your program, check out our Using Segger Real Time Transfer with an EFM32 blog for an example!

Tool 1 – RealTerm

RealTerm is a terminal program for serial communication and is one of the most frequently used tools in the embedded team at DMC to capture data transmission through various protocols (e.g. UART).

  1. In the Port tab, select the baud rate and COM port for the debugger output. You can check which port your debugger is attached to using Device Manager. Hit “Open” to connect to the port, and you should start seeing some logging statements in the display window.
  2. To capture the logging statements in a file, navigate to the Capture tab and select “…” to name the file and pick its location. You can then click “Start: Overwrite” to route the logging statements to the file; however, I always default to clicking “Custom” in the Timestamp section to add timestamps to the logs. A trick I use often is right clicking on the “Custom” option to add “.zzz” to the current format to display milliseconds in the timestamp. I also like checking off the “Display” option just to have the logging statements show up in the display window as well.

RealTerm: Serial Capture Program settings menu

Tool 2 Option 1 – BareTail

BareTail is a real-time log file monitoring tool that allows users to view log file changes live as they are being written and works well in conjunction with RealTerm. Its “Highlight” feature is particularly helpful for identifying log statements of interest as users can color code log entries based on keywords.

  1. Launch BareTail. Open the log file. You should see log statements being written to the end of the file in real time. The “Follow Tail” checkbox indicates if the view follows the log changes at the end of the file. If you use the scroll bar to scroll up or disable the “Follow Tail” checkbox, the view will stop following the end of the file, but new log statements are still constantly being written to the file.
  2. If you want to color code certain log entries, click “Highlighting.” In the popup window, enter the text of interest and select the font and/or background color. Click “Add” to see how the log file will look and “OK” to apply the color-coding rule.

Note that BareTail is not a text editor: it does not allow users to edit the log file, which seems reasonable since we do not want to alter the log while we are trying to analyze its content! It also does not have search or filter capabilities; its Highlight feature points out important log statements and largely replaces the need for search and filter.

Baretail highlighting window

Tool 2 Option 2 – Notepad++

If you find yourself needing to edit or search within a log file, you may want to use Notepad++ in place of BareTail. Notepad++, as the name suggests, is an upgraded version of Notepad: a powerful, free source code editor that supports various languages. In addition to its extensive range of editor features, it has an “eyeball” icon that allows users to track log file changes in real time.

  1. Launch Notepad++ and open the log file. The current content of the file is statically shown up to the time point when the file is being opened.
  2. To view real-time file logging, click on the “eyeball” icon and you should see log entries being written to the end of the file. Toggling the icon will switch between real-time logging and static modes. Note that, unlike in BareTail, you can’t scroll up the file to view previous entries while new entries are being logged. You will need to disable the “eyeball” icon to stop the logging to scroll up, analyze your log, or make any edits. If you want an updated view of the log, you can enable the “eyeball” icon again.
  3. There are two ways to highlight log entries based on keywords. Unlike in BareTail, only the keywords are highlighted instead of the entire log entries.
    1. You can select a keyword in the file, right click to select “Style all occurrences of token” and pick a color/style to apply.
    2. You can also go to Search > Mark to type in the keyword and click “Mark All.” This approach does not provide different color options to pick from; however, it has more complicated search features, e.g. using regular expressions. It also has a neat “Bookmark line” option that adds bookmarks to any line that contains the keyword. You can step through each bookmarked line by going to Search > Bookmark > Next Bookmark. You can clear the keyword highlights and all the bookmarks by going to Search > Mark > Clear all marks and Search > Bookmark > Clear All Bookmarks respectively.

notepad ++  style settings menu

Mark all menu in Notepad ++

Learn more about DMC's Embedded Development and Embedded Programming services and contact us for your next project.

Debbie Leung - Wed, 07 Feb 2024 03:29:00 GMT
DMC Dogs in the Office
https://www.dmcinfo.com/latest-thinking/blog/id/10564/dmc-dogs-in-the-office

A wonderful aspect of working at DMC and going into the office is seeing a furry friend approach you, tail wagging. For as long as many DMC employees can remember, dogs have been allowed to come into the office.

There are only two simple rules for bringing a dog to the office. First and most importantly, they must behave well. This means that they play nicely with other dogs and humans alike, they don’t destroy things around the office, and they are able to keep an overall peaceful office setting for all. A second, much easier hoop to jump through is getting the approval of our Founder & CEO, Frank Riordan.

As time moves on, more companies are including policies that allow pets into office settings. DMC believes they are great additions to the workplace and make all those involved much happier.

Lenny sitting in a chair in Chicago

Lenny lays in a chair supervising the Chicago office

Dunkin laying on his blanket bed in Chicago

Dunkin sits on his blanket bed in the Chicago office

Bently sticking out his tongue in the Chicago office

Bently sticks out his tongue at his owner Nicollette from the other side of the Recruiting & Onboarding team's quad in Chicago

Pepper tired after a long day at the Chicago office

Pepper is tired from a long day at the Chicago office

Many employees at DMC take advantage of this rule and bring their dogs into the office — some visit the office occasionally and others every day.

Allowing dogs in the workplace is great for some obvious reasons. This allows DMC pet owners to not have to stress over their dogs being home alone all day and trying to organize a way to feed them or take them out throughout the workday. The dogs also seem to enjoy the policy as they get to be in a more social environment as opposed to being home alone for the day.

A third beneficiary of this policy is for those in the office who don’t have dogs. Dogs are known to reduce stress and boost human morale. The joy that stems from the sounds of excitement coming from the voices around the office when DMCers get to pet and greet a dog is contagious.

Charlie getting belly rubs

Charlie begs for belly rubs to distract Aaron and others from Chicago's Embedded team 

Ronin having a lunch break in San Diego

Ronin enjoys his lunch break in San Diego

Arthur checking floor levelness in Chicago

Arthur checks the Automation team's floor levelness in Chicago

Margaux takes a power nap in the Test & Measurement quad in the Chicago office

Margaux takes a power nap in the Chicago Test & Measurement team's quad

Bailey napping in the marketing quad in Chicago

Bailey naps in the Marketing Quad during her first week in the Chicago office

As a dog owner myself, being able to bring my dog Lenny is an amazing perk of working at DMC. Not only do I enjoy spending time with him throughout the day, but it also helps in situations where he would otherwise be left home alone. Dogs left home alone can become stressed out, so this policy helps avoid stress for both dogs and their owners!

A day for a DMC Dog is very consistent. They start with a strong breakfast of kibble and some water to wash it down. From there, they can situate themselves in their owner’s quad or office and may even put their toys in places their owner will only find three weeks later or see in a Slack channel. After this, they start their first of many naps. These naps must be taken underneath desks or in high foot traffic areas — to ensure the dogs get the type of attention they are looking to receive while in the office of course.

Occasionally, someone may be in the kitchen and the sound of a dog’s nails on the hardwood is coincidentally heard; these things must be investigated thoroughly. Every couple of hours the dogs must get dressed in their harnesses, some in coats too, to go outside and take care of business. After a group lunch, the dogs have had their fill of food and love, and they can begin their real work: which is supervising. DMC dogs are well trained in this field as they can sit anywhere and watch DMCers fulfil one of our Core Values: Make Things Happen.

The last duty of a DMC dog is to wander around and hope someone pets them, talks to them in a baby voice, and maybe even spoils them with a treat. Frank’s office is a guaranteed treat haven.

With all this talk about dogs, one may consider asking “what about cats?” Well, DMC has that covered too. On certain occasions, cats have also visited the office — some for a few hours and others for an entire day. Cats do need some higher levels of supervision as they can be quite curious, so it often helps to work in a room or closed office to prevent cat-based mischief.

Sasha testing ribbons samples

Sasha tests ribbon samples in Chicago

Pepper in her Halloween costume

Pepper dresses up in her Halloween costume in Denver

Ben and Leon doing a code review in Chicago

Ben assists Leon in a code review

Nola providing IT support in Denver

Nola provides IT support in DMC Denver

Bowser's first day at the office

Bowser's first day in the office was exhausting

Nooner and Luna in a meeting

Nooner and Luna join Elizabeth in a meeting in Seattle

Denver tongue inspections

Denver tongue inspections

Chicago weekly lay down

Chicago's weekly lay down

DMC is a pet-loving environment and is welcoming to them as long as it makes sense, so no pet alligators, I would assume. Our pets also bring a lot of culture to the workplace as they are another topic of conversation and way for employees to interact with each other. With the kind employees at DMC, you can even find a dogsitter if a less dog-friendly meeting must occur at the office.

This policy brings joy to many of DMC’s offices and many office settings could benefit from dogs in the workplace. In DMC’s case, this is one of the best parts of going into the office!

Learn more about DMC’s culture and explore our open positions!

Thomas Panek - Tue, 06 Feb 2024 22:01:00 GMT
DMC Chicago's Night at the Museum
https://www.dmcinfo.com/latest-thinking/blog/id/10562/dmc-chicagos-night-at-the-museum

DMC's Chicago office recently gathered to celebrate the holidays at the Museum of Contemporary Art.

After a dinner of Spanish tapas and Mexican street food, guests were free to peruse the galleries of the MCA's current exhibits: Rebecca Morris: 2001–2022; entre horizontes: Art and Activism Between Chicago and Puerto Rico; and Atrium Project: Lotus L. Kang.

The evening ended with a special holiday edition of Frank's monthly update, “Let Me Be Frank,” and dessert. 

For some, the fun continued with games and drinks at the Headquarters Barcade after party.

Learn more about DMC's corporate culture!

Aizha Thirus - Tue, 06 Feb 2024 18:23:00 GMT
MagneMotion Guide Part 9: Traffic Jam Prevention
https://www.dmcinfo.com/latest-thinking/blog/id/10558/magnemotion-guide-part-9-traffic-jam-prevention

Previous Installments

In my previous blog, MagneMotion Guide Part 8: Simulation, we discussed track simulation, a valuable tool MagneMotion offers to help test out MagneMotion controls before a track is even assembled.

In this blog, we’ll discuss in detail how to manage traffic in your MagneMotion systems to prevent traffic jams and optimize your track’s efficiency.

Typical Traffic Issues

There are a few typical sources of traffic jams in a MagneMotion track:

Source 1

The first, and often the most frequent, type of traffic jam occurs when too many vehicles are sent to a branch of track. This causes the branch to overflow with vehicles and block off the rest of the track.

a traffic jam on a magnemotion track

Source 2

Another less frequent but problematic jam can occur when too many vehicles are trying to circulate through a central loop of track in the system. This can create a situation where no vehicles can pass through.

A traffic jam on a Magnemotion track

Source 3

The final common case occurs when vehicles need to be sent backwards on your track. When not done carefully, this can result in deadlocked vehicles that get in each other’s way.

A traffic jam on a Magnemotion Track

Below, we’ll go through a way to organize your PLC program to prevent any such traffic jams from occurring.

Step 1: Monitoring Stations

The first step is to organize how the PLC program handles calls of the Station AOI.

First, create a data type that will contain the station AOI instance along with the following information:

  • A Boolean tag to enable the station
  • A Boolean command to send vehicles from the station
  • A DINT to store the next station vehicles will be sent to
  • A DINT to store the number of vehicles headed to the station
  • A DINT to define the number of vehicles allowed to go to the station at a time

Now, create an arrayed tag that will store all of this information for each station on your track.

MagneMotion arrayed tag

Keeping all of the station information in a single centralized location will make it much easier to handle any complex routing logic.
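
For readers who prefer text to screenshots, a rough sketch of what that UDT and arrayed tag could look like is shown below in generic IEC 61131-3 form. In Studio 5000, the UDT itself is built in the Data Types folder and the array is created as a controller tag; every name here, including the AOI type, is a placeholder rather than the library's exact naming.

TYPE udtStation :
STRUCT
    Station             : MagneMotion_Station; // Station AOI instance (type name is a placeholder)
    bEnable             : BOOL;                // enable the station
    bSendVehicle        : BOOL;                // command to send a vehicle from this station
    iNextStation        : DINT;                // next station vehicles will be sent to
    iNumIncoming        : DINT;                // vehicles currently headed to this station
    iNumAllowedIncoming : DINT;                // vehicles allowed to head here at one time
    bStationIsFull      : BOOL;                // set in Step 3 when the station cannot accept more
END_STRUCT
END_TYPE

// Arrayed tag covering every station on the track (size it to your station count)
stStations : ARRAY[0..9] OF udtStation;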

Next, you need to create some structured text logic to monitor each station for incoming vehicles.

structured text logic

By looping through the mover array in the MagneMotion device handler, you can check how many vehicles are headed to any given station. This will give you valuable insight into which sections of your track might be in danger of developing traffic jams without having to deal with the tricky logic of manually keeping track of all of the vehicles sent to a station.
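
A minimal sketch of that monitoring pass is shown below, assuming the device handler exposes a mover array whose elements report whether the mover is active and which station it is headed to; those member names (aMovers, bActive, iDestStation) are assumptions for the sketch, not the actual library tags.

// i, j, and iDest are DINT tags; cNumStations and cNumMovers hold the track's station and mover counts
FOR i := 0 TO cNumStations - 1 DO
    stStations[i].iNumIncoming := 0;          // clear last scan's counts
END_FOR;

FOR j := 0 TO cNumMovers - 1 DO
    IF aMovers[j].bActive THEN                // mover is present on the track
        iDest := aMovers[j].iDestStation;     // station the mover is headed to
        IF (iDest >= 0) AND (iDest < cNumStations) THEN
            stStations[iDest].iNumIncoming := stStations[iDest].iNumIncoming + 1;
        END_IF;
    END_IF;
END_FOR;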

Step 2: Monitoring Paths

The second step in this setup is to create a UDT similar to the one you made for stations. This path UDT will contain the following information:

  • A DINT to store the number of vehicles currently on the path
  • A DINT to define the size of the path
  • A Boolean for whether or not the path is full

Path UDT in Magnemotion

Next, you’ll create another piece of structured text code to monitor the contents of each path.

Structured text code for monitoring path contents.

The above code will determine how many vehicles are currently on each path, allowing you to determine which sections of your track are already filled with vehicles.

For efficiency’s sake, you can combine the path monitoring and station monitoring code into a single for loop.
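
Assuming a path UDT with iNumContents, iPathSize, and bPathIsFull members (names chosen for this sketch), the path-monitoring pass could look like the following; as noted above, it can share the same FOR loop as the station monitoring.

FOR p := 0 TO cNumPaths - 1 DO
    stPaths[p].iNumContents := 0;             // clear last scan's counts
END_FOR;

FOR j := 0 TO cNumMovers - 1 DO
    IF aMovers[j].bActive THEN
        iPath := aMovers[j].iCurrentPath;     // path the mover currently occupies (assumed member name)
        IF (iPath >= 0) AND (iPath < cNumPaths) THEN
            stPaths[iPath].iNumContents := stPaths[iPath].iNumContents + 1;
        END_IF;
    END_IF;
END_FOR;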

Step 3: Check if a Station is Full

So now that your PLC code is monitoring the status of all your stations and paths, how do you use that information to robustly manage your MagneMotion track?

If you populate iPathSize and iNumAllowedIncoming values for your paths and stations, you can use some fairly simple logic to determine when a station is able to accept any new material.

First, create a simple for loop to determine whether any paths are filled with vehicles.

Structured text for MagneMotion loop

Next, create some complementary logic for determining whether any stations have too many vehicles being sent to them.

Loop logic for Magnemotion track.

Note that, to check if a station is full, we must also check to see whether the path it’s on is full. This way, we will account for cases when a station is being backed up by vehicles that are going to another station.

Also note that, if a path’s size or a station’s number of allowed incoming vehicles is 0, we will assume that the station/path is not full. This allows us to ignore paths and stations that will never cause traffic issues and reduce some of the overhead in setting up your track’s program.
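
One possible reading of those rules in structured text is sketched below, treating a size or allowed-incoming value of 0 as "never full." The iPathIndex member, identifying the path a station sits on, is an extra assumption added for this sketch.

// A path is full once its contents reach its configured size (a size of 0 means never full)
FOR p := 0 TO cNumPaths - 1 DO
    stPaths[p].bPathIsFull := (stPaths[p].iPathSize > 0) AND
                              (stPaths[p].iNumContents >= stPaths[p].iPathSize);
END_FOR;

// A station is full when too many vehicles are inbound or when its path is backed up
FOR i := 0 TO cNumStations - 1 DO
    stStations[i].bStationIsFull := (stStations[i].iNumAllowedIncoming > 0) AND
        ( (stStations[i].iNumIncoming >= stStations[i].iNumAllowedIncoming)
          OR stPaths[stStations[i].iPathIndex].bPathIsFull );
END_FOR;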

Step 4: Sending Vehicles

Since you have information on each station, you can use this information to check a station’s availability before you send another vehicle to it. You can do this piecemeal by just checking the bStationIsFull bit before directing vehicles to particularly problematic stations, but I find it’s usually better to integrate this check into your station logic for a more robust solution.

Station logic overview in MagneMotion

Compare the above logic to the station logic in MagneMotion Guide Part 4: Using Path and Station AOIs. Here, we will check whether the target destination is full before we send the current vehicle at this station to any destination. This way we can be confident that we won’t cause any traffic jams with the vehicles that are leaving this station.

Also note how we latch the bit for checking if the destination station is full as we send a vehicle to it. This ensures that a station will only be sent one vehicle in a single PLC scan so that we don’t send multiple vehicles to one station before its capacity can be re-evaluated. If the station has room for multiple vehicles, the bStationIsFull bit will be reset in the next scan, allowing another station to send its vehicle.
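
A sketch of that send check is shown below; it sits inside the per-station loop, and the actual Station AOI release command is omitted since it depends on how your station logic from Part 4 is structured. Tag names are placeholders.

iDest := stStations[i].iNextStation;

IF stStations[i].bSendVehicle AND NOT stStations[iDest].bStationIsFull THEN
    // ...command the Station AOI here to release its vehicle toward iDest...

    stStations[i].bSendVehicle := FALSE;

    // Latch the destination as full for the rest of this scan so no other station
    // also sends to it; if it still has room, the bit clears on the next scan
    stStations[iDest].bStationIsFull := TRUE;
END_IF;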

Step 5: Preventing Loop Blockage

The issue of too many vehicles stopping up a loop of track was also brought up in MagneMotion Guide Part 7: Traffic Lights. Here we will go into more detail about how to actually develop the logic to trigger traffic lights.

Traffic jam on a Magnemotion Track

Using the path information compiled in Step 2, we can easily determine how many vehicles are in the loop of track.

From here, it is fairly straightforward to set up a basic system where, when the number of vehicles in the loop exceeds a set value, we turn on the traffic lights at each entrance to the loop. At that point, no more vehicles can enter the loop, while vehicles already inside can still exit freely.

Structured text code.
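
A minimal sketch of that trigger follows, assuming the central loop is made up of a known list of path indices and that the traffic lights from Part 7 are commanded by a single Boolean; all names here are invented for the sketch.

// Total up the vehicles currently circulating in the central loop
iLoopCount := 0;
FOR k := 0 TO cNumLoopPaths - 1 DO
    iLoopCount := iLoopCount + stPaths[aLoopPathIndexes[k]].iNumContents;
END_FOR;

// Hold the entrance traffic lights red while the loop is at or over its limit;
// vehicles already inside the loop are still free to exit
bLoopEntrancesRed := (iLoopCount >= cMaxVehiclesInLoop);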

Step 6: Backup Permissions

If you find you need to send a vehicle in the reverse direction down your track, you can also do this safely by monitoring path contents.

For example, say you have a vehicle on a branch of your track that needs to back up onto your main loop of track before continuing on.

If you set up a traffic light on the main section of track and turn it red while there are no vehicles on the path you need to back up onto, you can be confident that you can back up your vehicle without causing a deadlock with a vehicle headed the other way.

MagneMotion track Overview
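
Sketched in the same style, the backup permission reduces to a couple of lines: request the main-line light, then only grant the reverse move once the target path is empty. All tag names below are placeholders.

// Hold oncoming traffic on the main line while a reverse move is pending
bMainLineLightRed := bBackupRequested;

// Grant the backup only once the light is red and the target path has no vehicles on it
bBackupPermitted := bBackupRequested AND bMainLineLightRed AND
                    (stPaths[iBackupPathIndex].iNumContents = 0);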

Overview

Now that your PLC program is set up to monitor paths and stations, you are ready to further develop your MagneMotion track for maximum efficiency. These station and path checks are not meant to compose an entire routing system on their own, but they are valuable building blocks for defining your own.

While only a few use cases for station and path monitoring have been shown here, the general tools these tactics provide can be used in a variety of ways and different systems to help guarantee that your track runs smoothly.

Learn more about DMC's MagneMotion expertise and contact us for your next project. 

]]>
Jack Haskell Tue, 06 Feb 2024 18:17:00 GMT f1397696-738c-4295-afcd-943feb885714:10558
https://www.dmcinfo.com/latest-thinking/blog/id/10563/dmc-quote-board--february-2024#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10563 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10563&PortalID=0&TabID=61 DMC Quote Board - February 2024 https://www.dmcinfo.com/latest-thinking/blog/id/10563/dmc-quote-board--february-2024 Visitors to DMC may notice our ever-changing "Quote Board," documenting the best engineering jokes and employee one-liners of the moment. 

Learn more about DMC's company culture and check out our open positions!

]]>
Sofia Sandoval Tue, 06 Feb 2024 16:47:00 GMT f1397696-738c-4295-afcd-943feb885714:10563
https://www.dmcinfo.com/latest-thinking/blog/id/10561/fun-at-dmc--volume-20#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10561 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10561&PortalID=0&TabID=61 Fun at DMC - Volume 20 https://www.dmcinfo.com/latest-thinking/blog/id/10561/fun-at-dmc--volume-20 Check out all the fun DMCers have had over the past month! 

Chicago 

The Chicago office went skiing at Alpine Valley

Chicago Ski Trip

DMC Chicago also gathered to celebrate the holidays at the Museum of Contemporary Art.

Washington, D.C.

Washington, D.C. DMCers had their holiday party at Maydan.

DMC DC Holiday Dinner

DMC DC Holiday Dinner

San Diego

DMC San Diego went whale watching! 

Whale watching

Whale watching

San Diego DMCers also went to the USS Midway Museum!

USS Midway

USS Midway

St. Louis 

St. Louis DMCers had their holiday office party! 

St Louis holiday office party

Seattle 

DMC Seattle went to Ballard Brewery Brew and played Beer Bingo.

Seattle Ballard Brewery

Boston

DMC Boston went to Boxaroo and did an escape room! 

DMC Boston at Boxaroo

Boston also had their holiday party at Puritan & Company. 

DMC Boston Holiday Party

Denver

DMC Denver went to the rodeo! 

DMC Denver Rodeo

Dallas

DMC Dallas went to Game Show Battle Rooms! 

DMC Dallas Game Show

Learn more about DMC's culture and explore our open positions

]]>
Jane Rogers Mon, 05 Feb 2024 17:16:00 GMT f1397696-738c-4295-afcd-943feb885714:10561
https://www.dmcinfo.com/latest-thinking/blog/id/10545/converting-plc-5-to-controllogix-with-rslogix-project-migrator--part-one#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10545 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10545&PortalID=0&TabID=61 Converting PLC-5 to ControlLogix with RSLogix Project Migrator - Part One https://www.dmcinfo.com/latest-thinking/blog/id/10545/converting-plc-5-to-controllogix-with-rslogix-project-migrator--part-one Upgrading legacy systems is a nerve-wracking experience. Will the hardware swap go smoothly? Will wiring errors be introduced during the swap? Just how many errors will the programmer make when converting hundreds (or thousands) of rungs of logic? And most importantly, how much downtime will this cause!?

Thankfully, some PLC OEMs offer software and hardware packages that make this daunting task much more approachable. One such tool is Rockwell’s RSLogix Project Migrator.

In this series of blog posts, I will walk through a PLC 5 to ControlLogix conversion using this tool. As you’ll learn, it substantially eases the conversion process, but there are still many gotchas along the way to keep in mind. So get your PLC 5 program ready, and let’s start!

Preparing the RSLogix 5 file for Migration

Delete Unused Memory

Before using the Project Migrator, we need to prepare the PLC 5 program for migration. First up is ‘Deleting Unused Memory,’ an optional but helpful step to save controller memory. This works by removing unused datafile elements from the program.

For example, the integer data file below is 30 elements long; however, the program only references N9:0 and N9:19.

Fine N9 Usage window

Deleting unused memory will reduce the size of this data file by 10 elements, removing N9:20 through N9:29 as shown below.

Delete unused memory window

To remove unused memory:

  1. Begin by opening the program in RSLogix 5.
  2. In the upper menu ribbon, navigate to Tools > Delete Unused Memory.
  3. Click ‘Preview’ to preview which memory will be deleted.
    1. Prior to any deletions, confirm that the elements and files are not written to by SCADA or any other external systems!
  4. Click ‘Start’ to delete unused memory from the program.

After deleting unused memory, datafiles will be sized to include all memory up to their last used element. My integer datafile now includes just 20 elements, including the last used element N9:19. Be careful not to delete files that appear to be unused but are in use by the SCADA system.

N9 Usage window

Remove any SFC or STX routines from PLC 5 Project

Export the Logic to a PC5 file

Prior to conversion, the project must be exported to an extension accepted by the Project Migrator Wizard. For PLC 5’s, that extension is ‘.PC5’. If you want to include comments and symbols in the conversion, you will also need to export a TXT file. This can be done separately, but we’ll proceed with exporting the PC5 and TXT files at the same time.

To export the PC5 and TXT files:

  1. In RSLogix 5, navigate to File > Save As.

Save program as window

  2. In "Save as type", first select X5 so the "Export database" option is no longer greyed out.
  3. Check the "Export database" checkbox.
  4. Under "Export File Type," select A.B. 6200 for PLC 5 programs.
  5. Back in "Save as type", change the selection to PC5.
  6. Select "Save" and the Export PC5 Format dialog box will appear.
  7. For the Export Mode, select "Complete Program" and save.
  8. Leave all 3 Export Options checked.
  9. Select "Ok" to complete the export.

Converting the Project Using the Project Migrator Wizard

With the PC5 export complete, you can open up RSLogix Project Migrator and begin the migration! The first screen you’ll see in the Project Migrator is the file selection screen. Here you will select the file type (in our case, PLC-5), browse for the PC5 export you created in the previous step, and hit ‘Next’.

RSlogix project migrator step 1

Step 2 of the migration process gives you the option to create alias tags for existing PLC-5 symbols. This is dependent on each individual application, so choose whichever is appropriate for yours.

RSlogix project migrator step 2

Step 3 parses the PC5 and TXT files for exporting to Logix 5000. If the migrator runs into any issues while parsing the files, a popup will appear that highlights the syntax error and provides an editing window to correct it. 

RSlogix project migrator step 3

Once the PC5 and TXT files are successfully parsed, you can choose a destination for the output file, the controller type, and the firmware version. If the controller and firmware you will be using are not available, you can easily change them in RSLogix 5000 once the migration has been completed.

RSlogix project migrator step 4

The next screen shows the status of the migration to the l5k file. This typically takes just a few seconds. From here, click ‘Launch RSLogix 5000’ to begin the l5k to ACD import.

rslogix project migrator step 6

After clicking ‘Launch RSLogix 5000’, Logix 5000 will open, and you will have the option to name your ACD file and select a firmware version. Because revision 27 was the highest available firmware option in the project migrator, I took this opportunity to choose the final revision for the project. In my case, revision 33.

Once you have selected the file name and firmware revision, click ‘Import’ and Logix 5000 will import the migrated l5k file to an executable ACD file extension.

Save imported project as window

After completing the import, you may see the error popup below, but it has no effect on the l5k to ACD import. Just click ‘Ok’ and start exploring your newly migrated Logix 5000 program!

At this point, you can update the controller type if the correct controller was not available in the migrator.

Logix designer window

Congratulations, you have successfully migrated your PLC 5 program to Logix 5000; however, you may notice that when you try to compile your program, you get a bunch of Program Conversion Errors (PCEs), as shown below.

error window

When the migrator encounters a rung of logic that requires extra attention, it inserts a PCE error. These are introduced for a variety of reasons. For example, PLC 5 programs use 0.01s and 1.0s time bases for their timers, but Logix 5000 uses 0.001s base timers.

While the migrator updates the presets of a timer if it is hardcoded, it may not catch presets that are variable due to SCADA setpoints or variable due to calculations. It is up to the programmer to ensure the original logic is preserved for each PCE error. This will be covered in part 2 of this blog series.
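
As an example of the kind of manual fix a PCE can call for: if a PLC 5 timer ran on a 0.01-second time base and its preset came from a SCADA setpoint rather than a hardcoded value, the migrated logic needs that preset scaled into milliseconds before it reaches the Logix timer. A structured text sketch, with illustrative tag names only, might look like this:

// The PLC 5 preset arrived in 0.01 s units; Logix 5000 timer presets are in milliseconds (0.001 s)
MyTimer.PRE := iScadaPresetHundredths * 10;

// A timer that used the 1.0 s time base would be scaled by 1000 instead:
// MyTimer.PRE := iScadaPresetSeconds * 1000;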

Learn more about DMC's Legacy PLC Upgrade and Conversion Services and contact us for your next project.

]]>
Danny Langley Mon, 05 Feb 2024 16:11:00 GMT f1397696-738c-4295-afcd-943feb885714:10545
https://www.dmcinfo.com/latest-thinking/blog/id/10497/an-introduction-to-node-red-processing-and-sending-plc-data-to-the-cloud#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10497 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10497&PortalID=0&TabID=61 An Introduction to Node-Red: Processing and Sending PLC Data to the Cloud https://www.dmcinfo.com/latest-thinking/blog/id/10497/an-introduction-to-node-red-processing-and-sending-plc-data-to-the-cloud In this blog, I'd like to introduce a prominent software in the realm of Industrial IoT applications: Node-Red. In a sentence, Node-Red is a graphical development tool that is quickly becoming the industry standard for IIoT applications. It's referred to as a "graphical development tool" because you are essentially writing JavaScript code with nodes instead of text. Each node is a visual programming element. Its capabilities for PLC data-processing are broad but, fundamentally, the software is very straightforward to use.

Before we discuss the functionalities of Node-Red, it's important to address one question: when would I use Node-Red? Node-Red is most commonly used in the following cases:

  1. A client wants to store PLC data in the cloud. This could be logging data in an Azure Hub database and then using that data to generate PowerBI models.
  2. A client wants to be remotely alerted when something in the PLC logic occurs. Perhaps they want to be notified when a temperature sensor reads a certain value and take action accordingly.
  3. Both of the above!

There are edge cases where Node-Red is used to remotely control PLC tags, but this is typically not recommended. In short, any time you have PLC data that you want to process and send to the cloud (whether to store it in a database or notify you directly), Node-Red will do the job.

It's important to note that Node-Red needs some hardware to connect to your PLC. This could be a PC or IoT device that is connected to your network. In my case, I used a Siemens IoT2050 Advanced that came with Node-Red pre-installed.

The overall layout of Node-Red is relatively simple. On the left, you have your toolbox of "nodes". Each provides a different functionality, and there is a large library of community-made nodes that you can import using the "manage palette" option in settings.

In the center, you have your working area. This is where you will drag in nodes to write your processing logic. Notice the tabs at the top of the working area; each tab is called a "flow". These are like pages in an Excel sheet or function blocks in a PLC program. Each flow can store tag data in its own memory, like the internal memory of a PLC function block.

The various icons on the top-right corner provide you with a variety of details about your project, but the most important one to have open is the debug window. The debug window acts similarly to the console window in any programming environment. This window is where you will see the outputs of your code. You can locate this by navigating to the beetle icon.

The last important feature of the Node-Red layout is the big red "Deploy" button. This runs your code. Node-Red behaves similarly to a PLC; once you deploy your logic, it will run continuously until you re-deploy the program with any changes.

Debugging: Inject Node & Debug Node

Before we can discuss some basic Node-Red logic, we should understand two essential debugging/early development nodes: Inject and Debug.

Node-Red logic is initiated by a message being sent. In practice, this might be a PLC Boolean flipping to "True". For troubleshooting purposes, it's useful to be able to inject a message whenever you want to initiate a flow. This is where the inject node becomes crucial for initial logic development.

While the inject node is essential for starting a flow, the debug node is essential for observing the output of a flow. It would be impossible to troubleshoot without knowing what your output is. Wiring your node logic into a debug node sends the message payload into the debug window. Any node's output can be wired into a debug node. This gives you the ability to observe your message payload at every stage of your processing logic.

Message Structure

There's one more topic I'd like to discuss before we get into some basic logic examples. I believe it's important to understand the structure of Node-Red messages. In the previous paragraph, I mentioned that a Node-Red flow could be triggered by a PLC Boolean flipping to TRUE. In Node-Red, the message would consist of two parts. The first is the "topic". When you configure your PLC tag import node, you'll assign a label for the PLC tag. Whatever you choose to label your tag will become the message topic.

The second part of the message is the "payload". This is the contents of the message. For a Boolean, this would be TRUE or FALSE. Node-Red message payloads can come in many forms. For more complex payloads, the payload is often a JSON object or JavaScript string. This message structure is important when trying to do message customization.

Each Node-Red message is a JavaScript "msg" object. Within various nodes, you'll frequently see "msg.payload" being referenced and/or altered. This should make some sense intuitively. The important contents of the PLC tag will live exclusively in the "payload" key of the "msg" object. Most data-processing logic will, then, deal exclusively with the message payload. So why do we need this "topic" element? Message topics can be very useful in data filtering contexts, and I will give you a simple example of such contexts in the following section.

Data Filtering: Switch Node

Now, we can finally get into some basic Node-Red logic.

The most basic level of data processing is filtering. In the first rung, I'm injecting the integer 3 to initiate my flow. The yellow node you see is a "switch" node. The switch node allows you to filter data based on conditions. In this case, I'm only allowing the payload to continue through the switch node if its value exceeds 1000. Clearly, 3 would not meet the condition and thus the debug node would output nothing to the debug window. 

The above image shows the configuration of the switch node. Note how I'm checking the "msg.payload" property against the condition ">= 1000". The second and third rungs of the data filtering logic add an element of message topic filtering. In those examples, I only want data with the topic "RPM" to pass. As such, I have added a switch node in series that is configured to check whether "msg.topic" equals "RPM".

Payload Configuration: Change Node

While data filtering is the most fundamental data processing function, outputting a naked "TRUE" or "1032" to your database is not usually ideal. Typically, you want to take your PLC data and either combine it with other tags or transform it into a more human-friendly form. For example, instead of a contextless payload of "74," I might instead prefer the payload to say: "Room Temp (F): 74." The "change" node helps you do just that. 

In this flow logic, we have some Boolean input giving us a "TRUE" or "FALSE". We use a switch node to route the flow according to the payload value. If the value is "TRUE," we send the message through the first output; otherwise, we route the message to the second output. The change node looks very similar to the switch node, but its function is quite different. As the name implies, we are usually changing the payload itself.

The change node here is configured to change the value of the payload "TRUE" to the string "The motor is on". While this example shows a simple functionality, the change node is extremely powerful. This node enables you to save payloads to Node-Red memory, construct payload objects, and much more. I will provide an example of saving to Node-Red memory later on in this blog.

In Practice: S7-In Node & MQTT-Out Node

The inject and debug nodes are useful for troubleshooting, but they do not have any use during actual operation. In practice, we need to use a PLC tag-importing node and some network-out node. In this example, I've used the Siemens S7-In node and the MQTT-Out node. These are the entry and exit points of Node-Red. Raw PLC data comes in through the S7-In node, and processed payloads are sent out through the MQTT-Out node.

Instead of an inject node in the previous examples, you would use the corresponding PLC-In node for your system. The configuration of this PLC-In node is simple: fill in the IP address of the PLC in the "Connection" tab and the PLC tag address(es) in the "Variables" tab.

When addressing PLC tags, you should reference this Node-Red documentation. You can configure this node to inject the PLC tag every so often or only when the tag changes value.

Additionally, you can configure the node to inject every configured PLC tag or one specific tag. Here, I've configured it to only inject the "bDMC" tag that sits in a PLC data block. It is worth noting that for a Siemens PLC, Node-Red can only pull tags from an unoptimized data block. 

The pink node on the right is the MQTT-Out node. Notice how it's wired in parallel to the debug node. This way, whatever is sent through the MQTT-Out node will also show up in the debug window. Each network protocol node is slightly different, so I won't dive into the MQTT node specifically. You shouldn't run into anything out of the ordinary when configuring these nodes.

Storing Data in Node-Red Memory

The last thing I want to touch on is the data storing functionality of Node-Red. Without the use of Node-Red's internal memory, the extent to which you could customize your payloads would be severely diminished.

Consider the following example: I have PLC tags that record the temperature and humidity of a certain room. I want to trigger a flow each time my temperature exceeds a certain threshold. Following the ideas so far, we could easily write a flow that sends the temperature payload to some online database, but what if I wanted the humidity to be sent along with the temperature data? Here, we would need to use Node-Red's capability to store data in flow memory.

This is a basic flow that pulls in every configured PLC tag and stores it in "flow" memory. Node-Red has a "flow" and "global" object in which you can store your tags. These tags are persistent across executions of the program.

As the names imply, you can only access data saved within the "flow" object while inside your flow. Data saved within the "global" object can be accessed across all flows. Remember that "flows" are similar to pages in an Excel sheet. In this example, I'm storing every PLC tag within a "myData" object. This object is nested within the "flow" memory. To call the saved tags, I simply address them by appending their topic (whatever you named them) to the flow.myData address.

This is the configuration of the change node used to store data. Note that the "flow." prefix is selected from the dropdown menu; it defaults to "msg.".

Now that you know how to filter data, configure payloads, and store data in Node-Red, you possess the fundamentals for any PLC data-processing needs. 

Learn more about our Industrial IoT Solutions expertise and contact us for your next project!

]]>
Perry Lin Thu, 01 Feb 2024 07:04:00 GMT f1397696-738c-4295-afcd-943feb885714:10497
https://www.dmcinfo.com/latest-thinking/blog/id/10469/streamline-rockwells-application-code-manager-with-excel#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10469 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10469&PortalID=0&TabID=61 Streamline Rockwell’s Application Code Manager with Excel https://www.dmcinfo.com/latest-thinking/blog/id/10469/streamline-rockwells-application-code-manager-with-excel Rockwell’s Application Code Manager (ACM for short) is well known throughout the controls industry. ACM aids in bulk loading PLC object instances for many types of projects ranging from small repeatable process skid projects to DCS applications.

At first glance, using ACM can seem daunting or tedious when adding one object at a time to build your database, but I’m here to present you with a way to quickly import/export multiple object instances at once using Excel.

With most projects, the customer or integrator will provide an I/O list containing names, descriptions, alarm setpoints, and more. This list has a lot of key information that is usually entered manually in Studio 5000 or via the HMI. This can take a lot of time and introduce typos and mistakes. 

ACM takes this data and enters all of it into .l5x and .ACD files, allowing one to begin programming actual logic quickly. The steps below will instruct you how to streamline your object instantiation and start actual logic programming.

The Process

Once you have the initial project created with your controller, rack, and I/O, you can start assigning instances.

To help move things along, add one instance of every object that you will use for the project. This builds the necessary examples and tabs in the ACM Excel file.

Once the first instances of your objects have been inserted into the project, right-click the controller and select the Export option.

Select the Complete Project radio button, and then click the Export and Open button. 

After a few moments, a Windows save dialog will open and ask you to save the export into a location of your choosing. Click Save and proceed.

Once finished, an ACM Excel file will open, ready to be modified with the rest of your object instances.

Please note that some of these example objects have many columns of data that can be configured, but I will be focusing on the main data inputs that are usually necessary for every type of project. The example below will go through analog inputs. This process can be used for all object types and instances.

Legend -

  • Name – Tag name of the object instance that will be imported into the Studio5000 program.
  • Task and Program – Studio5000 task and program that the object will be inserted into.
  • Description – Object description that will be used for HMI display, alarm messages, and anything else description related.
  • P_Cfg_PVEUMax – The maximum engineering value.
  • P_Cfg_PVEUMin – The minimum engineering value.
  • P_Cfg_PVEU – Engineering units for the instrument.
  • P_Cfg_InpRawMax – The maximum raw input scale, in this case 20mA. This depends on how the input/output card has been configured.
  • P_Cfg_InRawMin – The minimum raw input scale, in this case 4mA. This depends on how the input/output card has been configured.
  • P_Cfg_HiHiLim – High High alarm limit.
  • P_Cfg_HiLim – High alarm limit.
  • P_Cfg_LoLim – Low alarm limit.
  • P_Cfg_LoLoLim – Low Low alarm limit.
  • P_Inp_PV_Address – I/O Assignment. This is the tag for the actual input/output card and channel number. There will be no need to map I/O in Studio5000 after ACM creates the .acd file.

With your I/O list, you can now simply copy and paste the information into the ACM format. This can include alarm setpoints, descriptions, instance names, engineering units, raw units, and anything else that needs to be, and can be, configured ahead of time.

Once you have filled out the Excel sheet, you can import the sheet back into ACM. Right click the controller in the class view and select Import.

Select Replace – Overwrite project, navigate to the Excel file you have just updated, and click Import.

After the import finishes, you can expand the object class and see the newly imported object instances. At this point you are ready to generate the initial .ACD file from ACM.

Now, right click on the controller and select Generate Controller.

A popup will appear. Select where you would like to save the .l5x and .ACD files, then click Generate.


Once the generation finishes, you will have a Studio5000 program file that already has all instances created along with their tags. At this point, all that is left to do is project-specific programming like sequences, shutdowns, and anything else required. 

As you can see, using Excel with ACM can quickly streamline projects of any size and help integrators complete projects in record time.

Learn more about DMC's Rockwell Automation partnership and contact us today for your next project!

]]>
Ray King Fri, 26 Jan 2024 15:02:00 GMT f1397696-738c-4295-afcd-943feb885714:10469
https://www.dmcinfo.com/latest-thinking/blog/id/10556/sql-server-performance-troubleshooting#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10556 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10556&PortalID=0&TabID=61 SQL Server Performance Troubleshooting https://www.dmcinfo.com/latest-thinking/blog/id/10556/sql-server-performance-troubleshooting Introduction

You have a SQL Server database, and one or more of your queries are running slow. You need to figure out why. You're not going to find the solution to your specific problem in this blog post, but you will find the first troubleshooting step that may literally save you from spending entire days going down the wrong path to diagnose and solve the problem.

After reading this, you will know the three categories that a slow query can fall into, and how to figure out which of those categories fit your problem query. From there, you'll be able to narrow down your research and other troubleshooting steps to find a solution to your problem faster.

Category 1: The query runs slowly

In my experience so far, most slow queries fall into this category. Queries in this category take a long time to complete because the steps SQL Server takes to execute them (the execution plan) are resource-intensive and/or time-intensive. For example, your query may be doing a Clustered Index Scan on a table with tens of millions of rows, and that can take a long time.

The individual causes of this kind of slow query can include, but are not limited to:

  1. Missing indexes
  2. Poorly written SQL
  3. Excessive indexes (in the case of inserts, updates, deletes)
  4. Parameter sniffing

The actual solution to your problem will be vastly different depending on what the exact cause is, but, if you can determine that your query actually runs slowly, you've already narrowed down your research into the cause and the solution.

Category 2: The query compiles slowly

When you run a query, SQL Server can't execute it until it has an execution plan. An execution plan is the set of steps that SQL Server will take to actually serve your query — the actual, physical and logical steps. This process is called compiling an execution plan. For example, let's say you run the following query:

 SELECT * FROM dbo.Users WHERE CreatedDate >= '2023-01-01' 

To actually execute that query, SQL Server may compile an execution plan that looks something like this:

  1. Non-clustered index scan on the `IX_Users_CreatedDate`
  2. Key lookup with the clustered index
  3. Nested loops to join the results of steps 1 and 2
  4. Parallelism (Gather streams)
  5. SELECT

For most queries, it takes very little time to compile the execution plan, e.g. 30 milliseconds (ms). The query above may only take 1ms to compile; however, more complex SQL queries may take much longer to compile. The worst I've seen in production was 30 seconds, which is abysmal! In fact, when a query has such a long compile time, you may even see a "compilation time out," which means that SQL Server stopped trying to come up with the most optimal plan and just said, "Forget it, we'll roll with what we have so far." With such a query, it may take very little time to actually run, but that up-front cost of compiling the query may be costing you big time: especially if your applications are using the `OPTION(RECOMPILE)` query hint so that the execution plan gets compiled every time.

If it turns out your problem query is returning slowly because of a long compile time, this is actually good news because this can be incredibly easy to solve. It's likely that you're one search engine search away from a solution that will take minutes to implement.

NOTE: A long compile time may not have much impact on your system because SQL Server only needs to compile an execution plan if it doesn't already have one cached in the plan cache. The result is that, for a particularly complex query, it will take 20 seconds to compile the execution plan the first time that query runs, but then SQL Server doesn't need to spend any time compiling a plan for subsequent executions of that same query; however, an execution plan can get flushed from the plan cache for a number of reasons, e.g. the statistics on one of the tables involved in the query got updated. If events like that happen particularly often, then that will turn long compile times into a real problem for end users.

In my time troubleshooting SQL Server performance issues, compile time is pretty rarely the cause of a slow query, but when compile time is the problem, you can easily spend hours or days spinning your wheels if you don't realize the query is just taking a long time to compile.

Category 3: The query is waiting for something

Your production server probably has a lot going on. The query you're running is not the only thing SQL Server has to worry about. There may be some other query that needs to update the same data you need to read, or other queries are hogging all the CPU or RAM on the system, etc. These things, along with a host of other events, will cause your query to have to wait for a lock to be released or some RAM to free up before it can even get started.

In other words, your slow query is not actually slow — it's simply having to wait a long time before it's allowed to start running — but the effect on your applications and your end users is the same. The big difference is that your troubleshooting efforts will probably have to be focused on other queries in order to fix this query. So, if your query is waiting on some lock to be released, you may need to tune a different query so it takes out fewer locks or holds those locks for less time.

If there are many different queries across your system that are running slowly or timing out, chances are that your problem falls into this category.

How do I determine which category my query falls into?

The best way to find out which category a specific slow query falls into is to reproduce the issue by running the query manually in SQL Server Management Studio and capturing the actual execution plan. The execution plan will have very specific information that will help you categorize your slow query. I'll provide instructions for how to do this, then show some examples.

How to get the actual execution plan

  1. Open SQL Server Management Studio and open a query window for the database in question.
  2. Press `CTRL + M` to enable capturing the actual execution plan for your query. There's also a button directly above the query window that you can click — it's the button that says "Include Actual Execution Plan" in a tooltip when you hover over it.
  3. Run your query.
  4. When it's done, there will be an "Execution Plan" tab next to the "Results" and "Messages" tabs below the query window. Click the "Execution Plan" tab to view the execution plan.
  5. Anywhere in the execution plan tab, right-click and select "Show Execution Plan XML". The XML view is the best way to get at the specific information we're looking for.

Analyzing the execution plan

Checking for waits

Once you have your execution plan XML open, you can examine it to see what the query is spending time doing. First, do a `CTRL + F` to do a search for "WaitStats". When you do that search, you may see something like this:

 <WaitStats>     <Wait WaitType="LCK_M_S" WaitTimeMs="38265" WaitCount="1" /> </WaitStats> 

This section of the XML shows us a list of "waits" that happened while trying to run the query. In this case, we see we had an "LCK_M_S" type of wait, and the query waited for a whopping 38 seconds! This query did indeed take 38 seconds to run in SSMS, so I've found the cause of my problem. Now, if I do a search engine search for "SQL Server LCK_M_S", we find that this means that my query needed to take a "shared lock" on a resource (in this case a table) and it had to wait 38 seconds to get that lock. Now I have a very specific direction to go in with my research and troubleshooting. 

There are a lot of different wait types in SQL Server, and you may end up going in very different directions with your troubleshooting efforts depending on which type of wait is making your query slow. In the case of a lock wait like this, I might use a tool like sp_whoisactive to find out what other transaction/query is blocking my query. On the other hand, if your query is spending a lot of time on a "PAGEIOLATCH_EX" wait, you may end up using the "Top Resource Consuming Queries" report in Query Store to find out which query (or queries) are doing the most physical reads.

Checking for compile time

If the total "WaitTimeMs" in the WaitStats section isn't significant, or if there is no "WaitStats" section at all, next you should find out how long the query takes to compile. To do this, in your execution plan XML, do a search for the term "CompileTime". You should find something like this:

 <QueryPlan DegreeOfParallelism="1" CachedPlanSize="1056" CompileTime="5943" CompileCPU="5943" CompileMemory="33896"> 

For a simple query you should see a "CompileTime" of 0ms, or perhaps as much as 10ms, but here we see 5943 milliseconds, which is just shy of 6 seconds! What this means is that it took SQL Server 6 seconds just to put together an execution plan for this query. That's before the query even gets to run. You may see this for very complicated queries, e.g., queries that have multiple nested sub-queries or an exceptionally large number of joins. In one case, I saw a query with a compile time of over 30 seconds in production. Our application-side timeout was 30 seconds, so this particular query was timing out and causing errors for end users.

There's a caveat here: SQL Server should only actually spend that time the first time it runs the query. Subsequent executions should just re-use that same execution plan, skipping all that compile time. So, a telltale sign that compile time is your problem is that the query is slow the first time you run it, but the second, third, etc. executions are much faster.

Solving this problem can be incredibly low-effort (the last time I ran into it, it took 15 minutes to develop a fix), so, the faster you can identify this as the cause of your problem, the faster you'll have a resolution.

Checking for run time

If you've already eliminated compile time and wait time as the cause of your slow query, that leaves us with the third possibility — it's just a slow query. Let's take the following execution plan as an example:

 <?xml version="1.0" encoding="utf-16"?> <ShowPlanXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Version="1.564" Build="16.0.1000.6" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan">   <BatchSequence>     <Batch>       <Statements>         <StmtSimple StatementCompId="1" StatementEstRows="1285280" StatementId="1" StatementOptmLevel="FULL" CardinalityEstimationModelVersion="70" StatementSubTreeCost="80.0303" StatementText="SELECT *&#xD;&#xA;FROM Object1" StatementType="SELECT" QueryHash="0x3352E00A1F942050" QueryPlanHash="0xE26E3CB998E16684" RetrievedFromCache="true" StatementSqlHandle="0x0900CE27F0841D17EE29F830B0383DD372BF0000000000000000000000000000000000000000000000000000" DatabaseContextSettingsId="7" ParentObjectId="0" StatementParameterizationType="0" SecurityPolicyApplied="false">           <StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" />           <QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="TSQLUserDefinedFunctionsNotParallelizable" MemoryGrant="67552" CachedPlanSize="112" CompileTime="5" CompileCPU="5" CompileMemory="920">             <MissingIndexes>               <MissingIndexGroup Impact="82.7209">                 <MissingIndex Database="Database1" Schema="Schema1" Table="Object2">                   <ColumnGroup Usage="EQUALITY">                     <Column Name="Column3" ColumnId="7" />                   </ColumnGroup>                   <ColumnGroup Usage="INCLUDE">                     <Column Name="Column2" ColumnId="6" />                     <Column Name="Column4" ColumnId="9" />                     <Column Name="Column5" ColumnId="11" />                     <Column Name="Column6" ColumnId="12" />                     <Column Name="Column13" ColumnId="14" />                     <Column Name="Column7" ColumnId="15" />                     <Column Name="Column8" ColumnId="16" />                     <Column Name="Column9" ColumnId="19" />                     <Column Name="Column10" ColumnId="22" />                   </ColumnGroup>                 </MissingIndex>               </MissingIndexGroup>             </MissingIndexes>             <Warnings>               <MemoryGrantWarning GrantWarningKind="Excessive Grant" RequestedMemory="67552" GrantedMemory="67552" MaxUsedMemory="2112" />             </Warnings>             <MemoryGrantInfo SerialRequiredMemory="2048" SerialDesiredMemory="67552" RequiredMemory="2048" DesiredMemory="67552" RequestedMemory="67552" GrantWaitTime="0" GrantedMemory="67552" MaxUsedMemory="2112" MaxQueryMemory="3966864" />             <OptimizerHardwareDependentProperties EstimatedAvailableMemoryGrant="389427" EstimatedPagesCached="194713" EstimatedAvailableDegreeOfParallelism="4" MaxCompileMemory="11859840" />             <WaitStats>               <Wait WaitType="SOS_SCHEDULER_YIELD" WaitTimeMs="16" WaitCount="4469" />               <Wait WaitType="ASYNC_NETWORK_IO" WaitTimeMs="12" WaitCount="2" />             </WaitStats>             <QueryTimeStats CpuTime="17901" ElapsedTime="17919" UdfCpuTime="12925" UdfElapsedTime="12919" />             <RelOp AvgRowSize="2584" EstimateCPU="0.128528" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1285280" LogicalOp="Compute Scalar" NodeId="0" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="80.0303">               <OutputList>    
             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column1" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column2" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column5" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column6" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column7" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column8" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column9" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column10" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column11" />                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column12" />                 <ColumnReference Column="Expr1005" />               </OutputList>               <RunTimeInformation>                 <RunTimeCountersPerThread Thread="0" ActualRows="1758196" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="15298" ActualCPUms="15293" />               </RunTimeInformation>               <ComputeScalar>                 <DefinedValues>                   <DefinedValue>                     <ColumnReference Column="Expr1005" />                     <ScalarOperator ScalarString="ScalarString1">                       <UserDefinedFunction FunctionName="Function1">                         <ScalarOperator>                           <Identifier>                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column13" />                           </Identifier>                         </ScalarOperator>                       </UserDefinedFunction>                     </ScalarOperator>                   </DefinedValue>                 </DefinedValues>                 <RelOp AvgRowSize="2607" EstimateCPU="10.9251" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1285280" LogicalOp="Right Outer Join" NodeId="1" Parallel="false" PhysicalOp="Hash Match" EstimatedTotalSubtreeCost="79.9018">                   <OutputList>                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column1" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column2" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column5" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column6" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column13" />                     <ColumnReference Database="Database1" Schema="Schema1" 
Table="Object2" Column="Column7" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column8" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column9" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column10" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column11" />                     <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column12" />                   </OutputList>                   <MemoryFractions Input="0.923425" Output="0.923425" />                   <RunTimeInformation>                     <RunTimeCountersPerThread Thread="0" ActualRows="1758196" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="1730" ActualCPUms="1730" ActualScans="0" ActualLogicalReads="0" ActualPhysicalReads="0" ActualReadAheads="0" ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobReadAheads="0" InputMemoryGrant="61512" OutputMemoryGrant="61512" UsedMemoryGrant="1472" />                   </RunTimeInformation>                   <Hash>                     <DefinedValues />                     <HashKeysBuild>                       <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column14" />                     </HashKeysBuild>                     <HashKeysProbe>                       <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                     </HashKeysProbe>                     <ProbeResidual>                       <ScalarOperator ScalarString="ScalarString2">                         <Compare CompareOp="EQ">                           <ScalarOperator>                             <Identifier>                               <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column14" />                             </Identifier>                           </ScalarOperator>                           <ScalarOperator>                             <Identifier>                               <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                             </Identifier>                           </ScalarOperator>                         </Compare>                       </ScalarOperator>                     </ProbeResidual>                     <RelOp AvgRowSize="135" EstimateCPU="0.0129687" EstimateIO="1.87942" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="11647" EstimatedRowsRead="11647" LogicalOp="Clustered Index Scan" NodeId="2" Parallel="false" PhysicalOp="Clustered Index Scan" EstimatedTotalSubtreeCost="1.89239" TableCardinality="11647">                       <OutputList>                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column14" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column12" />                       </OutputList>                       <RunTimeInformation>                         <RunTimeCountersPerThread Thread="0" ActualRows="11647" ActualRowsRead="11647" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="6" ActualCPUms="6" ActualScans="1" ActualLogicalReads="2552" 
ActualPhysicalReads="0" ActualReadAheads="0" ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobReadAheads="0" />                       </RunTimeInformation>                       <IndexScan Ordered="false" ForcedIndex="false" ForceScan="false" NoExpandHint="false" Storage="RowStore">                         <DefinedValues>                           <DefinedValue>                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column14" />                           </DefinedValue>                           <DefinedValue>                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object4" Alias="Object5" Column="Column12" />                           </DefinedValue>                         </DefinedValues>                         <Object Database="Database1" Schema="Schema1" Table="Object4" Index="Index1" Alias="Object5" IndexKind="Clustered" Storage="RowStore" />                       </IndexScan>                     </RelOp>                     <RelOp AvgRowSize="2501" EstimateCPU="12.9357" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1285280" LogicalOp="Inner Join" NodeId="3" Parallel="false" PhysicalOp="Hash Match" EstimatedTotalSubtreeCost="67.0843">                       <OutputList>                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column1" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column2" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column5" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column6" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column13" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column7" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column8" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column9" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column10" />                         <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column11" />                       </OutputList>                       <MemoryFractions Input="0.0765755" Output="0.0765755" />                       <RunTimeInformation>                         <RunTimeCountersPerThread Thread="0" ActualRows="1758196" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="1090" ActualCPUms="1090" ActualScans="0" ActualLogicalReads="0" ActualPhysicalReads="0" ActualReadAheads="0" ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobReadAheads="0" InputMemoryGrant="6032" OutputMemoryGrant="6032" UsedMemoryGrant="640" />                       </RunTimeInformation>                       <Hash>                         <DefinedValues />                         <HashKeysBuild>                           <ColumnReference 
Database="Database1" Schema="Schema1" Table="Object3" Column="Column14" />                         </HashKeysBuild>                         <HashKeysProbe>                           <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                         </HashKeysProbe>                         <ProbeResidual>                           <ScalarOperator ScalarString="ScalarString3">                             <Compare CompareOp="EQ">                               <ScalarOperator>                                 <Identifier>                                   <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                                 </Identifier>                               </ScalarOperator>                               <ScalarOperator>                                 <Identifier>                                   <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column14" />                                 </Identifier>                               </ScalarOperator>                             </Compare>                           </ScalarOperator>                         </ProbeResidual>                         <RelOp AvgRowSize="51" EstimateCPU="0.0007422" EstimateIO="0.0409028" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="532" EstimatedRowsRead="532" LogicalOp="Clustered Index Scan" NodeId="4" Parallel="false" PhysicalOp="Clustered Index Scan" EstimatedTotalSubtreeCost="0.041645" TableCardinality="532">                           <OutputList>                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column14" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column11" />                           </OutputList>                           <RunTimeInformation>                             <RunTimeCountersPerThread Thread="0" ActualRows="532" ActualRowsRead="532" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="0" ActualCPUms="0" ActualScans="1" ActualLogicalReads="54" ActualPhysicalReads="0" ActualReadAheads="0" ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobReadAheads="0" />                           </RunTimeInformation>                           <IndexScan Ordered="false" ForcedIndex="false" ForceScan="false" NoExpandHint="false" Storage="RowStore">                             <DefinedValues>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column14" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object3" Column="Column11" />                               </DefinedValue>                             </DefinedValues>                             <Object Database="Database1" Schema="Schema1" Table="Object3" Index="Index2" IndexKind="Clustered" Storage="RowStore" />                           </IndexScan>                         </RelOp>                         <RelOp AvgRowSize="2479" EstimateCPU="1.9342" EstimateIO="52.1728" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1758220" EstimatedRowsRead="1758220" LogicalOp="Clustered Index Scan" NodeId="5" Parallel="false" PhysicalOp="Clustered 
Index Scan" EstimatedTotalSubtreeCost="54.107" TableCardinality="1758220">                           <OutputList>                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column1" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column2" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column5" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column6" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column13" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column7" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column8" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column9" />                             <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column10" />                           </OutputList>                           <RunTimeInformation>                             <RunTimeCountersPerThread Thread="0" ActualRows="1758223" ActualRowsRead="1758223" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="481" ActualCPUms="481" ActualScans="1" ActualLogicalReads="70819" ActualPhysicalReads="0" ActualReadAheads="0" ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobReadAheads="0" />                           </RunTimeInformation>                           <IndexScan Ordered="false" ForcedIndex="false" ForceScan="false" NoExpandHint="false" Storage="RowStore">                             <DefinedValues>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column1" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column2" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column3" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column4" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column5" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column6" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" 
Table="Object2" Column="Column13" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column7" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column8" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column9" />                               </DefinedValue>                               <DefinedValue>                                 <ColumnReference Database="Database1" Schema="Schema1" Table="Object2" Column="Column10" />                               </DefinedValue>                             </DefinedValues>                             <Object Database="Database1" Schema="Schema1" Table="Object2" Index="Index3" IndexKind="Clustered" Storage="RowStore" />                           </IndexScan>                         </RelOp>                       </Hash>                     </RelOp>                   </Hash>                 </RelOp>               </ComputeScalar>             </RelOp>           </QueryPlan>         </StmtSimple>       </Statements>     </Batch>   </BatchSequence> </ShowPlanXML> 

In this execution plan, we can see from the "WaitStats" section that our query spent a total of only about 28 ms waiting. It took just 5 milliseconds to compile, but, if you search for "ElapsedTime", you will see that it took 17,919 milliseconds, just shy of 18 seconds, to run. So this is simply a slow query.

At this point we need to analyze the execution plan further to see where all that time is being spent and go from there to solve it. Maybe we need to add a new non-clustered index somewhere, or maybe we can add a new where clause to reduce the number of results on one side of an expensive join. There are many different causes of this problem and many different solutions, and that's outside the scope of this article.

The takeaway

When you run into a database performance issue with SQL Server, it can feel overwhelming to try and troubleshoot it, but the first step is always going to be to gather information. If you start with the steps outlined in this article, you can quickly narrow down the problem and get to an answer faster. Remember, the first step in your troubleshooting journey is finding out what your query is spending so much time doing: compiling, waiting, or running.

Learn more about our Application Development expertise and contact us for your next project. 

]]>
Christopher Olsen Thu, 18 Jan 2024 17:50:00 GMT f1397696-738c-4295-afcd-943feb885714:10556
https://www.dmcinfo.com/latest-thinking/blog/id/10527/using-algorithms-for-efficient-multiplexing#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10527 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10527&PortalID=0&TabID=61 Using Algorithms for Efficient Multiplexing https://www.dmcinfo.com/latest-thinking/blog/id/10527/using-algorithms-for-efficient-multiplexing In Test and Measurement Automation projects, we often have to make numerous, quick measurements for test points on a device being tested, which we make with multiplexers. For a recent project, we needed to measure multiple AC voltages on a Mobile Energy Storage System.

We chose multiplexers that are rated for high voltages, can carry a decent amount of current, and are rated by their manufacturer for a large number of relay switching cycles. These are mechanical systems, however, and, if you build enough test systems with multiplexers that constantly switch relays on and off, one of them will eventually fail. Any part of a factory line system failing means downtime, which can lead to losses.

The best approach to building test systems that last and deliver value is to expect that they may fail at some point. At DMC, we design tools to delay and diagnose the inevitable rather than ignore it.

To this end, we deployed our project with a diagnostic sequence and additional hardware that the client could use periodically to detect wiring and relay faults so we can fix them. There are several ways in which this idea can be applied.

Take the example of testing a standard 120V AC outlet: we need to check that we see ~120V between live and neutral as well as live and ground but ~0V between neutral and ground.
 
In our case, we mapped the DMM and each of the test points to multiplexer coordinate “paths” and those paths to human-readable names through our custom “MUX Manager” library. This enabled us to control relays with Python lines like these:

mux_manager.get_pin_path(<Pin Name>, <Rail>)
mux_manager.set_pin(<Pin Path>, <New State>)
mux_manager.read_pin(<Pin Path>)
mux_manager.clear_all()
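
For context, here is a minimal sketch of what that mapping layer could look like. The Rail enum values, pin names, and coordinate strings below are illustrative assumptions for this post, not the real MUX Manager library:

from enum import Enum

class Rail(Enum):
    POSITIVE = "+"
    NEGATIVE = "-"

# Hypothetical mapping from (human-readable pin name, rail) to a multiplexer coordinate "path".
# A real project would load this from a wiring configuration rather than hard coding it.
PIN_MAP = {
    ("LIVE", Rail.POSITIVE): "MUX1:CH03+",
    ("LIVE", Rail.NEGATIVE): "MUX1:CH03-",
    ("NEUTRAL", Rail.POSITIVE): "MUX1:CH04+",
    ("NEUTRAL", Rail.NEGATIVE): "MUX1:CH04-",
}

def get_pin_path(pin_name, rail):
    return PIN_MAP[(pin_name, rail)]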

If we wrote a generic method in a test class to measure the voltage between two pins, a first attempt would look something like this:

class VoltageTest:
    …

    def test_voltage(self, pin_a, pin_b, expected, tolerance):
        self.mux_manager.clear_all()

        path_a = self.mux_manager.get_pin_path(pin_a, Rail.POSITIVE)
        path_b = self.mux_manager.get_pin_path(pin_b, Rail.NEGATIVE)

        self.mux_manager.set_pin(path_a, True)
        self.mux_manager.set_pin(self.dmm_positive_path, True)
        self.mux_manager.set_pin(path_b, True)
        self.mux_manager.set_pin(self.dmm_negative_path, True)

        measurement = self.dmm.read_voltage()

        # Clearing up connections
        self.mux_manager.clear_all()

        if (expected - tolerance) < measurement < (expected + tolerance):
            return "PASS"
        else:
            return "FAIL"

We need to clear the connections at the end of each function call to leave the multiplexer in a clean state, so that calls to test_voltage can be rearranged in a test sequence by an engineer without worrying about which pins were previously enabled. On the other hand, when we imagine how such a method would be used, we start to see some redundancy. In testing the 3-pin outlet, we would have to enable and disable the DMM pins 12 times, with similar inefficiencies for the pins being tested. The problem is compounded when this generic function is run hundreds of times in one sequence and that sequence is run hundreds of times. The likelihood of a single relay failing (and therefore the risk of downtime) is made unnecessarily high by lazy programming.


 
Instead, we can use our knowledge of the system and Python's built-in set operations to eliminate this redundancy with a new method in the MUX Manager:

class MUXManager:
    …

    def masked_set_pins(self, pin_and_rail_list):
        # Record which relay paths are currently high (closed).
        high_pins = set()
        for pin_name in self.pin_names:
            pin_path_positive = self.get_pin_path(pin_name, Rail.POSITIVE)
            pin_path_negative = self.get_pin_path(pin_name, Rail.NEGATIVE)
            if self.read_pin(pin_path_positive):
                high_pins.add(pin_path_positive)
            if self.read_pin(pin_path_negative):
                high_pins.add(pin_path_negative)

        # Build the set of paths that the requested measurement needs to be high.
        need_to_be_high_pins = set()
        for pin_name, rail in pin_and_rail_list:
            pin_path = self.get_pin_path(pin_name, rail)
            need_to_be_high_pins.add(pin_path)

        # Only switch the relays whose state actually needs to change.
        need_to_make_low_pins = high_pins - need_to_be_high_pins
        for path in need_to_make_low_pins:
            self.set_pin(path, False)

        need_to_make_high_pins = need_to_be_high_pins - high_pins
        for path in need_to_make_high_pins:
            self.set_pin(path, True)

By using set differences, we know exactly which relays to turn off and which to turn on relative to the previous state, and we avoid any switch calls that would be redundant.
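
As a standalone illustration of that set arithmetic, here is a small, self-contained example; the pin paths are made up for the illustration and do not come from the real system:

# Hypothetical relay states: what is currently closed vs. what the next measurement needs.
currently_high = {"MUX1:CH03+", "MUX1:CH04-", "DMM+", "DMM-"}
need_high = {"MUX1:CH05+", "MUX1:CH04-", "DMM+", "DMM-"}

print(currently_high - need_high)  # relays to open:  {'MUX1:CH03+'}
print(need_high - currently_high)  # relays to close: {'MUX1:CH05+'}

Only the two relays whose state actually changes are touched; the DMM paths and the shared test-point path stay closed.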

This is not a perfect approach: it requires knowledge of the system, such as whether you would be performing any hot switching or whether the switch calls need to happen in a particular order.

For our voltage tests, however, this approach was the right one and worked best for our client’s needs. We can rewrite our test_voltage method to incorporate this new method as follows:

class VoltageTest:
    …

    def test_voltage(self, pin_a: str, pin_b: str, expected, tolerance):
        self.mux_manager.masked_set_pins([
            (pin_a, Rail.POSITIVE),
            ("DMM+", Rail.POSITIVE),
            (pin_b, Rail.NEGATIVE),
            ("DMM-", Rail.NEGATIVE),
        ])

        measurement = self.dmm.read_voltage()

        if (expected - tolerance) < measurement < (expected + tolerance):
            return "PASS"
        else:
            return "FAIL"

Our method is more readable and more efficient, reducing the number of relay switches per measurement. The animation shows the decrease in switching actions (from 24 down to 12) with this new approach, a saving that becomes even more pronounced for outlets with more pins.
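
To make that concrete, here is a hedged sketch of how the 120V outlet check described earlier might be sequenced with this method. The pin names, expected values, and tolerances are illustrative assumptions rather than values from the real project, and voltage_test is assumed to be an instance of the VoltageTest class above:

# Hypothetical pin names and tolerances for the three-pin outlet example.
outlet_checks = [
    ("LIVE", "NEUTRAL", 120.0, 6.0),  # expect ~120V
    ("LIVE", "GROUND", 120.0, 6.0),   # expect ~120V
    ("NEUTRAL", "GROUND", 0.0, 1.0),  # expect ~0V
]

results = [
    voltage_test.test_voltage(pin_a, pin_b, expected, tolerance)
    for pin_a, pin_b, expected, tolerance in outlet_checks
]
print(results)  # e.g. ['PASS', 'PASS', 'PASS']

Because masked_set_pins only toggles relays whose state needs to change, consecutive checks that share paths (the DMM paths throughout, and LIVE or GROUND between neighboring checks) reuse already-closed relays instead of cycling them.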

 
What I like most about this whole endeavor is that it illustrates how thoughtful engineering design can show up in many ways, in both hardware and software. By making our software a little more sophisticated, our hardware takes fewer steps and lives longer. Good multiplexing is, well, multiplex.

Learn more about DMC's Battery Pack and BMS Test Systems and contact us today for your next project.

]]>
Fadil Eledath Thu, 18 Jan 2024 07:24:00 GMT f1397696-738c-4295-afcd-943feb885714:10527
https://www.dmcinfo.com/latest-thinking/blog/id/10553/visual-studio-code-tips-and-tricks#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10553 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10553&PortalID=0&TabID=61 Visual Studio Code Tips and Tricks https://www.dmcinfo.com/latest-thinking/blog/id/10553/visual-studio-code-tips-and-tricks Visual Studio (VS) Code is one of the most popular integrated development environment (IDE) tools for software developers and is frequently used by engineers at DMC.

In a fast-paced environment where developer velocity is highly valued, it’s no wonder there is already a myriad of guides out there, like the official Visual Studio Code Tips and Tricks and Desuvit’s blog, that outline tips and tricks to boost engineers’ productivity. The VS Code official documentation even has a PDF listing the default keyboard shortcuts.

Such an abundance of information can be overwhelming, however, especially for junior software engineers who are just beginning to develop their own coding routines and practices. I’ve learned a handful of new shortcuts over the years; here are a few of my favorites that I use on a day-to-day basis to accelerate my developer velocity.

Search

`Ctrl + P`: search file name

`Ctrl + T`: global search for symbols (methods, classes, variables) across the entire project

`Ctrl + Shift + F`: global search for the selected text across entire project

`Ctrl + Shift + P`: access command palette

Editing Code

`Ctrl + Alt + Up/Down`: add cursor above/below

`Alt + Click`: add cursors at arbitrary positions

`Ctrl + D` or `Ctrl + Shift + L`: select the next occurrence or all occurrences of the current selected text

`Shift + Alt + Up/Down`: copy line of code and place it above/below

`Alt + Up/Down`: move line of code up/down

`Ctrl + /`: comment out/uncomment selected code lines

`Ctrl + Shift + V` or `Ctrl + K V`: open a Markdown preview of the current file (such as README.md) in the same editor group or in a side-by-side view

Navigating Development Environment

`Ctrl + B`: toggle sidebar

`Ctrl + J`: toggle bottom panel

`Ctrl + W`: close the current editor tab

`Ctrl + \`: split editor window for side-by-side editing

Double Click on a file tab: pin a file opened in preview mode so it stays open without having to edit it

The last tip has been of great help when I am just browsing files to get familiar with a new codebase. While these are the default keyboard shortcuts, VS Code is highly configurable, so you can always customize them by going to File > Preferences > Keyboard Shortcuts or typing `Ctrl + K Ctrl + S`.

Learning a few useful shortcuts is part of the process of developing a coding routine that can help accelerate your developer velocity as a software engineer. What are a few of your favorite keyboard shortcuts in VS Code?

Learn more about DMC's Application Development services and contact us today for your next project.

]]>
Debbie Leung Wed, 10 Jan 2024 17:39:00 GMT f1397696-738c-4295-afcd-943feb885714:10553
https://www.dmcinfo.com/latest-thinking/blog/id/10546/wincc-oa--how-to-create-a-microsoft-sql-server-install-for-nextgen-archiving#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10546 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10546&PortalID=0&TabID=61 WinCC OA - How to Create a Microsoft SQL Server Install for NextGen Archiving https://www.dmcinfo.com/latest-thinking/blog/id/10546/wincc-oa--how-to-create-a-microsoft-sql-server-install-for-nextgen-archiving This two-part blog series is intended to be a step-by-step overview on how to set up and utilize a MS SQL Server and WinCC OA's NextGen Archive (NGA). Information for a general setup exists via the WinCC OA Documentation (see Further Reading/Links), but this walkthrough aims to be more detailed and explicit in the necessary steps. 

  1. How to Create a Microsoft SQL Server Install for NextGen Archiving
  2. How to Configure NextGen Archiving in WinCC OA to use a Microsoft SQL Server

Table of Contents:

  1. Notes/Prerequisites
  2. MS SQL Server and Database
    1. MS SQL Server Installation
      1. Basic Installation
      2. Custom Installation
    2. MS SQL Server Configuration
      1. SQL Server Configuration Manager
      2. Microsoft SQL Server Management Studio
    3. Database Creation
  3. Further Reading/References

1. Notes/Prerequisites

Required programs:

  • Microsoft SQL Server (Installation instructions in part 2a)
  • Microsoft SQL Server Management Studio
  • Microsoft SQL Server Configuration Manager (installed alongside Microsoft SQL Server)

This demo was implemented using:

  • WinCC OA 3.18 P006
  • Microsoft SQL Server 2022 Express
    • NOTE: Other versions of MS SQL may work with NGA, but it has not yet been verified by DMC.
  • Microsoft SQL Server Management Studio 18
  • Windows 11

Assumptions:

  • Proper licensing for NGA is configured
  • The OS user has Windows administrator privileges

2. MS SQL Server and Database

2.1 MS SQL Server Installation

Back to Table of Contents

  1. Begin the SQL server installation
    1. Download Microsoft SQL Server setup application from Microsoft's SQL Server Downloads page
    2. Run the application and proceed with the Basic or Custom installation. 

WinCC OA - SQL Server 2022 Express Edition

Basic Installation

If implementing the basic installation:

2. On the installation confirmation screen, select the Customize option

Screenshot 2

3. Configure the “Installation Type” Window properties

  1. Select the Add features to an existing instance of SQL Server <Year> option
  2. Select the server you wish to use for NGA connection

NOTE: For any installer window not specifically covered in these steps, feel free to use the default options

Screenshot 3

4. Configure the “Azure Extension for SQL Server” Window properties

  1. Un-check the Azure extension for SQL Server option

Screenshot 4

5. Configure the “Feature Selection” Window properties

  1. Select the SQL Server Replication option
  2. NOTE: Installing this option is the primary purpose of customizing the server
  3. Click Next to begin installation

Screenshot 5

Custom Installation

If implementing the custom installation, follow the steps outlined in the WinCC OA MS SQL® Server Installation documentation.

2.2 MS SQL Server Configuration

Back to Table of Contents

Steps for configuring the SQL server will take place in both the SQL Server Configuration Manager and Microsoft SQL Server Management Studio programs (installed in 2.1 MS SQL Server Installation)

SQL Server Configuration Manager

Within the SQL Server Configuration Manager…

  1. Configure the Server’s TCP/IP Properties
    1. Navigate to “SQL Server Network Configuration/Protocols for <SERVER NAME>”
    2. Double click the TCP/IP item in the “Protocol Name” column to open the “TCP/IP Properties” window
    3. In the “Protocol” tab, toggle the Enabled option to Yes
    4. In the “IP Addresses” tab, specify 1433 for the IPAll/TCP Port option
    5. Click OK to apply and close the window

NOTE: A warning will appear indicating that changes will be applied only after the server is restarted 

Screenshot 6

Screenshot 7

Screenshot 8

Warning dialog: any changes made will be saved, but they will take effect only after the server is restarted

2. Restart the SQL Server

  1. Navigate to “SQL Server Services”
  2. Right click the “SQL Server (<SERVER NAME>)” row and click Restart

Screenshot 10


Microsoft SQL Server Management Studio

Within Microsoft SQL Server Management Studio…

1. Connect to the SQL Server

  1. Click the “Connect Object Explorer” icon within the “Object Explorer”
  2. Specify the following parameters in the “Connect to Server” window and click Connect

  • Server type: Database Engine
  • Server name: <host>/<SERVER NAME>
  • Authentication: Windows Authentication

 

NOTE: If the “Custom” SQL Server installation was implemented, then connection can be made using the “SQL Server Authentication” method and the specified credentials

Screenshot 11

Screenshot 12

2. Enable dual authentication mode

  1. Right click the server name in the “Object Explorer” and select Properties
  2. Navigate to the “Security” page in the “Select a Page” menu
  3. Under the “Server Authentication” section, select SQL Server and Windows Authentication Mode option
  4. Click OK to apply changes
  5. NOTE: A warning will appear indicating that changes will be applied only after the server is restarted
  6. Restart the SQL Server via the SQL Server Configuration Manager using the steps outlined in Step 2 of the SQL Server Configuration Manager instructions

Screenshot 13

Screenshot 14

Screenshot 15

3. Configure the System Administrator credentials

  1. Under the “Object Explorer”, navigate to “<Database Name>/Security/Logins”
  2. Right click the sa (System Administrator) option and select the Properties option
  3. Under the “General” page of the Login Properties window, specify a password in the Password: and Confirm password: fields
    1. For this demonstration, I’ll use the password $martPeople3xpertSolutions
  4. Optional: Toggle the other general properties as needed or as deemed necessary by your security requirements
  5. Click OK to apply changes

Screenshot 16

Screenshot 17

4. If a red “X” appears next to the sa user in the “Object Explorer”, then enable the sa (System Administrator) login

  1. Within the “Login Properties” window from step 3, navigate to the “Status” page
  2. Under the Settings/Login: option, select Enabled
  3. Click OK to apply changes
  4. NOTE: You may need to refresh the Object Explorer to see the red “X” disappear

Screenshot 18

Screenshot 19

5. Optional: Create a WinCC OA user account

NOTE: This step can be done now, or the user can be automatically created upon database generation (recommended)

  1. Under the “Object Explorer”, navigate to “<Database Name>/Security/Logins”
  2. Right click the “Logins” folder or any of its elements and select the New Login… option
  3. Within the “General” page
    1. Click the SQL Server authentication radio button
    2. Specify the user’s name in the Login name: field
      1. For this demonstration, I’ll use the username winccoa
    3. Specify the user’s password in the Password: and Confirm password: fields
      1. For this demonstration, I’ll use the password $martPeople3xpertSolutions
  4. Within the “Server Roles” page
    1. Check the public server role
    2. Click OK to apply changes

Screenshot 20

Screenshot 21

Screenshot 22

2.3 Database Creation

Back to Table of Contents

The database used for NGA will be auto-generated using files that can be found in WinCC OA’s base project

1. Locate and copy the required database generation files

  1. Navigate to the “<winccoa>/data/NGA/MSSQLServer/sql” folder within Windows Explorer
    1. Where <winccoa> is the base WinCC OA project
    2. The default install location of the base project is typically “C:\Siemens\Automation\WinCC_OA\3.18”
    3. Therefore, the path I’m using is “C:\Siemens\Automation\WinCC_OA\3.18\data\NGA\MSSQLServer\sql”
  2. Copy the relevant files
    1. schema.sql
    2. db.windows.config
    3. create_database_windows.bat
  3. If your NGA project already exists, copy the relevant files into the same directory in your project
    1. You may need to manually create the relevant folders for “<project>/data/NGA/MSSQLServer/sql”
    2. It should be noted that the files can be copied anywhere, but it’s organizationally preferable to use the location specified above
    3. If you haven’t created a project yet, copy the files into a temporary location and move said files into your project upon project creation (outlined in the 2a. Project Setup instructions)

Screenshot 23

Screenshot 24

2. Modify the config file parameters

NOTE: The db.windows.config file will be used to specify the parameters needed to auto generate the appropriate database

  1. Open the db.windows.config file with your favorite text-editing tool, like Notepad or Notepad++
  2. Modify the parameters as needed.

  • dbServer (Modify? YES): This should match the name of the server as specified in SQL Server Management Studio’s “Connect to Server” window and the Object Explorer.
  • Port (Modify? MAYBE): The default MS SQL Server port is 1433. If you specified a different port during the MS SQL Server Configuration (step 1d), then use that value here.
  • adminUsername: If using the “sa” user as the system administrator, this field should not change.
  • adminPassword (Modify? YES): Modify this field using the System Administrator password specified during the MS SQL Server Configuration (step 3c).
  • winccoaLogin (Modify? MAYBE): If desired, modify the winccoa login/username. If already configured, then specify the name here. If not yet configured, then a user will be auto-generated.
  • winccoaPassword (Modify? YES): If desired, modify the winccoa password. If already configured, then specify the password here. If not yet configured, then a user will be auto-generated.
  • dbName (Modify? MAYBE): This will be the name of the to-be-generated database. You can use the default name or specify something a bit more descriptive.
  • sqlscriptpath (Modify? YES): This will be the file path of the schema.sql file duplicated in steps 1b and 1c. If the file was not duplicated, you may reference the corresponding file in the base OA project (i.e. <winccoa>/data/NGA/MSSQLServer/sql/schema.sql).
  • numberType (Modify? MAYBE): Can be left as default. Change if desired.
  • dbInitSize (Modify? MAYBE): Can be left as default. Change if desired.
  • dbFileGrowth (Modify? MAYBE): Can be left as default. Change if desired.
  • logInitSize (Modify? MAYBE): Can be left as default. Change if desired.
  • logFileGrowth (Modify? MAYBE): Can be left as default. Change if desired.
  • logMaxSize (Modify? MAYBE): Can be left as default. Change if desired.
  • dbPath (Modify? YES): This will be the physical location where the database is stored. The folder location specified must be created if it does not already exist. You may opt to use the standard SQL database storage location or a more accessible location.
  • dbBackupPath (Modify? YES): This will be the physical location where the database backup is stored. The folder location specified must be created if it does not already exist. You may opt to use the standard SQL database storage location or a more accessible location.

It should be noted that, according to the db.windows.config file comments, “The backup folder must be accessible for users under which the WinCC OA is running.”

 

Screenshot 25

3. Generate the SQL database

  1. Run the Windows Command Prompt as administrator
    1. In the Windows search bar, search “Cmd”
    2. Right click the Command Prompt application and select “Run as administrator”
  2. Navigate to the location of the create_database_windows.bat file (in the “<project>/data/NGA/MSSQLServer/sql” folder)
    1. Use cd <folder> in the command prompt to navigate to the appropriate directory
  3. Run the create_database_windows.bat file
    1. Executing the batch file generates the appropriate database using the parameters specified in db.windows.config

Screenshot 26

Screenshot 27

Screenshot 28

4. Verify that the appropriate database and users were created

  1. Within Microsoft SQL Server Management Studio, you should see:
    1. The newly created database
    2. The generated server-level winccoa user (if it did not already exist)
    3. The generated/mapped database-level winccoa user

Screenshot 29

3. Further Reading/References

Learn more about our Manufacturing Automation and Intelligence expertise and contact us for your next project. 

]]>
Nick Leisle Tue, 09 Jan 2024 17:38:00 GMT f1397696-738c-4295-afcd-943feb885714:10546
https://www.dmcinfo.com/latest-thinking/blog/id/10541/sweet-traditions-dmcs-fourth-annual-cookie-exchange-unwrapped#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10541 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10541&PortalID=0&TabID=61 Sweet Traditions: DMC's Fourth Annual Cookie Exchange Unwrapped https://www.dmcinfo.com/latest-thinking/blog/id/10541/sweet-traditions-dmcs-fourth-annual-cookie-exchange-unwrapped What started as a way to connect with coworkers during the pandemic has become a long-awaited yearly tradition filled with sending and receiving delicious baked goods, sharing family recipes, and making memories. The DMC Cookie Exchange returns this year as the fourth annual exchange!

DMCers Elizabeth Goodnight and Phil Schaffer from DMC Seattle hosted the event and spent countless hours ensuring its success.

“Initially, we were thinking of sending friends cookies, and then we realized our list involved a lot of DMC people,” Elizabeth said. “We thought it would be fun to make it kind of a larger network event!”

Chocolate Cookies for the Cookie Exchange

With DMC locations in 13 cities, it can be difficult to get to know every new employee or employees on other teams, but the Cookie Exchange helps.

“The survey that we send out has a question that asks the participants to share something interesting about themselves,” Phil said. “There are all kinds of little things that we learn from other people. You feel like ‘oh, I’m getting to know this person’ and the answer is something like they’ve been dancing for 20 years. I had no idea and I think ‘huh, that’s cool!’” 

Package of cookies for the cookie exchange

The Cookie Exchange was made for DMCers to build connections with others miles and miles away and has become an event that many look forward to at the end of the year. Although it is called a cookie exchange, all baked goods are happily accepted!

“Last year we got an almond cake from a bakery in Illinois that is known for its almond cake," Phil said. "We loved it because [the cake] was from a place that I’ve never been to or had before, and it was very tasty!” 

Building connections is the main goal of this event.

“Just because somebody doesn’t like to bake or might not be able to for some reason, doesn’t mean that they shouldn’t be able to participate. We tell people to do whatever you want, participate in the way that you want to!” Elizabeth said. "We want to make sure everyone can be included regardless of baking skill or ability, and this is a great example of ways that DMC values inclusivity in all its activities and events." 

Assortment of Cookies

Many types of tasty treats are sent out from DMCer to DMCer each year, including baked goods without certain allergens, so all DMCers may participate.

“Roughly three dozen cookies fit in a medium sized box, and there are roughly 76 boxes being exchanged this year," Elizabeth said. "I’m guessing around 3,000 cookies are being made!”

Some of Elizabeth's favorite cookies to make and send out every year include a mix of family recipes and more obscure cookies.

Recipe Card

Systems Engineer Kat Lidrbauch's Cardamom Pistachio Sugar Cookie Recipe

“Every year we make Phil’s grandma’s Christmas cookie that has been a family tradition. Kiffles, which is a Pennsylvania Dutch cookie [that originates from Hungary], is another family recipe we make each year for the box,” Elizabeth said. “We are also doing a Chai Spice Snickerdoodle cookie and a Baklava cookie this year!” 

The meaningful baked goods are Phil's favorite to receive each year.

“I like the baked goods that have sentimental value: a cookie/recipe that somebody cares about – it could be a family recipe, or it brings them joy – and they decide to share it with everyone," Phil said. “There’s something about the way that these baked goods have stories behind them, and that means a lot to me!”

Cookies from the Cookie Exchange

Sharing a piece of family tradition is the best thing to receive, according to Phil, but baking these sentimental goods and planning for an event like this takes time.

“We try to start the process of sending out the surveys after Thanksgiving," Phil said. "We start at this time of year to give people time to plan and give them a few weekends to get their cookie ingredients together." 

With some DMCers exchanging with three others across regional offices, baking can typically take an entire weekend to get the cookies prepared for shipment. Apart from exchanging cookies and baked goods, there is also an exchange of recipes!

“A couple years ago, Rose in Chicago sent some gingersnaps that were amazing! I immediately threw that into my recipe box, and I’ve kept it. They’re super good!” Elizabeth said with a smile. “[This year], someone sent me a [recipe] for almond thumbprint cookies, and it’s a recipe that his family has been using since the 70s!” 

Cookies from the Cookie Exchange

As the cookie exchange grows, there are already thoughts for ways this event can be even more inclusive in the future!

“I think next year we need to figure out if we can have people who want to receive cookies without baking them,” Phil said. “We can’t sustain making hundreds of cookies and sending them out and then just getting back hundreds of cookies. Maybe next year we’ll add that into the event!” 

The cookie exchange is an event that requires a lot of commitment, but it is also an event that brings a lot of joy to DMCers.

“I think it’s just really fun to make those connections with people who I haven’t met before [and make connections for others with people] who might enjoy talking to each other. [It also allows us to] feel a little bit more community spirit.” Elizabeth said. “It is just really cool to see how excited people get and how much fun they are having while making cookies or sending them!”

Feel like baking cookies? Below you can find a recipe for King Arthur Baking - Sugar Cookies. Let us know if you baked these cookies in the comments below!

King Arthur Sugar Cookie Recipe Card

Lillian Walker of DMC Boston tried this recipe with a twist! She added a tablespoon of grapefruit zest and some rosemary sprigs worked into the sugar, plus a teaspoon of chopped rosemary.

Learn more about DMC’s culture and explore our open positions

]]>
Sofia Sandoval Mon, 08 Jan 2024 20:59:00 GMT f1397696-738c-4295-afcd-943feb885714:10541
https://www.dmcinfo.com/latest-thinking/blog/id/10549/dmc-quote-board--january-2024#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10549 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10549&PortalID=0&TabID=61 DMC Quote Board - January 2024 https://www.dmcinfo.com/latest-thinking/blog/id/10549/dmc-quote-board--january-2024 Visitors to DMC may notice our ever-changing "Quote Board," documenting the best engineering jokes and employee one-liners of the moment.

DMC Quote Board - January 2024 

Learn more about DMC's company culture and check out our open positions

]]>
Jane Rogers Mon, 08 Jan 2024 18:49:00 GMT f1397696-738c-4295-afcd-943feb885714:10549
https://www.dmcinfo.com/latest-thinking/blog/id/10550/fun-at-dmc--volume-19#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10550 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10550&PortalID=0&TabID=61 Fun at DMC - Volume 19 https://www.dmcinfo.com/latest-thinking/blog/id/10550/fun-at-dmc--volume-19 Check out all the fun DMCers have had over the past month! 

St. Louis 

The St. Louis office went to the Garden Glow at the Missouri Botanical Gardens.

DMC St Louis at Garden Glow

Washington, D.C.

Washington, D.C. DMCers had their holiday party at Mayden

DMC DC Holiday Party

DMC DC Holiday Party

Chicago

The Chicago Test & Measurement Automation team got into the holiday spirit and put their LabVIEW Icon Editor skills to the test with an ornament-making event! 

Team Members Hanging Christmas Lights on the Tree

Group Photo Around the Decorated Tree

Seattle 

DMCers Elizabeth Goodnight and Phil Schaffer from DMC Seattle hosted DMC's fourth annual Cookie Exchange

Chocolate Cookies for the Cookie Exchange

Recipe Card

Learn more about DMC's culture and explore our open positions

]]>
Jane Rogers Mon, 08 Jan 2024 18:49:00 GMT f1397696-738c-4295-afcd-943feb885714:10550
https://www.dmcinfo.com/latest-thinking/blog/id/10544/the-ultimate-guide-to-using-arrays-in-power-automate#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10544 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10544&PortalID=0&TabID=61 The Ultimate Guide to Using Arrays in Power Automate https://www.dmcinfo.com/latest-thinking/blog/id/10544/the-ultimate-guide-to-using-arrays-in-power-automate When using Power Automate flows, we have become familiar with using the vast array of variable options. Strings, Integers, floats, and even Booleans if the mood strikes. However, one option that is often overlooked is the humble Array. 

You may be familiar with Arrays from your programming days (you have those, right?), and in a lot of ways they act in similar capacities. You can store a series of data points and reference or modify them later. However, while that is what immediately comes to mind, the Array can also store multiple data points in each Array node, allowing you to store tables of data in variable form.

Why is this good? Why would I need this? Often when we need to store a list or order of operations, the dark temptation of hard coding creeps back from our school days. "Why not just hard code this once? I am in a rush, and it won't come back to bite me later" are the words of someone who has hidden a snake in their code. Storing these items in an Array allows run-time configuration options to be changed with relative ease. Other good examples are:

  • Storing the order for a sequence, such as an approval order. 
  • Storing connection information. 
  • Storing Page or batch sizes for long running processes. 

Declaring an Array 

Creating an Array that stores simple data is, as implied, pretty simple.

Choose the "Initialize Variable" option from the list of actions and choose the "array" type. Then format your data as follows: 

[
"Item 1",
"Item 2",
"Item 4",
"Item 3", 
"Items are sometimes out of order"
]

Photo 1 - Array

Photo 2 - Array

Declaring an Array with multiple values

The more interesting and customizable way of using Arrays is to store multiple data points within a single node of each Array. The only thing that needs to change from the previous step is to use a different format. 

[
  {
    "DataPoint1": "Please use something funnier than this",
    "ClownShoeSize": 7
  },
  {
    "DataPoint1": "That last joke was not funny",
    "ClownShoeSize": -1
  }
]

Photo 3 - Array

Photo 4 - Array

With this you can store what amounts to a table of data in this form. You could parse data from a table into an Array or grab items from the Array in order. 

Getting data from Arrays

The easiest way to grab data from the Array is to put it into an "Apply to Each" control action to access each item separately.

However, while this is very easy for a single value Array, using a multiple item Array takes an extra step. 

With a single item Array, you need only add the variable to the "Apply to Each" and reference the "current item" in whatever action you're using.

Photo 5 - Array

Photo 6 - Array

Photo 7 - Array

Photo 8 - Array

You can do the same thing with a multi-valued Array, but you just get a text outline of the node, without easy access to the individual nodes. 

Photo 9 - Array

To get the data out of the multi-valued Array, use the Parse JSON action. Use the multi-valued Array as the input, then click the "Use sample payload to generate schema" button.

Photo 10 - Array

Paste in the value from the variable (this needs to be done only once).

Photo 11 - Array

Click done and the schema is generated for you. 

Photo 12 - Array

Now in a loop you can use the body from the Parse JSON action as the input. 

Photo 13 - Array

Then within that loop, you can access each of the data points of the multi-valued array separately. 

Photo 14 - Array

Photo 15 - Array

Searching Arrays to approximate a key value pair

Sometimes you have an Array that is so large that looping through it would be time-consuming. 

When this occurs, you can use an Array like a key-value pair: if you know the unique value of a field in a node, that field can act as a key from which you can retrieve the values in the corresponding node.

First, create a new Array with a field which you can ensure is of unique value. 

Note: There is no way to enforce uniqueness in the Array, so it will be entirely up to you to ensure that the inputs are valid.

Photo 16 - Array

Next use the "Filter" action and use the Array as the input. 

Photo 17 - Array

Use the following syntax to reference the key value: "item()['KeyFieldName']". Then enter the value of your key in the second field.

Photo 18 - Array

This will return the node in the Array that contains the key you entered. From there you can access the values as noted above. 

Photo 19 - Array

Learn more about our Digital Workplace Solutions expertise and our open positions

]]>
Michael Dannemiller Mon, 08 Jan 2024 18:20:00 GMT f1397696-738c-4295-afcd-943feb885714:10544
https://www.dmcinfo.com/latest-thinking/blog/id/10524/dmcs-test-measurement-team-makes-labview-holiday-ornaments#Comments 0 https://www.dmcinfo.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10524 https://www.dmcinfo.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10524&PortalID=0&TabID=61 DMC's Test & Measurement Team Makes LabVIEW Holiday Ornaments https://www.dmcinfo.com/latest-thinking/blog/id/10524/dmcs-test-measurement-team-makes-labview-holiday-ornaments DMC's Test and Measurement Automation team got into the holiday spirit and put their LabVIEW Icon Editor skills to the test with an ornament-making event! LabVIEW has an icon editor, typically used for labeling code, but the possibilities of a 32x32 box of pixels are seemingly endless.

DMC Chicago's Test and Measurement Team Conference Room Group Photo

After a four-year hiatus, the Test and Measurement team was back at it again to create this season's best LabVIEW ornaments.

"Four years ago they had a very similar event, and I think that one was spontaneous. They had a Teams meeting one day and said 'oh wow, we could actually make these into ornaments!'" Roman Cyliax, Systems Engineer in Chicago, said. "Rachel is the one who brought it back, and we want to make it an annual event." 

Team Members Hanging Christmas Lights on the Tree

The holiday ornament making event was not exclusive to Chicago's Test and Measurement team as other regional office team members were included.

"There were approximately 10 people from Chicago, and we had a few team members join in virtually from Texas and Seattle," Milos Popovic, Chicago Systems Engineer, said. "We used the LabVIEW Development Environment. You can make icons for programs; you draw with this very clunky little icon editor, and you can make a 32x32 pixel image, so it's very pixelated and grainy but normally it's enough information to say what a file does." 

Test and Measurement Team Cutting Ornaments

The team got to work and spent a few hours brainstorming, designing, printing, laminating, cutting, and finally adorning the tree with their festive icons.

"We probably made close to 30 ornaments," Milos said. "We reused a few from the last time we did it just to fill out the tree." 

Decorating the Christmas Tree with LabVIEW Ornaments

From beginning to end, the ornaments took an extensive period of time to curate.

"You make them in the LabVIEW Icon Editor, which is not necessarily the most intuitive place to make an ornament," Roman said. "You may spend about an hour drafting ideas and making ornaments. It took another hour and a half to cut and laminate the ornaments. In total, I think it took around 3 hours for our team to make the ornaments." 

DMC Santa Ornament made with LabVIEW

With dozens of ornaments made, a few stood out from the crowd.

"The ornament was a Christmas Constructor. In LabVIEW you have classes. You construct a class, and it's kind of like a one function block. Usually, we have a stock image that's like a contstruct and there's maybe a little star next to it," Roman said. "Rose made one that had the star being the point of a tree, so it was a Christmas Constructor. It was creative!" 

Constructor Ornament

Happy Holidays Ornament made with LabVIEW

Another favorite was in reference to a quirk in LabVIEW.

"There is this quirk in the Icon Editor itself where you can make stuff transparent, but it doesn't have a transparency button. There is one specific shade of off white that it interprets as transparency. You can accidentally choose this shade of white, and you think it's going to be a white background and then it just ends up being transparent," Milos said. "There is an icon that has the RGB color code for that shade of white. Someone took the time to take an X-acto knife and cut out each pixel that is that shade of white, so it is see through only on the text. That one's pretty good!" 

Group Photo Around the Decorated Tree

After an evening of fun and team bonding, it's clear that this is an event that the team wants to turn into an annual tradition.

"I think we want to make it an annual thing," Roman said. "It'd be very fun to do it again next year!" 

Final Decorated Tree

Learn more about DMC's Test and Measurement services and our company culture!

]]>
Sofia Sandoval Mon, 08 Jan 2024 16:30:00 GMT f1397696-738c-4295-afcd-943feb885714:10524