Digital Twins Thrive on Data Integration

Digital twins. The term was coined only ten years ago, but the concept is rapidly becoming a must-have in the manufacturing sector. Last year a Gartner poll found that 62 percent of respondents expected to be using digital twin technology by the end of this year, although only 13 percent of them were actually using it at the time. A key factor in this sudden interest, according to Gartner, is that “digital twins are delivering business value and have become part of enterprise IoT and digital strategies.”

What exactly are digital twins, and why are they getting so much attention lately? A digital twin is made up of three basic components: a physical system, a virtual representation of it, and the data that flows between them. The physical system could be an individual device, a complex machine, a whole production line, or even an entire factory. The virtual representation can be as complex as necessary to represent the system. The data connection keeps the virtual twin as closely in sync as possible with the physical twin, often tracking and updating changes in real time.
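
As a rough illustration of these three components, consider the following minimal sketch in Python. All of the names are hypothetical and the model is deliberately simplified: a reading from the physical system, a virtual twin that mirrors it, and a sync function standing in for the data connection.

    from dataclasses import dataclass, field
    import time

    @dataclass
    class Reading:
        """One data point coming from the physical system."""
        name: str
        value: float
        timestamp: float

    @dataclass
    class VirtualTwin:
        """Virtual representation: the latest known state of each point."""
        state: dict = field(default_factory=dict)

        def update(self, reading):
            self.state[reading.name] = reading

    def sync(readings, twin):
        """The data connection: push physical readings into the virtual twin."""
        for r in readings:
            twin.update(r)

    twin = VirtualTwin()
    sync([Reading("motor.temperature", 71.4, time.time())], twin)
    print(twin.state["motor.temperature"].value)   # 71.4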

The Value and Challenge of Data Integration

A digital twin operating in isolation is useful, but the real rewards come through making connections. Data integration between multiple sub-components of a digital twin, or between multiple digital twins, is key when advancing beyond simple pilot projects. “The ability to integrate digital twins with each other will be a differentiating factor in the future, as physical assets and equipment evolve,” says the Gartner report.

There are at least three types of relationships (illustrated in the sketch after this list):

  • Hierarchical, in which digital twins can be grouped together into increasingly complex assemblies, such as when the digital twins for several pieces of equipment are grouped into a larger digital twin for a whole production line.
  • Associational, where a virtual twin for one system is connected to a virtual twin in another system, in the same way that their physical counterparts are interrelated, such as wind turbines connected to a power grid.
  • Peer-to-peer, for similar or identical equipment or systems working together, like the engines of a jet airplane.
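
The three relationship types can be made concrete in a few lines of code. This sketch is purely illustrative, with hypothetical class and field names; it simply links twin objects together in each of the three ways.

    class Twin:
        """A digital twin that can be related to other twins in three ways."""
        def __init__(self, name):
            self.name = name
            self.children = []     # hierarchical: sub-twins grouped under this one
            self.associates = []   # associational: related twins in other systems
            self.peers = []        # peer-to-peer: similar twins working together

    # Hierarchical: equipment twins grouped into a production-line twin
    line = Twin("production_line_1")
    line.children += [Twin("press_A"), Twin("conveyor_B")]

    # Associational: a wind-turbine twin connected to a power-grid twin
    turbine, grid = Twin("wind_turbine_7"), Twin("power_grid")
    turbine.associates.append(grid)

    # Peer-to-peer: identical engine twins working together
    engine_1, engine_2 = Twin("jet_engine_1"), Twin("jet_engine_2")
    engine_1.peers.append(engine_2)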

Making these connections is not always easy. A recent publication from the Industrial Internet Consortium (IIC), titled A Short Introduction to Digital Twins, puts it this way: “Since the information comes from different sources, at different points in time and in different formats, establishing such relations in an automatic way is one of the major challenges in designing digital twins.”

The IIC article briefly discusses some of the technical aspects of this kind of integration, such as:

  • Connectivity, the necessary first step for data integration.
  • Information synchronization, which keeps a virtual twin in sync with its physical twin, and multiple connected twins in sync with each other, maintaining a history and/or real-time status as required.
  • APIs, which allow digital twins to interact with other components of a system, and possibly with other digital twins as well.
  • Deployment between the edge and the cloud, which pushes data beyond the OT (Operations Technology) domain to the IT domain, that is, from the physical twin to the virtual twin.
  • Interoperability between systems from different vendors, which may be necessary to gain a more complete picture of the total system functionality.

Another useful resource, Digital Twin Demystified from ARC Advisory Group, identifies data connectivity, collection, tracking volume and fidelity, and ensuring the quality of real-time data as “key challenges associated with using real-time and operational data” in digital twins.

A Good Fit

Skkynet’s software and services are well-positioned to provide the kind of data integration that digital twins require. Most of the data in an industrial system is available to an OPC client like the DataHub, which ensures robust connectivity. Virtually any other connection to or between digital twins, such as from legacy hardware or custom software, is possible using the DataHub’s open APIs.

Real-time data mirroring between DataHubs can handle the synchronization needed for tight correlation between the physical and virtual systems. The secure-by-design architecture of DHTP provides a proven way to connect twins across insecure networks or the Internet, even through a DMZ, to ensure the highest level of security for both the physical twin on the OT side, as well as the virtual twin on the IT side.
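
The general idea behind this kind of mirroring can be sketched in a few lines of Python. This is not DHTP or the DataHub’s actual protocol, just a conceptual illustration of the pattern: the plant (OT) side makes only an outbound connection and streams point changes as they happen, so no inbound firewall ports need to be opened on the OT network.

    import json, socket, threading, time

    HOST, PORT = "127.0.0.1", 9502   # hypothetical IT-side listening address

    def it_side():
        """IT side: accept one mirroring connection, keep a synced copy."""
        mirror = {}
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn, conn.makefile() as stream:
                for line in stream:
                    mirror.update(json.loads(line))   # e.g. {"tank.level": 4.2}
                    print("IT mirror now:", mirror)

    def ot_side():
        """OT side: connect outbound and push each point change at once."""
        with socket.create_connection((HOST, PORT)) as s:
            for update in [{"tank.level": 4.2}, {"pump.on": 1}]:
                s.sendall((json.dumps(update) + "\n").encode())

    threading.Thread(target=it_side, daemon=True).start()
    time.sleep(0.2)   # give the listener a moment to start
    ot_side()
    time.sleep(0.2)   # let the last update print before exiting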

By supporting the most popular industrial communications protocols, and through secure, real-time data mirroring, Skkynet software and services are often used to build fully integrated systems out of components from different vendors. A recent example is the TANAP project, in which DataHub software was used to integrate OPC A&E (Alarm and Event) data from ABB systems with systems from other suppliers, effectively creating a digital twin of the entire 1,800 km pipeline.

Digital twinning can be seen as one aspect of the whole area of digital transformation in industry. As companies move towards digitizing their operations, the ability to create a virtual twin of each component, machine, production line, or plant, and to connect that twin to their IT systems, will put better control of production into the hands of managers and executives, leading to greater efficiencies. The success of this undertaking, at every step of the way, depends on secure data integration among the digital twins.

Case Study: TEVA API Pharmaceuticals, Hungary

TEVA combines tunneling and aggregation to network OPC data through a firewall

Laszlo Simon is the Engineering Manager for the TEVA API plant in Debrecen, Hungary. He had a project that sounded simple enough: connect new control applications through several OPC stations to an existing SCADA network. The plant was already running large YOKOGAWA DCS and GE PLC control systems, connected to a number of distributed SCADA workstations. However, Mr. Simon did face a couple of interesting challenges in this project:

  • The OPC servers and SCADA systems were on different computers, separated by a company firewall. This made it extremely difficult to connect OPC over a network, because of the complexities of configuring DCOM and Windows security permissions.
  • Each SCADA system needed to access data from all of the new OPC server stations. This meant Mr. Simon needed a way to aggregate data from all the OPC stations into a single common data set (a rough sketch of this kind of aggregation follows the list).
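
Aggregation of this kind can be pictured as merging each station’s point set into one namespace, with every tag qualified by its station name so that identical tags cannot collide. A minimal sketch, with hypothetical station and tag names:

    # Point sets as read from three hypothetical OPC server stations
    station_data = {
        "OPC1": {"reactor.temp": 82.5, "reactor.pressure": 1.9},
        "OPC2": {"mixer.speed": 140.0},
        "OPC3": {"dryer.humidity": 12.7},
    }

    def aggregate(stations):
        """Merge all stations into a single common data set."""
        merged = {}
        for station, points in stations.items():
            for tag, value in points.items():
                merged[f"{station}.{tag}"] = value
        return merged

    common = aggregate(station_data)
    # Every SCADA workstation reads the same common set, e.g.:
    print(common["OPC2.mixer.speed"])   # 140.0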

After searching the web, Mr. Simon downloaded and installed the DataHub®. Very quickly he had connected the DataHub to his OPC servers and determined that he was reading live process data from TEVA’s new control systems. He was also able to easily set up the OPC tunneling link between the OPC server stations and the SCADA workstations, by simply installing another DataHub on the SCADA computer and configuring it to connect to the OPC server stations.

“I wanted to reduce and simplify the communication over the network because of our firewall. It was very easy with the DataHub,” said Mr. Simon after the system was up and running. Currently about 7,000 points are being transferred across the network in real time. “In the future, the additional integration of the existing or new OPC servers will be with the DataHub.”

Case Study: Plastics Manufacturer, Scandinavia

Leading plastics manufacturer uses live process data to optimize production, saving time and materials

One of Scandinavia’s leading plastics manufacturers has chosen the DataHub® from Cogent Real-Time Systems (a subsidiary of Skkynet) to extract data and interact with their state-of-the-art plastic manufacturing equipment. The firm can now access any desired process data for the purposes of engineering analysis and enterprise-level resource planning. The DataHub was the only additional piece of software required to realize substantial savings of time, materials, and production costs.

“The DataHub is exactly the kind of application we needed,” said the project coordinator. “Our system is extensive, and we need to visualize a lot of production parameters. We looked at other solutions but they were too expensive and more complicated.”

When the company installed new equipment recently, the necessary system integration grew very complex. Progress was slow. After almost a year they were facing a deadline and had little to show for their time and effort. The goal was to pull together data from 15 machinery units and feed it in real time into the company’s business processing systems, and, if possible, to enable plant engineers to view and work with the live data as well. When they found the DataHub they were pleased to learn that most of the work had already been done.

The first test was to connect the DataHub to an OPC server and put live data into ODBC databases, Excel spreadsheets, and web browsers, as well as to aggregate OPC servers and tunnel data across a network. The DataHub proved to be easy to use and reliable, and it performed remarkably well. The next step was to set up a test system.

The test system connected all of the OPC servers for the plant’s plastics production machines to a central DataHub. Another DataHub at a network node in the engineering department was connected to the central DataHub by a mirroring connection, tunneling data across the network. This second DataHub was then connected to an Excel spreadsheet to give a live display of the data in real time. When a machine starts up on the production line, the chart comes to life—cells spontaneously update values and bar charts spring into existence.

The engineering department was able to develop a custom TCP application that uses the DataHub C++ API to make a direct connection from the DataHub to their SQL Server database. Once connected, the database is updated within milliseconds of any change in the plastic-manufacturing machinery (a conceptual sketch of this pattern follows the list below). From the SQL Server database the data is accessed by the company’s ERP and accounting software. Using the DataHub in these ways allows the company to:

  • Aggregate the data from all machinery into one central location.
  • Distribute the data across the network to various users.
  • Do decimal conversions of the data as it passes through the DataHub.
  • Put selected subsets of data into Excel for engineers to view and run calculations on.
  • Feed values into a SQL Server database in the company’s IT and business processing system. The OPC points are read-only to ensure a clean separation between the management and production areas.
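
The plant’s application used the DataHub C++ API and SQL Server; as a rough, hypothetical stand-in, this Python sketch uses the standard library’s sqlite3 module to show the same pattern, in which every incoming change event is written to the database as soon as it arrives.

    import sqlite3, time

    db = sqlite3.connect(":memory:")   # stand-in for the SQL Server database
    db.execute("CREATE TABLE points (tag TEXT, value REAL, ts REAL)")

    def on_point_change(tag, value):
        """Called for every change event; the row is written immediately,
        so the database tracks the machinery within milliseconds."""
        db.execute("INSERT INTO points VALUES (?, ?, ?)",
                   (tag, value, time.time()))
        db.commit()

    # Simulated stream of change events from the production machinery
    for tag, value in [("extruder.temp", 201.3), ("extruder.temp", 201.8)]:
        on_point_change(tag, value)

    print(db.execute("SELECT tag, value FROM points").fetchall())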

“This system pays for itself,” said a company spokesman, “and we save money in many ways. We have seen substantial gains in productivity and performance because we can monitor our processes far more effectively. Our accounting and planning departments have, for the first time ever, an up-to-the-second record of actual production variables and statistics. At the same time, our engineering staff can use real-time data in their calculations, and feed the results directly back into the process.”

The DataHub also saved substantial programming costs. The time saved on development work alone has paid for the system many times over. With a single tool the project coordinator has met the various needs of both the engineers and company managers. “The software is easy to install and it works well,” he said. “It’s at the correct level for our needs.”

Case Study: City of Montreal, Canada

DataHub used for connectivity and integration on $10 billion project

Situated on an island in the St. Lawrence River, the City of Montreal in Quebec, Canada has been blessed with an abundant supply of water. Yet ensuring that clean, fresh water reaches the city’s millions of residents every day requires constant attention. In 2004, the City of Montreal embarked on a 20-year, 10 billion dollar project to upgrade the quality of drinking water production and distribution in the city. This initiative includes better metering, infrastructure repair, new purification systems, and plant upgrades. The goal is to improve efficiency throughout the system.

As part of this project, water resource engineers at the Charles J. Des Baillets plant’s head office were recently given the job of integrating the production data from all of the city’s seven pumping stations. Their task was to provide a reliable and secure way to bring key data from those satellite plants into a central control location for storage and analysis.

The data is available on SCADA systems at each pumping plant, accessed through OPC servers. However, networking this vital data proved to be a challenge. Networking OPC DA using DCOM was neither reliable nor secure, so the engineering team decided to use OPC tunneling. They tried several popular OPC tunneling products, and the only one that worked well was the Cogent DataHub®.

The data collection and redistribution architecture that the project planners had in mind was quite complex. Primarily, they needed to collect data from all of the remote stations in a highly secure way, and log it at their central control location. Neither the central client nor anyone else should be able to write back to the OPC servers. They also needed to send the collected data to a third location for their IT staff, and bridge to other OPC servers there. In addition, each pumping station needed to receive some of the data collected from the other pumping stations. And finally, some of the pumping stations were running fully redundant SCADA systems, so they needed redundancy built into the system at those locations.

“We started by connecting the Pierrefonds plant and the central location in Atwater for logging the data, with a second tunnel to the IT office for analytical use of the data,” said the project manager. “We had a few initial issues related to configuration and network addresses, and Cogent’s quick response was very helpful to resolve them. After this first experience with the DataHub, we were very enthusiastic to apply this solution to the rest of the plants in Montreal.”

As each location came online and the team configured its tunnel to the central office, they realized that they now had the necessary tool to share data securely between the satellite locations as well. On the DataHub at the central office they established a separate data domain for each plant and created a read-only tunnel to receive its data. Then at each plant they created a read-only tunnel from the local DataHub to the central DataHub to get the data from each of the other plants. This gave the operators at each plant a complete picture of what was going on throughout the system.

“To make intelligent decisions at a satellite plant, it is very helpful to know what’s happening across the city,” the project manager said. “Since all the data was there in the DataHub anyway, we decided to use it.”
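
One rough way to picture this topology (hypothetical plant names and values, not the DataHub’s actual configuration model): the central hub keeps one data domain per plant, and each plant receives a read-only copy of every domain but its own.

    # Central DataHub: one data domain per pumping station (illustrative)
    central = {
        "Pierrefonds":  {"pump1.flow": 310.0},
        "Atwater":      {"pump2.flow": 275.5},
        "DesBaillets":  {"pump3.flow": 298.2},
    }

    def read_only_view(plant):
        """What a plant's local DataHub receives: a copy of every other
        plant's domain. A copy cannot write back to the source."""
        return {name: dict(points)
                for name, points in central.items() if name != plant}

    view = read_only_view("Pierrefonds")
    print(sorted(view))   # ['Atwater', 'DesBaillets']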

With data logging and secure tunneling in place, the next feature to implement was redundancy. Several locations had completely redundant SCADA systems, each with its own OPC server. With help from Cogent, the team was able to establish a connection to the redundant OPC servers such that if one server failed for any reason, the DataHub would start receiving data from the second OPC server.
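
The failover behavior described here can be sketched roughly as follows. The server objects and read calls are hypothetical stand-ins, not the DataHub’s actual mechanism; the point is simply that a failed read on one server causes a silent switch to the other.

    import random

    class FlakyServer:
        """Stand-in for an OPC server that may stop responding."""
        def __init__(self, name, healthy=True):
            self.name, self.healthy = name, healthy

        def read(self, tag):
            if not self.healthy:
                raise ConnectionError(f"{self.name} not responding")
            return random.uniform(0, 100)   # simulated live value

    primary = FlakyServer("OPC-A", healthy=False)   # simulate a failure
    secondary = FlakyServer("OPC-B")

    def read_with_failover(tag, servers):
        """Try each redundant server in order; move on when one fails."""
        for server in servers:
            try:
                return server.read(tag), server.name
            except ConnectionError:
                continue
        raise ConnectionError("all redundant servers down")

    value, source = read_with_failover("pump.flow", [primary, secondary])
    print(f"pump.flow = {value:.1f} (from {source})")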

“The system has been running for months without any problems, logging the data we need to stay efficient,” said the project manager. “We are very pleased with the high quality of the DataHub, its flexibility to do what we need, and with Cogent’s excellent technical support at every point of the way. The data integration aspect of the City of Montreal’s water system upgrade project is meeting or exceeding its goals.”

Case Study: Citect (Schneider Electric), USA

Citect optimizes OPC-based system using the DataHub

A major battery manufacturing plant in the United States was recently faced with an interesting data integration challenge. Management needed access to data coming from a large number of different processes. Over 220 OPC-enabled field devices across the plant had to be connected to a single Citect MES system. The many OPC servers used for these connections are unique in that their data set is very dynamic. From one minute to the next, any of the 220 devices may be present or absent from the data set.

“Our challenge was to provide data from our dynamically changing OPC servers to a Citect system that is designed to work with a fixed data set,” said the company project leader. They decided to bring in a team from Citect to come up with a solution.

Citect, of Schneider Electric, is well known in the industrial process control world for their line of automation and control software solutions, particularly their MES systems. Dan Reynolds, the team leader for Citect, had heard about the DataHub® through his support department, and thought it might work. They configured the DataHub for OPC tunneling, to communicate across the network without the hassles of DCOM. And thanks to the DataHub’s unique approach to OPC tunneling, Dan found that it also solved the problem of providing a fixed data set.

“The DataHub mirrors data across the tunnel,” said Dan, “so the Citect system sees a constant data set. When a device goes offline, the tag remains in the DataHub. Just the quality changes from ‘Good’ to ‘Not Connected’.” Confident in their approach, the Citect team moved the testing from their location to the battery plant. But they soon found themselves faced with another challenge.
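
The behavior Dan describes can be pictured like this (a hypothetical sketch, not Citect’s or the DataHub’s actual data model): a mirrored tag is never deleted when its device drops off; only its quality field changes.

    class MirroredTag:
        """A tag as seen on the client side of the tunnel."""
        def __init__(self, name, value):
            self.name, self.value, self.quality = name, value, "Good"

    tags = {"cell7.voltage": MirroredTag("cell7.voltage", 3.92)}

    def on_device_offline(tag_name):
        """The tag stays in the set; only its quality changes."""
        tags[tag_name].quality = "Not Connected"

    on_device_offline("cell7.voltage")
    t = tags["cell7.voltage"]
    print(t.name, t.value, t.quality)   # the client still sees a constant set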

The production system is designed so that a field device can add or remove OPC items at any time. So, not only the OPC servers, but individual tags can suddenly appear or disappear from the system. When a new tag comes online, the server updates its tag count but doesn’t signal that a new value is available, because the OPC specification doesn’t require a server to announce when a new point is created. This looked like a show-stopper for the configuration team. They knew of no OPC product on the market that could deal with that kind of behavior. Continually rereading the data set was not possible, because new points might be added during the read. So Dan got in touch with Cogent (a subsidiary of Skkynet), and working together they came up with a plan.

The solution was twofold. First, the device behavior was modified to compact the tag add/delete cycle into a limited time window. Then Cogent wrote a DataHub script that monitors a few OPC server tags; when these tags change, a time-delayed function in the script re-reads the server’s data set. The scripted time delay allows all the new points to be added before the data set is re-read, so the DataHub discovers all of the new data as soon as it becomes available.
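
DataHub scripts are written in the DataHub’s own scripting language, so the following is only a hypothetical Python rendering of the same timing idea: each change to a monitored tag restarts a short timer, and the full data set is re-read once, only after the burst of additions has finished.

    import threading

    RESCAN_DELAY = 2.0   # seconds; long enough for a burst of adds to finish

    class DelayedRescan:
        def __init__(self, reread):
            self.reread = reread
            self.timer = None

        def on_monitor_tag_change(self, tag, value):
            """Each change restarts the timer, so only the last change
            in a burst actually triggers the re-read."""
            if self.timer:
                self.timer.cancel()
            self.timer = threading.Timer(RESCAN_DELAY, self.reread)
            self.timer.start()

    def reread_data_set():
        print("re-reading the server's full data set")

    rescan = DelayedRescan(reread_data_set)
    for i in range(5):   # burst of tag-count updates
        rescan.on_monitor_tag_change("server.tagCount", 220 + i)
    # about 2 seconds later, reread_data_set runs exactly once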

“We are pleased with the performance of the DataHub for this application,” said Dan Reynolds. “There is no way we could have done this project with any other OPC tunneling product, or combination of products.”

“The Skkynet software has become an integral part of our MES solution,” said the project leader. “Without the DataHub, we would not be getting reliable data. If we hadn’t had it, our MES integration project would probably have come to a halt.”

Case Study: Biomass Biotechnology Bio, Japan

Monitoring Nature’s Wonder Workers

Finally, someone has found a good use for pesky flies—let them eat manure! The BBB company (Biomass Biotechnology Bio) in the Chiba prefecture near Tokyo, Japan, has developed a technology that uses fly larvae to convert manure from pigs and other farm animals into organic fertilizer and high-protein fish food. And they are using the Cogent DataHub running on a cloud server to provide real-time monitoring of their production powerhouse—swarms of flies.

The process is quite simple. BBB keeps thousands of specially bred flies in captivity, collects their eggs, and sells them to local pork farmers. The farmers put the fly eggs on their pig manure, and when they hatch, the fly larvae feed off the manure. Enzymes in the larvae’s saliva break down the manure into rich, organic fertilizer, doing the job in one week that normally takes up to four months using conventional composting techniques. When the larvae are finished, they don’t need to be separated from the finished fertilizer—they crawl out by themselves, seeking a dry environment. At this point, before they can turn into flies, the larvae are collected, dried, and processed as fish food.

An English-language video of the whole BBB process, produced by the Japanese news agency NHK, is available for viewing on the Internet.

The benefits of producing fertilizer from waste material this way are substantial, but until recently costs have been high. The company plans to expand their services to large numbers of farms, and to do so they need an inexpensive, automated way to monitor their production environment. Unlike most of us who use window screens to keep flies out, BBB has special screened rooms to keep flies in. To ensure the flies stay healthy and lay large numbers of eggs, the air temperature and humidity in these rooms must be maintained at optimal levels, and monitored around the clock.

To automate the monitoring, Cogent and their partner, Nissin Systems Co., Ltd. of Kyoto, Japan, provided a real-time, cloud-based system using the Cogent DataHub® and WebView™. At the BBB facility they installed a Wi-Fi-enabled environmental sensor module from TOA Musendenki to measure the temperature and humidity, and connected it directly to a Cogent DataHub running on a cloud server. Using WebView, they then created a monitoring page to track these key environmental variables in the flies’ living quarters.

“Monitoring our system on the web is very convenient,” said Mr. Yamaguchi, President of BBB. “We have been able to reduce our costs significantly, which will be even more important as we expand our operation.”