AWS Outage Calls Attention to Hybrid Cloud

At the end of February, Amazon Web Services (AWS) slowed to a crawl for about four hours, causing a major loss of service for hundreds of thousands of websites in North America. Sites with videos, images, and data files stored on AWS servers suddenly lost much or all of their content, or shut down altogether.

After the initial weeping, moaning, and outrage died down, a lively discussion ensued among IT technicians, managers, and concerned citizens about how to deal with this kind of incident in the future. The comment section on a story at The Register gives a sample of the ideas put forward, and there is a clear consensus on a number of them. Most experts agree that the occasional service outage is one of the inherent risks of using the Internet and cloud services, and that if you need high reliability for your data, you’d better have some kind of redundant or backup solution.

There are normal, accepted ways of building redundancy into a data communications system, including IoT and cloud applications. One approach mentioned frequently is the “hybrid cloud”: a public and a private cloud running simultaneously. A public cloud is a service offered to anyone, typically by a company for paying customers, as AWS is. A private cloud is a service operated and maintained by an individual or company for its own internal use. To achieve redundancy during this past outage, a private cloud would have been up and running with a copy of all the company’s data and software, identical to AWS, but not publicly online. When AWS stopped serving data, the system would have switched automatically to the private cloud, and someone using the website would not even have noticed.
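In practice, the switch-over decision can be as simple as trying the public endpoint first and falling back to the replica. The Python sketch below illustrates only that decision; every name in it is invented for illustration and does not correspond to any real AWS or vendor API.

```python
# Illustrative failover sketch -- all names here are hypothetical,
# not part of any real AWS or cloud-vendor API.

def fetch_with_failover(sources):
    """Try each (name, fetch) pair in order; return the first success."""
    for name, fetch in sources:
        try:
            return name, fetch()
        except ConnectionError:
            continue  # this source is down; fall back to the next one
    raise RuntimeError("all data sources are unavailable")

def public_cloud():
    # Simulate the public cloud being down during an outage.
    raise ConnectionError("public cloud unreachable")

def private_cloud():
    # The in-house replica serves the same content.
    return {"video.mp4": b"..."}

source, data = fetch_with_failover([("public", public_cloud),
                                    ("private", private_cloud)])
print(source)  # private -- the replica answered while the public cloud was down
```

A production system would of course add health checks, retries, and continuous data synchronization between the two clouds; the point here is only the shape of the fallback decision.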

This is how it works in theory, but building and maintaining a hybrid cloud system that can perform this kind of redundant operation is no small task. Depending on the level of data and functional replication, and on the speed of error detection and switch-over, the hybrid site could cost as much as, or even more than, the cloud site. Companies considering such an option would need to do a cost/benefit analysis based on their specific circumstances.

For Industrial IoT applications, a hybrid cloud approach to redundancy may be useful. Although low-level process control systems should typically not be dependent on the Internet or cloud services, companies that use the IIoT for process monitoring, data collection, or high-level control applications may find it worthwhile to maintain a hybrid cloud.

Skkynet’s SkkyHub service lends itself particularly well to hybrid cloud solutions.  It is possible, and not very difficult, to run a replica system on an in-house server, using the DataHub. Although the DataHub is different from SkkyHub in some respects, for the primary task of data connectivity the two function in an equivalent way.  Readers interested in trying this out are encouraged to contact Cogent for technical tips to ensure a secure and robust implementation.

System Integrators Defend Their IIoT Readiness

A clear sign of a growing opportunity is when people start staking their claims.  Here’s a case in point.  A recent blog in AutomationWorld has caught the attention of system integrators, and from their comments it seems to have rubbed some of them the wrong way.  The blog, The IIoT Integrators Are Coming, by Senior Editor Stephanie Neil, claims that automation system integrators may lose out on IIoT opportunities if they don’t keep up with the technology, leaving the space open for non-industrial IoT companies from the IT world.

Several control system integrators, members of the Control System Integrators Association (CSIA), have responded saying that Neil and the people she quotes are mistaken. They explain the differences between consumer or business IoT and Industrial IoT, and point out that it is easier for a company that knows industrial automation to add IoT to its portfolio than for an IoT company to learn industrial process control. For example, in the counter-blog We Are Ready for IIoT, Jeff Miller of Avid Solutions makes the case that his company, at least, is ready.

If nothing else, this conversation provides a useful window into what these potentially key players in the Industrial IoT space are thinking.  On the one hand, some realize that IIoT can be a valuable service to offer their customers, and are gearing up for it.  Others are holding back, questioning the value, reluctant to test the waters, and wondering whether this isn’t just mainly hype that will evaporate in a year or two.  But, according to Neil, if they wait too long, someone else will swoop in and steal their lunch.  And that person or company may be completely outside the traditional world of industrial system integration.

Who is right?

Our take on this is simple. Both are right. First, anyone from the IT realm working in IoT needs to know that there is a real difference between regular IoT and Industrial IoT. An industrial user of the IoT will have special requirements, different from, and in many cases far beyond, what someone might need for a general business or consumer application. At the same time, system integrators must understand that the knowledge required for building an IoT application is highly specialized. It takes a deep understanding of TCP networking and of working with unstructured data, not to mention the critical issue of Internet security. Above all, we encourage system integrators to keep an open mind, and treat the IIoT as a new opportunity to better serve their customers.

As to the best approach to take, we see at least two: do it yourself, or partner with someone who provides good tools. We won’t stand in the way of the DIY’ers in the crowd, but for those who value tools, we have an easy and cost-effective way to implement the Industrial IoT that works. It does not require integrators to learn new protocols or build security models. It simply connects to in-plant systems and provides the remote data access that automation engineers expect: secure, bi-directional, and real-time, with no open firewalls, no VPNs, and no programming. And it has a revenue-share model for system integration companies that want to enjoy the financial benefits of the IIoT.

Case Study: Schneider Electric FZE, Dubai

Integrating access security and building management.

Schneider Electric specializes in energy management, with products and solutions to help consumers and companies get the most for their energy dollar. In a prestigious Dubai project recently, Schneider Electric FZE engineers used the Cogent DataHub® to integrate a building’s security system with its energy management system to provide state-of-the-art energy efficiency at substantial cost savings.

To implement the project, Schneider Electric’s BAS Field Supervisor, Pradeep Viswanathan and BAS Application Specialist, Duncan McChlery worked closely with Boyce Baine, Technical Support Engineer at Software Toolbox, Cogent’s sales and technical partner for North America, as well as Koshy Thomas, Project Manager at Al Hani Gulf Contracting. Together they implemented a solution in which the DataHub relays information from a Lenel OnGuard security system to Schneider Electric’s TAC Satchwell Sigma building management system.

The Lenel OnGuard security system monitors and controls building security equipment (access, intrusion detection, and closed-circuit TV) while the TAC Sigma BMS handles HVAC, energy management, lighting, elevators, electrical systems, fire alarms, emergency equipment, and other energy needs. With the data integration in place, the Lenel OnGuard system can, for example, read the badge ID of someone entering the building, and pass that information to the TAC Sigma BMS to automatically switch on the lights and air conditioning in his office. When he arrives there, the office is cooled and well-lit. Then, when he leaves for the day, the system shuts things down to save energy.

The integration of data for this project required an OPC connection to the Lenel system’s OPC server on the one hand, and a DDE connection to the TAC Satchwell Sigma building management system on the other. Since the DataHub abstracts the data, and converts it from one protocol to another, making the connection was simply a matter of configuring the DataHub to make an OPC client connection to the Lenel system, and a DDE client connection to the TAC system.
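Conceptually, this kind of bridging works because the middleware holds values in a neutral internal form and forwards each change to every attached protocol connection. The Python sketch below is a toy model of that idea; the Bridge and DDESink classes are invented for illustration and are not the DataHub’s actual API.

```python
# Toy model of protocol bridging via a shared, protocol-neutral data model.
# Class and method names are illustrative only.

class Bridge:
    """Hold tag values in a neutral form; forward changes to every sink."""
    def __init__(self):
        self._values = {}
        self._sinks = []

    def attach(self, sink):
        self._sinks.append(sink)

    def update(self, tag, value):      # called by the OPC client side
        self._values[tag] = value
        for sink in self._sinks:
            sink.poke(tag, value)      # delivered in the sink's own protocol

class DDESink:
    """Stand-in for a DDE client connection; 'poke' is DDE's write verb."""
    def __init__(self):
        self.received = {}

    def poke(self, tag, value):
        self.received[tag] = value

bridge = Bridge()
dde = DDESink()
bridge.attach(dde)
bridge.update("Badge.ID", 4217)        # a value arrives from the OPC side
print(dde.received)                    # {'Badge.ID': 4217}
```

The key design point is that neither side needs to know about the other’s protocol; each connection only talks to the neutral data model in the middle.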

“The project was straightforward to implement,” said Mr. Viswanathan. “With the excellent support from Software Toolbox and Cogent for the DataHub, and Al Hani Gulf Contracting and Lenel for their expertise with the Lenel OPC server, we were in very good hands. The system has been online for a few months now, and is working very well. This is where the world needs to go in this age of high energy prices. We not only save money for the customer, but protect the environment as well.”

Case Study: Plastics Manufacturer, Scandinavia

Leading plastics manufacturer uses live process data to optimize production, saving time and materials

One of Scandinavia’s leading plastics manufacturers has chosen the DataHub® from Cogent Real-Time Systems (a subsidiary of Skkynet) to extract data and interact with their state-of-the-art plastic manufacturing equipment. The firm can now access any desired process data for the purposes of engineering analysis and enterprise-level resource planning. The DataHub was the only additional piece of software required to realize substantial savings of time, materials, and production costs.

“The DataHub is exactly the kind of application we needed,” said the project coordinator. “Our system is extensive, and we need to visualize a lot of production parameters. We looked at other solutions but they were too expensive and more complicated.”

When the company installed new equipment recently, the necessary system integration grew very complex. Progress was slow. After almost a year they were facing a deadline and had little to show for their time and effort. The goal was to pull together data from 15 machinery units and feed it in real time into the company’s business processing systems, and, if possible, to enable plant engineers to view and work with the live data as well. When they found the DataHub they were pleased to learn that most of the work had already been done.

The first test was to connect the DataHub to an OPC server and put live data into ODBC databases, Excel spreadsheets, and web browsers, as well as to aggregate OPC servers and tunnel data across a network. The DataHub proved to be easy to use and reliable, and it performed remarkably well. The next step was to set up a test system.

The test system connected all of the OPC servers for the plant’s plastics production machines to a central DataHub. Another DataHub, at a network node in the engineering department, was connected to the central DataHub by a mirroring connection, tunnelling data across the network. This second DataHub was in turn connected to an Excel spreadsheet to give a live display of the data in real time. When a piece of equipment starts up on the production line, the chart comes to life—cells spontaneously update values and bar charts spring into existence.

The engineering department was able to develop a custom TCP application that uses the DataHub C++ API to make a direct connection from the DataHub to their SQL Server database. Once connected, the database is updated within milliseconds of any change in the plastic-manufacturing machinery. From the SQL Server database the data is accessed by the company’s ERP and accounting software. Using the DataHub in these ways allows the company to:

  • Aggregate the data from all machinery into one central location.
  • Distribute the data across the network to various users.
  • Do decimal conversions of the data as it passes through the DataHub.
  • Put selected subsets of data into Excel for engineers to view and run calculations on.
  • Feed values into a SQL Server database in the company’s IT and business processing system. The OPC points are read-only to ensure a clean separation between the management and production areas.

“This system pays for itself,” said a company spokesman, “and we save money in many ways. We have seen substantial gains in productivity and performance because we can monitor our processes far more effectively. Our accounting and planning departments have, for the first time ever, an up-to-the-second record of actual production variables and statistics. At the same time, our engineering staff can use real-time data in their calculations, and feed the results directly back into the process.”

The DataHub also saved substantial programming costs. The time alone saved on development work has paid for the system many times over. With a single tool the project coordinator has met the various needs of both the engineers and company managers. “The software is easy to install and it works well,” he said. “It’s at the correct level for our needs.”

Case Study: University of California, Berkeley, USA

DataHub is used to integrate data for distributed control of unmanned aerial vehicles

For the past several years, students and faculty at the Vehicle Dynamics Lab (VDL) of the University of California, Berkeley, have been developing a system of coordinated distributed control, communications, and vision-based control among a group of several unmanned aircraft. A single user can control the fleet of aircraft, and command it to carry out complex missions such as patrolling a border, following a highway, or visiting a specified location. Each airplane carries a video camera and an on-board computer, and communicates with the ground station and the other aircraft in the formation. The control algorithms are so sophisticated that the fleet can carry out certain missions completely autonomously—without any operator intervention.

The control system for each aircraft runs on a PC 104 computer with a QNX6 operating system. Control is divided into three kinds of processes: communication, image processing, and task control. All of these processes interact through the DataHub running in QNX. The DataHub® is a memory-resident, real-time database that allows multiple processes to share data on a publish-subscribe basis. For this application, each process writes its data to the DataHub, and subscribes to the data of each other process on a read-only basis. In this way, each process gains access to the data it needs from the other processes, while avoiding problems associated with multi-processing data management.
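The publish-subscribe model described here can be sketched in a few lines. The class below is a minimal illustration of the pattern, not the DataHub’s real interface: each process writes the points it owns, and the other processes receive change notifications through callbacks.

```python
# Minimal publish-subscribe store in the spirit of the model described
# above; names are illustrative, not the DataHub's actual API.

class PubSubStore:
    def __init__(self):
        self._data = {}   # point name -> current value
        self._subs = {}   # point name -> list of subscriber callbacks

    def write(self, point, value):
        self._data[point] = value
        for callback in self._subs.get(point, []):
            callback(point, value)    # push the change to each subscriber

    def subscribe(self, point, callback):
        self._subs.setdefault(point, []).append(callback)

    def read(self, point):
        return self._data.get(point)  # read-only access for non-owners

# E.g., a communication process publishes altitude; image processing subscribes.
store = PubSubStore()
received = []
store.subscribe("aircraft.altitude", lambda point, value: received.append(value))
store.write("aircraft.altitude", 1200)
print(received)  # [1200]
```

Because subscribers only ever read, and each point is written by one owner, the usual locking and ordering problems of shared state between processes are largely avoided.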

For example, the communication software comprises three separate processes: The Piccolo process controls the aircraft, the Payload process communicates with users on the ground, and the Orinoco process handles communications with the other aircraft. Needless to say, each of these three programs needs information from the other two, as well as from the video and task control packages. All of this data is transferred seamlessly through the DataHub.

“The DataHub has contributed a great deal to our software integration,” said Brandon Basso, one of the VDL team members. “Its ability to restrict write privileges for each shared variable to its owner process avoids many of the difficulties associated with multi-process management.”

For task control, there are two primary software packages: Waypoint controls visits to specified locations, while Orbit handles the orbiting “patrol” of a group of locations. These processes are monitored by a third, supervisory process called Switchboard. Beyond the coordination of these processes, the aircraft themselves must decide which plane will take on which task. The complex calculations needed for this decentralized task allocation are mediated through the DataHub.

Waypoint and Orbit use input from the vision control and vision process. Prior to takeoff, certain algorithms are applied to previously recorded videos, to create a visual profile of the area, which is maintained by the vision control. In the air, this data must be compared to what the plane is currently flying over. A camera on the wing of the plane feeds data to the vision process, which analyzes the content and generates meaningful information about objects on the ground, such as waypoints on a river or road. This live content, along with the stored visual profile in the vision control, is fed through the DataHub to Waypoint and Orbit.

According to the paper A Modular Software Infrastructure for Distributed Control of Collaborating UAVs, published by the University of California, Berkeley, which describes the project in detail, this work marks “a major milestone in UAV cooperation: decentralized task allocation for a dynamically changing mission, via onboard computation and direct aircraft-to-aircraft communication.” Skkynet is pleased that the DataHub has played an important role in the success of this endeavour.

Case Study: Citect (Schneider Electric), USA

Citect optimizes OPC-based system using the DataHub

A major battery manufacturing plant in the United States was recently faced with an interesting data integration challenge. Management needed access to data coming from a large number of different processes. Over 220 OPC-enabled field devices across the plant had to be connected to a single Citect MES system. The many OPC servers used for these connections are unique in that their data set is very dynamic. From one minute to the next any of the 220 devices may be present or absent in the data set.

“Our challenge was to provide data from our dynamically changing OPC servers to a Citect system that is designed to work with a fixed data set,” said the company project leader. They decided to bring in a team from Citect to come up with a solution.

Citect, of Schneider Electric, is well known in the industrial process control world for their line of automation and control software solutions, particularly their MES systems. Dan Reynolds, the team leader for Citect, had heard about the DataHub® through his support department, and thought it might work. They configured the DataHub for OPC tunneling, to communicate across the network without the hassles of DCOM. And, thanks to the DataHub’s unique approach to OPC tunnelling, Dan found that it also solved the problem of providing a fixed data set.


“The DataHub mirrors data across the tunnel,” said Dan, “so the Citect system sees a constant data set. When a device goes offline, the tag remains in the DataHub. Just the quality changes from ‘Good’ to ‘Not Connected’.” Confident in their approach, the Citect team moved the testing from their location to the battery plant. But they soon found themselves faced with another challenge.
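The behavior Dan describes can be modeled simply: a mirrored tag is never removed when its device disappears; only its quality field changes. The sketch below illustrates that idea with invented class and tag names, not Citect’s or the DataHub’s actual data structures.

```python
# Sketch of a fixed data set under dynamic devices: tags persist, and only
# their quality changes. All names here are illustrative.

class MirroredPoint:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.quality = "Not Connected"

class Mirror:
    def __init__(self):
        self.points = {}                 # tag name -> MirroredPoint

    def device_update(self, name, value):
        point = self.points.setdefault(name, MirroredPoint(name))
        point.value = value
        point.quality = "Good"

    def device_offline(self, name):
        self.points[name].quality = "Not Connected"  # tag stays in the set

mirror = Mirror()
mirror.device_update("Cell42.Voltage", 3.7)
mirror.device_offline("Cell42.Voltage")
print("Cell42.Voltage" in mirror.points)        # True -- the tag remains
print(mirror.points["Cell42.Voltage"].quality)  # Not Connected
```

From the client’s point of view the tag set never shrinks, which is exactly what a system designed around a fixed data set needs.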

The production system is designed so that a field device can add or remove OPC items at any time. So, not only the OPC servers, but individual tags can suddenly appear or disappear from the system. When a new tag comes online, the server updates its tag count, but doesn’t say that a new value is available, because the OPC specification doesn’t require a server to say when a new point is created. This looked like a show-stopper for the configuration team. They knew that there is no OPC product on the market that can deal with that kind of behavior. Continually rereading the data set was not possible, because new points may be added during the read. So Dan got in touch with Cogent (a subsidiary of Skkynet), and working together they came up with a plan.

The solution was two-fold. First, the device behavior was modified to compact the tag add/delete cycle into a limited time window. Then Cogent wrote a DataHub script that monitors a few OPC server tags; when these tags change, a time-delayed function in the script re-reads the server’s data set. The scripted delay allows all the new points to be added before the data set is re-read, so the DataHub discovers all of the new data as soon as it becomes available.
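The time-delayed re-read amounts to a debounce: each trigger-tag change restarts a short timer, so a burst of point additions collapses into a single re-read once the burst settles. The Python sketch below shows the pattern; the DataHub’s own scripting language differs, so treat this purely as an illustration.

```python
# Debounce sketch: restart a timer on every tag change, so only one
# re-read happens after a burst of changes. Illustrative only -- the
# DataHub's actual scripting API is different.
import threading

class DebouncedReread:
    def __init__(self, reread, delay=0.2):
        self._reread = reread
        self._delay = delay
        self._timer = None

    def on_tag_change(self, tag, value):
        if self._timer is not None:
            self._timer.cancel()           # restart the settle period
        self._timer = threading.Timer(self._delay, self._reread)
        self._timer.start()

reread_count = [0]
def reread_data_set():
    reread_count[0] += 1                   # stand-in for re-reading the server

debouncer = DebouncedReread(reread_data_set)
for i in range(10):                        # a burst of ten tag changes...
    debouncer.on_tag_change("TagCount", i)
debouncer._timer.join()                    # wait for the settle period to end
print(reread_count[0])  # 1 -- the burst collapsed into a single re-read
```

The delay is a tuning parameter: long enough to cover the compacted add/delete cycle, short enough that new points appear promptly.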

“We are pleased with the performance of the DataHub for this application,” said Dan Reynolds. “There is no way we could have done this project with any other OPC tunneling product, or combination of products.”

“The Skkynet software has become an integral part of our MES solution,” said the project leader. “Without the DataHub, we would not be getting reliable data. If we hadn’t had it, our MES integration project would probably have come to a halt.”