
Case Study: Siemens, Argentina

Multiple benefits from remote monitoring

In keeping with their Industrie 4.0 strategy, Siemens recently introduced an initiative they call Digitalization, which offers digital solutions “for more efficiency, sustainability, and security.” As part of this initiative, Siemens promotes the use of data-driven services to monitor power plants, helping to ensure a reliable energy supply. Out in the field, Siemens personnel are working to transform those ideas into real-world projects. Here is the story of one project that has improved power plant performance and reduced emissions, while at the same time reducing transportation costs and man-hours.

A few months ago, Alexis Tricco at Siemens Buenos Aires in Argentina undertook the office’s first digitalization project. In his role providing technical support for power plant generation, he and his team are responsible for supervising operations and introducing new technologies to cut costs and improve the reliability of the physical plant. For this project, Tricco was tasked with developing a secure and reliable way to collect data from control systems running at power plants located hundreds of kilometers from the Siemens office. The first phase was a pilot: connecting his WinCC OA SCADA system to a Siemens T3000 DCS running at a power plant about 100 kilometers from Buenos Aires. The live data would be used for supervisory control and for developing new predictive control strategies.

A significant challenge of the project was that two networks were involved, the control network and a multi-customer network, connected by an intermediary computer. “My idea was to bring all the process data onto my WinCC OA Server running on the customer network,” said Tricco. “To get this, I needed to replicate the data from the T3000 to the interface PC, and from there to the WinCC OA Server. This basic data access was the first stage of the project.”

[Figure: Siemens Argentina system diagram]

For the data communications protocol, Tricco chose OPC, because the T3000 has an OPC server and WinCC OA has an OPC client. However, since OPC DA does not network well, he decided to tunnel the OPC data over TCP, using a company VPN. After reviewing the OPC tunnelling software that could meet his needs, he chose the Cogent DataHub.

“I needed to communicate over different networks, with end points that could convert between TCP and OPC, acting as server and client simultaneously,” he said. “The DataHub has an OPC server on one side and an OPC client on the other side, which is exactly what I needed. The other software I looked at would have required two licenses for each PC. I had to think of the costs.

“What’s more, the DataHub is user-friendly, not complicated to figure out. I just read the manual one time and got it working in less than a day. We did some tests, and when everything was working we presented the idea to company management for their feedback.”

The pilot was successful, and management decided to implement the solution. Tricco can now go online, collect OPC data from the plant’s T3000 DCS, and perform analysis in real time. The system is connected to the WinCC OA server at the main office complex in Buenos Aires, in the control room used for monitoring remote locations. Like Tricco, company engineers can monitor the performance of each of the power plant’s gas turbines, and use the data to optimize combustion and control emissions to meet government regulatory standards. There is no need to go on site.

“Until now, to optimize combustion at a client location site someone had to drive or fly to the site, at significant cost and loss of man-hours,” said Tricco. “Now, we can do it all remotely. In fact, just sitting at home I can connect to our VPN and customize the process in a couple of hours. Getting data from the customer, we can choose which equipment to monitor in which part of the plant, and whether or not to optimize its performance.”

This initial implementation clearly demonstrates the practical value of digitalization for all parties involved. The customer is pleased with the solution, knowing their plants are operating at the highest possible capacity while actually reducing emissions. Regulatory agencies laud the increased compliance. And, along with a new revenue stream from offering this service, Siemens builds a stronger relationship with the customer. Plans are currently underway to roll out the solution to two more plants immediately, and then expand the program further afield.

Is MQTT the Answer for IIoT?

Part 8 of Data Communication for Industrial IoT

MQTT, originally MQ Telemetry Transport, is a publish/subscribe messaging protocol that was created for resource-constrained devices on low-bandwidth networks. It is being actively promoted as an IoT protocol because it has a small footprint, is reasonably simple to use, and features a “push” architecture.

MQTT works by allowing data sources such as hardware devices to connect to a server called a “broker” and publish their data to it under named topics. Any device or program that wants the data can subscribe to those topics. Programs can act as publishers and subscribers simultaneously. The broker does not examine the data payload itself; it simply passes each message from its publisher to all subscribers on the topic.
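To make the mechanics concrete, here is a minimal sketch of an MQTT client in Python, written against the Eclipse paho-mqtt 1.x API. The broker address and topic name are placeholders; any standard broker (Mosquitto, for example) would behave the same way. Note that a single client can publish and subscribe over one connection:

    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # Register interest in a topic, then publish to the same topic.
        # One client can be publisher and subscriber at the same time.
        client.subscribe("plant/turbine1/temperature")
        client.publish("plant/turbine1/temperature", "23.5")

    def on_message(client, userdata, msg):
        # The broker delivers the payload as opaque bytes; it never
        # inspects or interprets the content.
        print(msg.topic, msg.payload)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.com", 1883)  # placeholder broker address
    client.loop_forever()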

The publish/subscribe approach has advantages for general IoT applications. “Push” architecture is inherently more secure because devices make only outbound connections, so no inbound firewall ports need to be opened, avoiding the classic client-server weakness. And by using a central broker, it is possible to establish many-to-many connections, allowing multiple devices to serve multiple clients. At first glance, MQTT seems to solve the communication and security problems I have identified in previous posts.

Despite these architectural advantages, though, MQTT has three important drawbacks that raise questions about its suitability for many IIoT systems and scenarios.

MQTT is a messaging protocol, not a data protocol

MQTT is a messaging protocol, not a data communications protocol. It acts as a data transport layer, similar to TCP, but it does not specify a particular format for the data payload. The data format is determined by each client that connects, which means there is no interoperability between applications. For example, if data publisher A and subscriber B have not agreed on their data format in advance, they are unlikely to be able to communicate. They might exchange messages via MQTT, but they will have no clue what those messages mean.

Imagine an industrial device that talks MQTT, say a chip on a thermometer. Now suppose you have an HMI client that supports MQTT, and you want to display the data from the thermometer. You should be able to connect them, right? In reality, you probably can’t. MQTT is not OPC or some other industrial protocol that has invested heavily in interoperability. MQTT is explicitly not interoperable: it specifies that each client is free to use whatever payload format it wants.

How can you make it work? You must either translate payload formats for every device and client you connect, or source all of your devices, programs, HMIs, and so on from a single vendor, which quickly leads to vendor lock-in.
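As a sketch of the first option, here is what a translation shim might look like in Python with paho-mqtt. The topic names and payload formats are assumptions for illustration: the device publishes bare CSV, the HMI expects JSON, and a small bridge process does the conversion that neither the broker nor the clients can do for themselves.

    import json
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe("device/thermometer/raw")

    def on_message(client, userdata, msg):
        # Device payload is e.g. b"23.5,C" -- only this bridge knows that.
        value, unit = msg.payload.decode().split(",")
        translated = json.dumps({"value": float(value), "unit": unit})
        client.publish("hmi/thermometer", translated)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.com", 1883)  # placeholder broker address
    client.loop_forever()

Every new device format means another shim like this one, which is exactly the maintenance burden described above.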

The broker cannot perform intelligent routing

MQTT brokers are designed to be agnostic to message content. This design choice can cause problems for industrial applications communicating over the IoT. Here are a few reasons why:

1) The broker cannot route intelligently based on message content. It simply passes along every message it receives. Even if a value has not changed, the message gets sent. There is no damping mechanism, so values can “ring” back and forth between clients, wasting system resources and bandwidth (a publisher-side dead-band filter, sketched after this list, is the usual workaround).

2) The broker cannot distinguish messages carrying new values from those repeating previously transmitted ones, so it cannot maintain consistency selectively. The only option is to send all information to every client, consuming extra bandwidth in the process.

3) There is no discovery function, because the broker is unaware of the data it is holding. A client cannot simply browse the data set on the broker when it connects. Rather, it needs to be given the list of topics in advance, by whoever configured the broker or the data publisher. This duplicates configuration in every client. In small systems this may not be a problem, but it scales very poorly.

4) Clients cannot be told when data items become invalid. In a production system a client needs to know when a source of data has been disconnected, whether through network failure or equipment failure. An MQTT broker does not have enough knowledge to do this: it would have to synthesize messages, as if they had come from the disconnected client, indicating that the data in certain topics is no longer trustworthy. Since the broker does not know the message format, it cannot know what to put in them. (MQTT’s Last Will feature lets a client pre-register a single fixed message for the broker to publish on unexpected disconnect, but it cannot generate per-topic invalidation in the publisher’s own payload format.) For this reason alone, MQTT is a questionable choice in a production environment.

5) There is no opportunity to run scripts or otherwise manipulate the data in real time to perform consolidation, interpretation, unit conversion, etc. Quite simply, if you don’t know the data format you cannot process it intelligently.
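Because the broker cannot damp traffic, any damping has to happen in each publisher. Here is a minimal Python sketch of the dead-band filter mentioned in point 1, with illustrative names throughout: publish only when a value has moved by more than a threshold, and silently drop the rest.

    last_sent = {}

    def publish_if_changed(client, topic, value, deadband=0.1):
        # Send only when the value moves beyond the dead-band; the broker
        # itself would forward every message blindly, changed or not.
        previous = last_sent.get(topic)
        if previous is None or abs(value - previous) > deadband:
            client.publish(topic, str(value))
            last_sent[topic] = value

Note that this logic must be duplicated, and kept consistent, in every publisher on the system; a content-aware middleman could do it once, centrally.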

No acceptable quality of service

MQTT defines three levels of quality of service: QoS 0 (at most once delivery), QoS 1 (at least once, with possible duplicates), and QoS 2 (exactly once, at the highest overhead). None of these is right for the IIoT. This is an important topic, and one that I have gone into in depth in a previous post (see Which Quality of Service is Right for IIoT?). MQTT might work for small-scale prototyping, but its QoS failure modes make it impractical at industrial scale.
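For reference, the QoS level is chosen per call. A sketch using the paho-mqtt client from the earlier example; the topic and payload are placeholders:

    client.publish("plant/alarm", "HIGH_TEMP", qos=0)  # at most once: may be lost
    client.publish("plant/alarm", "HIGH_TEMP", qos=1)  # at least once: may duplicate
    client.publish("plant/alarm", "HIGH_TEMP", qos=2)  # exactly once: slowest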

In summary, although the MQTT messaging protocol is attracting interest for IoT applications, it is not the best solution for Industrial IoT.

Continue reading, or go back to Table of Contents

Is UDP the Answer for IIoT?

Part 7 of Data Communication for Industrial IoT

UDP is an alternative to TCP.  So, the question comes down to this: Which is more suitable for Industrial IoT applications: UDP or TCP?  To answer that, we’ll take a quick look at the differences between UDP and TCP.

UDP provides best-effort sending and receiving of data between two hosts in a connectionless manner.  It is a lightweight protocol with no end-to-end connection and no congestion control, and its data packets might arrive out of sequence, duplicated, or not at all.  Nevertheless, UDP is often used for VoIP and consumer applications like streaming media and multi-player video games, where a packet loss here or there is not particularly noticeable to the human eye or ear.

TCP, in contrast, provides connection management between two host entities by establishing a reliable data path between them.  It tracks all data packets and has buffering provisions to ensure that all data arrives, and in the correct sequence.  This added functionality makes TCP a little slower than UDP, but with plenty of speed for most industrial applications.  Witness the popularity of Modbus TCP, for example.
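The contrast shows up directly at the socket level. Here is a sketch in Python using only the standard library; the addresses are placeholders, and a matching receiver is assumed to be listening:

    import socket

    # UDP: connectionless, fire-and-forget. The datagram may arrive out of
    # order, duplicated, or not at all, and the sender is never told.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"temp=23.5", ("192.0.2.10", 5005))

    # TCP: a connection is established first, and the stack retransmits and
    # reorders packets so the application sees a complete, in-order stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.10", 5006))
    tcp.sendall(b"temp=23.5")
    tcp.close()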

Industrial control: higher priorities than speed

In real-time systems, including most industrial control networks, sequence, accuracy, and completeness of messages take a higher priority than speed.  If an alarm sounds, of course you want to receive the information as quickly as possible, but it is more important to receive the correct alarm for the correct problem, and to always receive the alarm.  Missing one alarm in 100 could mean the difference between shutting off a valve and shutting down the plant.

Industrial IoT is not the same as a low-level control system, but the principle still applies.  Speed is important, but getting the correct information in the correct time sequence is vital.  TCP can provide that quality of service, and is fast enough for virtually all IIoT applications.

Continue reading, or go back to Table of Contents

Remote Control without a Direct Connection

Part 5 of Data Communication for Industrial IoT

As discussed previously, the idea of using a cloud service as an intermediary for data resolves the problems of securing the device and securing the network.  If both the device and the user make outbound connections to a secure cloud server, there is no need to open ports on firewalls, and no need for a VPN. But this approach brings up two important questions for anyone interested in remote control:

  1. Is it fast enough?
  2. Does it still permit a remote user to control his device?

The answer to the first question is fairly simple.  It’s fast enough if the choice of communication technology is fast enough.  Many cloud services treat IoT communication as a data storage problem, where the device populates a database and then the client consults the contents of the database to populate web dashboards.  The communication model is typically a web service over HTTP(S).  Data transmission and retrieval both essentially poll the database.

The Price of Polling

Polling introduces an inevitable trade-off between server resource usage and polling rate: the polling interval must include a reasonable delay to avoid overloading the cloud server or the user’s network.  Polling does two things: it introduces latency, a gap in time between an event occurring on the device and the user receiving notification of it, and it consumes network bandwidth in proportion to the number of data items being handled.  Remote control of the device is still possible through polling if you are willing to pay the latency and bandwidth penalty of having the device also poll the cloud.  This might be fine for a device with 4 data values, but it scales exceptionally poorly for an industrial device with hundreds of data items, or for an entire plant with tens of thousands of data items.
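A sketch of the polling pattern in Python makes the trade-off visible; the endpoint URL is hypothetical, and printing stands in for updating a dashboard:

    import time
    import requests  # third-party HTTP client

    API_URL = "https://cloud.example.com/api/device/42/points"  # hypothetical
    POLL_INTERVAL = 5.0  # seconds; shorten it and server load rises in proportion

    while True:
        response = requests.get(API_URL)  # one full round trip per cycle,
        print(response.json())            # whether or not anything changed
        time.sleep(POLL_INTERVAL)         # adds up to POLL_INTERVAL of latency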

Publish/Subscribe Efficiency

By contrast, some protocols implement a publish/subscribe mechanism where the device and user both inform the cloud server that they have an interest in a particular data set.  When the data changes, both the device and user are informed without delay.  If no data changes, no network traffic is generated.  So, if the device updates a data value, the user gets a notification.  If the user changes a data value the device gets a notification.  Consequently, you have bi-directional communication with the device without requiring a direct connection to it.

This kind of publish/subscribe protocol can support bidirectional communication with latencies as low as a few milliseconds over the background network latency.  On a reasonably fast network or Internet connection, this is faster than human reaction time.  Thus, the publish/subscribe approach has the potential to support remote control without a direct connection.
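The essence of the pattern, stripped of any particular protocol, is the observer model: subscribers register a callback once, and are notified only when a value actually changes. A self-contained Python sketch, with illustrative names:

    class DataPoint:
        def __init__(self):
            self.value = None
            self.subscribers = []

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def update(self, value):
            if value != self.value:      # no change means no traffic at all
                self.value = value
                for notify in self.subscribers:
                    notify(value)

    valve = DataPoint()
    valve.subscribe(lambda v: print("user sees:", v))
    valve.update("open")   # change: one notification goes out
    valve.update("open")   # same value: nothing is sent

A cloud service applying this same rule across the network is what keeps both latency and bandwidth use low.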

Continue reading, or go back to Table of Contents

DoublePulsar – Worse Than WannaCry

In a world still reeling from the recent WannaCry attacks, who wants to hear about something even worse?  Nobody, really.  And yet, according to a recent article in the New York Times, A Cyberattack ‘the World Isn’t Ready For’, the worst may be yet to come, and we’d better be prepared.

Reporting on conversations with security expert Mr. Ben-Oni of IDT Corporation in Newark, NJ, the Times said that thousands of systems worldwide have been infected with a cyber weapon that was stolen from the NSA in the same leak that enabled WannaCry.  The difference is that this second cyber weapon, DoublePulsar, can enter a system without being detected by current anti-virus software.  It then inserts diabolical tools into the very kernel of the operating system, leaving an open “back door” through which hackers can do whatever they want with the computer, such as tracking activity or stealing user credentials.

“The world is burning about WannaCry, but this is a nuclear bomb compared to WannaCry,” Ben-Oni said. “This is different. It’s a lot worse. It steals credentials. You can’t catch it, and it’s happening right under our noses.”

The concern is that DoublePulsar can remain hidden, providing a platform from which hackers can launch attacks at any time.  It may already be running on systems in hospitals, utility companies, power infrastructure, transportation networks, and more.  Ben-Oni had secured IDT’s system with three full sets of firewalls, antivirus software, and intrusion detection systems.  And still the company was successfully attacked, through the home modem of a contractor.

Closing the Door on DoublePulsar

Severity of the threat aside, this scenario points out once again the inherent weakness of relying on a VPN to secure an Industrial IoT system.  Had that contractor been connecting to a power plant, an oil pipeline, or a manufacturing plant over a VPN, it is likely that DoublePulsar could have installed itself throughout the system.  As we have explained in our white paper Access Your Data, Not Your Network, this is because a VPN expands the plant’s security perimeter to include any outside user who accesses it.

This threat of attack underscores the importance of the secure-by-design architecture that Skkynet’s software and services embody.  By keeping all firewalls closed, a cyber weapon like DoublePulsar cannot penetrate an industrial system, even if it should happen to infect a contractor or employee.  SkkyHub provides this kind of secure remote access to data from industrial systems, without using a VPN.

Top 10 IoT Technology Challenges for 2017 and 2018

Gartner, Inc., the IT research firm based in Stamford, Connecticut, recently published a forecast of the top ten IoT technology challenges for the coming two years.  The list covers a lot of ground: hardware issues like optimizing device-level processors and network performance, software considerations such as developing analytics and IoT operating systems, and abstract concerns like maintaining standards, ecosystems, and security.

“The IoT demands an extensive range of new technologies and skills that many organizations have yet to master,” said Nick Jones, Gartner vice president analyst. “A recurring theme in the IoT space is the immaturity of technologies and services and of the vendors providing them.”

Heading the list of needed expertise is security.  “Experienced IoT security specialists are scarce, and security solutions are currently fragmented and involve multiple vendors,” said Mr. Jones. “New threats will emerge through 2021 as hackers find new ways to attack IoT devices and protocols, so long-lived ‘things’ may need updatable hardware and software to adapt during their life span.”

To anyone considering the IoT, and particularly the Industrial IoT (IIoT) or Industrie 4.0, this should be a wake-up call.  As the recent power-grid hack in Ukraine shows, old-school approaches like VPNs will not be sufficient when an industrial system is exposed to the Internet.  In the IoT environment, Skkynet’s secure-by-design approach not only addresses the security issues that many are aware of today, but is also positioned to meet future challenges.

Having taken security into consideration, there are other items on the list that we see as significant challenges, and for which we provide solutions.  Among these are:

  • IoT Device Management – Each device needs some way to manage software updates, do crash analysis and reporting, implement security, and more. This in turn needs some kind of bidirectional data flow such as provided by SkkyHub, along with a management system capable of working with huge numbers of devices.
  • Low-Power Network Support – Range, power, and bandwidth constraints limit IoT networks.  The data-centric architecture of SkkyHub and the Skkynet ETK ensures the most efficient use of available resources.
  • IoT Processors and Operating Systems – The tiny devices that will make up most of the IoT demand specialized hardware and software that combine low power consumption, strong security, a small footprint, and real-time response.  The Skkynet ETK was designed specifically for this kind of system, and can be modified to meet the requirements of virtually any operating system.
  • Event-Stream Processing – As data flows through the system, some IoT applications may need to process and/or analyze it in real time.  This ability, combined with edge processing in which some data aggregation or analysis might take place on the device itself, can enhance the value of an IoT system with little added cost.  Skkynet’s unique architecture provides this kind of capability as well.

According to Gartner, and in our experience, these are some of the technical hurdles facing the designers and implementers of the IoT for the coming years.  As IoT technology continues to advance and mature, we can expect other challenges to appear, and we look forward to meeting those as well.