Industrial Product Servitization Via the IIoT

Now there’s a ten-dollar word for you: “servitization.” It has emerged from the trend of industrialized societies to move away from manufacturing-based economies towards service-based economies. Applying this trend to products, the term “servitization” was popularized by Tim Baines at Aston Business School, who sees a “product as a platform for delivering services.” IBM shifts its focus from selling computers to selling business services. Rolls Royce sells propulsion instead of jet engines. Alstom ties its railroad maintenance contracts not to reduced equipment failures, but to fewer “lost customer hours.” These are just a few examples of servitization—a transition from selling products to selling services.

In a recent article, Servitization for Industrial Products, Ralph Rio at ARC Advisory Group shows how the trend of servitization is now impacting the factory floor itself. As production machinery grows increasingly sophisticated, plant managers find their staff less able to maintain and repair it by themselves. They need more services from vendors. Machine builders and OEMs are providing more training, more extensive maintenance contracts, and better condition monitoring of the equipment they supply. “Services have become an inseparable component of the product,” Rio says.


The benefits are significant. Predictive maintenance offered as a service means reduced stoppages due to equipment failure, and fewer but more efficient service calls when problems do arise. A growing trend is to provide condition monitoring services, which guide operators to run their machinery more effectively, increasing the lifespan of the equipment and improving output and product quality.

To be most effective, condition monitoring needs to run 24/7 in real time, ideally via a connection to the equipment vendor or supplier. Thus, the Industrial IoT is the logical choice for data communication. “To implement servitization, suppliers will need to adopt Industrial IoT for condition monitoring,” Rio predicts.

Two-way street

As we see it, this level of service works best as a two-way street. Data related to the condition of the machine flows to the supplier, while guidance and adjustments coming from the supplier can flow back to the plant staff and equipment. This kind of feedback is invaluable for optimizing machine performance. A one-way IoT model that simply collects data for off-line analysis may not be adequate for many use cases. Although technically more sophisticated, bidirectional data flow is useful in many condition monitoring scenarios, and has always been an option for Skkynet customers.

If the lessons of the past few decades are any indicator, the servitization trend will continue to grow, both among industrialized and emerging nations. And the Industrial IoT will almost certainly play an important role in providing data communications. As long as those communications are robust and secure, we can expect to see more and more IoT-based industrial product servitization, even though that term itself may never become a household word.

Digital Transformation in Wonderware and AVEVA

This one is local.  Although our DataHub software is running in pretty much every industrialized country in the world, and our SkkyHub service connects plants and offices across nations and continents, next week we will be travelling just down the street to participate in the Wonderware Canada East Knowledge Transfer Event, right here in Mississauga, Ontario.

Skkynet, Cogent (a Skkynet subsidiary), and the DataHub products have a long history with Wonderware.  The first large-scale implementation of DataHub technology, which ran for more than 20 years, was at a chocolate manufacturing plant in Toronto.  Initially tasked with providing a fast and reliable connection between Wonderware InTouch running in Windows and QNX-based supervisory control systems, Cogent introduced the real-time middleware architecture that is the functional precursor of DataHub, SkkyHub, ETK and DHTP technology.

Since that time, Wonderware has been acquired by Schneider Electric, and earlier this year Schneider Electric’s industrial software business merged with the AVEVA Group, one of the world’s largest providers of engineering and industrial software. One of the primary goals of the merger was to “accelerate how capital-intensive industries achieve end-to-end digital transformation.”

In fact, the theme of next week’s Knowledge Transfer Event is “Increase Your Competitive Edge through Digital Transformation.”  Put simply, digital transformation is how the Industrial IoT and related digital technologies are currently changing the industrial landscape.  AVEVA’s position is that “understanding the technology and driving forces behind digital transformation is the key to mastering the digital future of industry.”

As an AVEVA partner, with DataHub products listed on the AVEVA Digital Exchange, Skkynet has been a strong supporter and proponent of digital transformation.  Our participation in this upcoming event is focused on educating Wonderware users, distributors and partners on how Skkynet’s DataHub technology can meet the needs for secure streaming of the industrial data involved in digital transformation.

After more than two decades, Skkynet continues to build a relationship with Wonderware that started in real-time industrial data communication, and is now evolving into digital transformation.  What exactly will that look like?  If you happen to be in Mississauga next week, feel free to stop by the event to meet us and find out.

When Edge Computing Makes Sense

As the concept of cloud computing becomes more familiar to industrial automation engineers and system integrators, the discussion has moved from “Should I use it?” to “When should I use it?”  In a recent blog, “Edge or Cloud Analytics?”, Michael Guilfoyle at ARC Advisory Group looks at the business case of cloud computing for industrial applications and compares it to edge computing.  It comes as no surprise that in many instances edge computing makes more sense.

So, what exactly is edge computing?  Generally speaking, it is the processing power of the “things” in the Internet of Things (IoT).  It has become an economically attractive complement for the cloud in IoT, thanks to rapid cost decreases for small-scale processors.  And edge computing has additional benefits for Industrial IoT (IIoT) because it means that data can be processed closer to its source.

Six Factors Favoring Edge Computing

Guilfoyle lists six factors that typically favor edge computing:

  • Connectivity: Some industrial systems are located in environments that make it difficult to maintain the regular connections necessary to sustain cloud computing.
  • Immediacy: For any mission-critical system, the closer you can get to real-time decision-making, the better. Running right on the device itself, an edge-processing system can respond in a few milliseconds, compared to a cloud system, which would take at least 100 milliseconds, and often longer.
  • Volume: Industrial systems churn out enormous volumes of data, very little of which is of much interest. Edge computing can monitor the data and filter out what is irrelevant. This reduces bandwidth and frees up cloud-computing resources.
  • Cost: Related to volume, feeding large quantities of raw data to the cloud for processing is not cost effective. It is more economical to at least filter the data, or better still process it locally and send the relevant results to the cloud.
  • Privacy: Company policy or government regulations may prevent connecting process data directly to the cloud.
  • Security: Gateway hardware or software at the edge can be used to help control inbound access to the plant. Skkynet’s DHTP protocol, for example, supports outbound-only connections, keeping all firewall ports closed and eliminating the need for VPNs.
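The Volume and Cost factors above come down to filtering at the edge. As a minimal sketch (not based on any particular product), a simple deadband filter forwards a reading to the cloud only when it has changed meaningfully since the last value sent:

```python
# Hypothetical edge-side deadband filter: forward a reading only when it
# differs from the last forwarded value by more than a threshold, cutting
# the volume of data pushed to the cloud.

def make_deadband_filter(threshold):
    last_sent = {}  # last forwarded value, per tag

    def should_send(tag, value):
        prev = last_sent.get(tag)
        if prev is None or abs(value - prev) > threshold:
            last_sent[tag] = value
            return True
        return False

    return should_send

should_send = make_deadband_filter(threshold=0.5)
readings = [("temp", 20.0), ("temp", 20.1), ("temp", 20.9), ("temp", 21.0)]
sent = [(tag, v) for tag, v in readings if should_send(tag, v)]
# Only the first reading and the one that moved more than 0.5 are forwarded.
```

In practice the filtering rule would be tuned per signal, but even this simple form shows how raw data volume shrinks before it ever crosses the network.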
Data Abstraction – A Seventh Factor

In addition to these six factors, we would add another important contribution that edge processing can make towards enhancing the value of cloud computing: data abstraction, the ability to generalize data protocols.  The DHTP protocol, in addition to supporting secure connections, also supports data abstraction.  Skkynet’s edge-processing tools, the ETK and DataHub, can convert data from multiple connected protocols into one universal format consisting of name, value, timestamp and quality.  Using DHTP, data abstracted in this form can be transported with minimal overhead across a TCP connection and converted back into its previous protocol, or other protocols, upon its arrival.
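As an illustration of this universal format (a sketch, not Skkynet's actual implementation), each reading can be reduced to the four fields named above:

```python
# Illustrative model of the abstracted data-point format: whatever protocol
# a reading arrived on, it is reduced to name, value, timestamp, quality.
# The field and function names here are hypothetical.

from dataclasses import dataclass
import time

@dataclass
class DataPoint:
    name: str         # e.g. "plant1.line2.temperature"
    value: float
    timestamp: float  # seconds since the epoch
    quality: str      # e.g. "Good", "Bad", "Uncertain"

def from_raw_reading(name, value, quality="Good"):
    """Wrap a raw protocol reading in the abstracted format."""
    return DataPoint(name=name, value=value,
                     timestamp=time.time(), quality=quality)

point = from_raw_reading("plant1.line2.temperature", 72.4)
```

Once every source is normalized this way, converting back out to another protocol on arrival becomes a simple mapping from these four fields.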

Data abstraction solves one of the problems often associated with the Industrial IoT—the wide range of incompatible protocols.  To get all the IIoT devices talking to each other, they need a common language.  Data abstraction implemented at the edge provides a way for each device to share its data with the cloud, and to receive inputs from other devices.

For all of these reasons—connectivity, immediacy, volume, cost, privacy, security, and data abstraction—edge computing makes a lot of sense for IIoT implementations, as it allows data to be processed close to where it is needed, providing the most value at the least cost.

Pairing OPC UA with a Good IIoT Protocol

A leading online publisher of automation-related content recently ran an article on the value of pairing OPC UA with a good IIoT protocol like DHTP. The article discusses how OPC UA was initially expected to serve as an IIoT protocol, but more recently the trend seems to be towards using OPC UA at the plant level only. Other protocols, such as MQTT and AMQP, are being offered as candidates for connecting outside the plant, but they are not ideally suited to the IIoT. The article explains why, and introduces 9 criteria for good IIoT data communication.

What Makes an Ideal Protocol for IIoT?

If you want to ship goods, you need infrastructure.  Trucks, trains, ships, and planes rely on highways, tracks, ports, and airports.  In a similar way, a key element of the Industrial IoT (IIoT) is its infrastructure: the data protocol.  Just as there are many transportation modes to choose from (some better than others), there are a number of IIoT protocols on offer, and they are not all the same.

Since the IIoT is still quite new, it has been an ongoing question as to what makes an ideal IIoT protocol.  With limited experience in this new sphere, many early adopters have looked to existing protocols.  For example, companies are currently using or considering the MQTT or AMQP messaging protocols, the REST web services protocol, or the OPC UA industrial protocol.  Each of these works fine in its own application space, and each seems like it could work as an IIoT protocol.  But are any of these really suited to the task?  Or is there something better out there?

9 Criteria for an Ideal Protocol

To answer that question, we did a comparison.  We distilled over 20 years of hands-on experience in industrial data protocols and TCP networking into 9 criteria for what makes an ideal protocol for IIoT.  The results are summarized in a new white paper, IIoT Protocol Comparison.

These 9 criteria cover all of the essential areas of high-quality industrial data communication, like real-time performance and interoperability.  They also cover the broader arena of the Internet, with its greater security risks, variations in bandwidths and latencies, and multi-node architectures.  The white paper considers specific criteria for each of these in turn, and provides a simple explanation of how each of the protocols does or does not meet them.

If you’ve been following the growth and development of Skkynet over the years, the results of the comparison should come as no surprise.  The only protocol we are aware of that was designed from the ground up to provide secure networking of industrial data both on-premise and over the Internet is DHTP.  DHTP is what our products and services have been using for over 20 years, and it is one of the keys to their success.  We invite you to read the white paper, consider the criteria, and see for yourself what makes an ideal protocol for IIoT.

IIoT Protocol Comparison

What Makes an Ideal IIoT Protocol?

A good IIoT protocol is the basis for effective IIoT data communication. Without a secure, robust IIoT protocol, data can be late, missing, inconsistent, or dangerously incorrect, leading to costly errors and wasted time.

With the IIoT still in its infancy, companies have turned first to familiar, well-tested data communication and messaging protocols such as MQTT, AMQP, REST and OPC UA. Valid as these may be for their designed purposes, they were never intended to support IIoT data communication. Thus, when evaluated against the criteria for a robust, secure Industrial IoT implementation, they all come up somewhat short.

Skkynet’s software and services are designed for the IIoT, and meet all of the criteria for effective data communication. Here we provide a comparison report on how well MQTT, AMQP, REST, OPC UA, and Skkynet’s own DHTP (DataHub Transfer Protocol) meet the criteria summarized in the above table for an ideal IIoT protocol.  Each of the criteria enumerated above is explained in further detail in subsequent sections.

DHTP Protocol Comparison - Closed Firewalls

Keeps all inbound firewall ports closed for both data sources and data users.

Keeping all inbound firewall ports closed at the plant resolves many security issues for Industrial IoT. MQTT, AMQP, REST and DHTP meet this criterion. OPC UA does not because it has a client/server architecture, which requires at least one firewall port be open on the server side (typically the plant) to allow for incoming client connections. This is an unacceptable risk for most industrial systems. Skkynet’s DataHub and ETK connect locally to servers and clients in the plant, and make outbound connections via DHTP to SkkyHub running on a cloud server, or to another DataHub running on a DMZ computer. This outbound connection keeps all inbound firewall ports closed and hides the plant from the outside world.
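The outbound-only pattern can be sketched with plain TCP sockets. In this loopback demo (the hosts and message format are illustrative, not DHTP itself), the plant-side agent dials out to the cloud endpoint, so nothing ever needs to accept an inbound connection at the plant:

```python
# Minimal loopback demonstration of the outbound-only connection pattern:
# the plant initiates the TCP connection to the cloud service, so no
# inbound firewall port needs to be open at the plant.

import socket
import threading

def cloud_service(server_sock, results):
    conn, _addr = server_sock.accept()      # the cloud side only accepts
    results.append(conn.recv(1024).decode())
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))               # stand-in for the cloud endpoint
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=cloud_service, args=(server, received))
t.start()

plant = socket.socket()
plant.connect(("127.0.0.1", port))          # the plant dials OUT to the cloud
plant.sendall(b"temperature=72.4 quality=Good")
plant.close()
t.join()
server.close()
```

Because the plant never listens, a scan of the plant network from outside finds no open port to attack, which is the essence of keeping inbound firewall ports closed.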

DHTP Protocol Comparison - Low Bandwidth

Consumes minimal bandwidth, while functioning with the lowest possible latency.

One goal of any industrial communication or IIoT protocol is to consume as little bandwidth as possible, and function with the lowest possible latency. MQTT and AMQP do this well. REST does not, because every transaction includes all of the socket set-up time and communication overhead. OPC UA partially meets this criterion, because it uses a smart polling mechanism that trades bandwidth for latency. Skkynet software and services maintain a connection and transmit only the data via DHTP, consuming very little bandwidth, at very low latencies.

DHTP Protocol Comparison - Ability to Scale

Can support hundreds or thousands of interconnected data sources and users.

An important aspect of the Internet of Things is the vision of connecting hundreds, thousands, and even millions of things via the Internet, and providing access to the data from any single thing, or group of things, to any number of clients. Event-driven protocols like MQTT and AMQP allow for this kind of scaling up, while REST’s polling model prevents it. OPC UA is also event-driven, and so theoretically can scale up, but its underlying polling model does not allow for very large numbers of simultaneous connections. DHTP abstracts the data from the protocol across the connection, and also implements an event-driven model, which allows it to scale up well.

DHTP Protocol Comparison - Real-Time

Adds virtually no latency to the data transmission.

Any kind of remote HMI or supervisory control system is much more effective when functioning in at least near-real time. Propagation delays of one or more seconds may be tolerable under certain conditions or for certain use cases, but they are not ideal. AMQP and MQTT offer real-time behavior only if they are not operating with a delivery guarantee. That is, if you choose the “guaranteed delivery” quality of service then a slow connection will fall further and further behind real-time. By contrast, DHTP guarantees consistency, not individual packet delivery, and can sustain that guarantee in real time on a slow connection. REST simply has too much connection overhead to allow real-time performance in most circumstances. OPC UA, being an industrial protocol, meets this criterion well.

DHTP Protocol Comparison - Interoperable Data Format

Encodes the data so that clients and servers do not need to know each other’s protocols.

A well-defined data format is essential for interoperability, allowing any data source to communicate seamlessly with any data user. Interoperability was the primary driving force behind the original OPC protocols, and is fully supported by the OPC UA data format. Any Industrial IoT software or service should support at least one, if not multiple interoperable data formats. Skkynet’s DataHub software and ETK support several, and allow for real-time interchange between them and DHTP. MQTT, AMQP and REST do not support interoperability between servers and clients because they do not define the data format, only the message envelope format. Thus, one vendor’s MQTT server will most likely not be able to communicate with another vendor’s MQTT client, and the same is true for AMQP and REST.
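To see why an undefined payload format blocks interoperability, consider two hypothetical vendors publishing the same reading over MQTT. Both messages are valid MQTT, yet one vendor's parser cannot read the other's payload:

```python
# MQTT defines only the message envelope, not the payload, so two vendors
# can publish the same reading in mutually unreadable formats. Both payload
# formats below are invented for illustration.

import json

vendor_a_payload = json.dumps({"tag": "temp", "val": 72.4}).encode()
vendor_b_payload = b"temp|72.4|GOOD"   # a hypothetical delimited format

def vendor_a_parse(payload):
    """Vendor A's client can read only Vendor A's JSON payloads."""
    d = json.loads(payload)
    return d["tag"], d["val"]

# vendor_a_parse(vendor_a_payload) works; vendor_a_parse(vendor_b_payload)
# raises an error, even though both travel over standard MQTT.
```

A protocol that defines the data format itself, as OPC UA and DHTP do, removes this per-vendor coupling entirely.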

DHTP Protocol Comparison - Intelligent Overload

A messaging broker responds appropriately when a data user is unable to keep up with the incoming data rate.

Overload handling refers to how the broker responds when a client is unable to keep up with the incoming data rate, or when the server is unable to keep up with the incoming data rate from the client. MQTT and AMQP respond in one of two ways. Either they block, effectively becoming inoperative and blocking all clients. Or they drop new data in favor of old data, which leads to inconsistency between client and server. REST saturates its web server and becomes unresponsive. OPC UA attempts to drop old data in favor of new data, but consumes massive amounts of CPU resources to do so. When needed, Skkynet’s DataHub and SkkyHub can drop old data efficiently, and using DHTP they guarantee consistency between client and server even over multiple hops. Data coming from or going to overloaded clients remains consistent, and all other clients are unaffected.
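The "drop old data in favor of new" behavior can be sketched as a latest-value queue that keeps one pending entry per tag. This is an illustrative model of the consistency guarantee described above, not actual DataHub code:

```python
# Sketch of consistency-over-delivery: when a client falls behind, keep
# only the newest value per tag instead of queuing every update. A slow
# client then catches up to the current state rather than replaying a
# backlog. Class and method names here are illustrative.

from collections import OrderedDict

class LatestValueQueue:
    def __init__(self):
        self._pending = OrderedDict()

    def publish(self, tag, value):
        # A newer value replaces any undelivered older one for the same tag.
        self._pending.pop(tag, None)
        self._pending[tag] = value

    def drain(self):
        # Deliver whatever is pending; the result is the current state.
        items = list(self._pending.items())
        self._pending.clear()
        return items

q = LatestValueQueue()
for v in range(1000):
    q.publish("pressure", v)     # burst arrives while the client is stalled
q.publish("temperature", 72.4)
delivered = q.drain()
# The client receives 2 items (the latest per tag), not 1001 messages.
```

Note that the queue can never grow beyond the number of distinct tags, so an overloaded client bounds memory use instead of blocking the broker or other clients.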

DHTP Protocol Comparison - Propagation of Failure Notification

Each client application knows with certainty if and when a connection anywhere along the data path has been lost, and when it recovers.

Most protocols do not provide failure notification information from within the protocol itself, but rather rely on clients to identify that a socket connection is lost. This mechanism does not propagate when there is more than one hop in the communication chain. Some protocols (such as MQTT) use a “last will and testament” that is application-specific and thus not portable, and which is only good for one connection in the chain. Clients getting data from multiple sources would need to be specifically configured to know which “last will” message is associated with which data source. In MQTT, AMQP, REST and OPC UA alike, the protocol assumes that the client will know how many hops the data is traversing, and that the client will attempt to monitor the health of all hops. That is exceptionally fragile, since knowledge about the data routing must be encoded in the client. In general, this cannot be made reliable. DHTP propagates not only the data itself, but information about the quality of the connection. Each node is fully aware of the quality of the data, and passes that information along to the next node or client.
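Propagating quality alongside the value can be modeled simply: each hop forwards a (value, quality) pair, and a failed link downgrades the quality instead of leaving the client with stale data marked good. The quality names here are illustrative, not taken from any protocol specification:

```python
# Illustrative model of quality propagation along a chain of nodes: each
# hop forwards the value together with its quality, and a broken link
# downgrades quality rather than letting data silently go stale.

def relay(point, link_up):
    """Forward a (value, quality) pair across one hop of the chain."""
    value, quality = point
    if not link_up:
        return (value, "Not Connected")  # quality records the lost link
    return (value, quality)

point = (72.4, "Good")
after_hop1 = relay(point, link_up=True)
after_hop2 = relay(after_hop1, link_up=False)  # link fails mid-chain
# The end client sees the quality "Not Connected" instead of trusting
# stale data still marked "Good".
```

Because the quality travels with the data, no client needs to know how many hops exist or monitor them individually.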

DHTP Protocol Comparison - Quality of Service

Guarantees consistency of data, preserved through multiple hops.

An important goal of the IIoT is to provide a consistent picture of the industrial data set, whether for archival, monitoring, or supervisory control. MQTT’s ability to guarantee consistency of data is fragile because its Quality of Service options only apply to a single hop in the data chain. And within that single hop, delivery can be guaranteed only at the expense of losing real-time performance. Real-time performance can be preserved, but only by dropping messages and allowing data to become inconsistent between client and server. AMQP’s ability to guarantee consistency of data is fragile because like MQTT it only applies to a single hop in the chain. Additionally, its delivery guarantee blocks when the client cannot keep up with the server and becomes saturated. REST provides no Quality of Service option, and while OPC UA guarantees consistency it cannot work over multiple hops. DHTP guarantees consistency, and the guarantee is preserved through any number of hops.

DHTP Protocol Comparison - Can Daisy Chain?

Brokers can connect to other brokers to support a wide range of collection and distribution architectures.

The requirements of the IIoT take it beyond the basic client-to-server architecture of traditional industrial applications. To get data out of a plant and into another plant, corporate office, web page or client location, often through a DMZ or cloud server, typically requires two or more servers, chained together.

The OPC UA protocol is simply too complex to reproduce in a daisy chain. Information will be lost in the first hop. Attempts to daisy chain some aspects of the OPC UA protocol would result in synchronous multi-hop interactions that would be fragile on all but the most reliable networks, and would result in high latencies. Nor would OPC UA chains provide access to the data at each node in the chain. REST servers could in theory be daisy chained, but would be synchronous, and would not provide access to the data at each node in the chain. MQTT and AMQP can be chained, but doing so requires each node in the chain to be aware that it is part of the chain, and to be individually configured. The QoS guarantees in MQTT and AMQP cannot propagate through the chain, so daisy chaining makes data at the ends unreliable.

Skkynet’s DataHub and SkkyHub both support daisy-chained servers because DHTP allows them to mirror the full data set at each node, and provide access to that data both to qualified clients and to the next node in the chain. The DHTP QoS guarantee states that any client or intermediate point in the chain will be consistent with the original source, even if some events must be dropped to accommodate limited bandwidth.
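The mirroring behavior described above can be sketched as a chain of nodes, each holding the full current data set and forwarding updates downstream (an illustrative model, not the actual DataHub implementation):

```python
# Sketch of daisy-chained nodes, each mirroring the full data set: every
# node holds a copy of the current values, can serve its own clients, and
# forwards updates to the next node in the chain.

class MirrorNode:
    def __init__(self, downstream=None):
        self.data = {}              # full current data set at this node
        self.downstream = downstream

    def update(self, tag, value):
        self.data[tag] = value      # local clients can read at this hop
        if self.downstream:
            self.downstream.update(tag, value)  # forward along the chain

cloud = MirrorNode()
dmz = MirrorNode(downstream=cloud)
plant = MirrorNode(downstream=dmz)

plant.update("temp", 72.4)
# Every node in the chain now holds the same current value, so clients can
# attach at the plant, the DMZ, or the cloud and see a consistent data set.
```

The key design point is that each hop is a full replica, not a pass-through pipe, so adding a node adds an access point without changing anything at the source.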

In Conclusion

Far from exhaustive, this overview of effective IIoT data communication provides an introduction to the subject, highlighting some of the key concepts by sharing what we have found to be essential criteria for evaluating the protocols currently on offer. Because none of MQTT, AMQP, REST, or OPC UA was designed specifically for use in the Industrial IoT, it is not surprising that they do not fulfill these criteria. DHTP, on the other hand, was created specifically to meet the needs of effective industrial and IIoT data communication, making it an ideal choice for an IIoT protocol.