An enhanced, secure-by-design OPC UA to MQTT gateway can pass data through a DMZ or IT department, keeping all inbound firewall ports on the plant closed.
What Makes an Ideal IIoT Protocol?
A good IIoT protocol is the basis for effective IIoT data communication. Without a secure, robust IIoT protocol, data can be late, missing, inconsistent, or dangerously incorrect, leading to costly errors and wasted time.
With the IIoT still in its infancy, companies have turned first to familiar, well-tested data communication and messaging protocols such as MQTT, AMQP, REST and OPC UA for an IIoT protocol. Valid as these may be for their designed purposes, they were never intended to support IIoT data communication. Thus, when evaluated according to criteria for a robust, secure Industrial IoT implementation, they all come up somewhat short.
Skkynet’s software and services are designed for the IIoT, and meet all of the criteria for effective data communication. Here we provide a comparison report on how well MQTT, AMQP, REST, OPC UA, and Skkynet’s own DHTP (DataHub Transfer Protocol) meet the criteria summarized in the above table for an ideal IIoT protocol. Each of the criteria enumerated above is explained in further detail in subsequent sections.
Keeps all inbound firewall ports closed for both data sources and data users.
Keeping all inbound firewall ports closed at the plant resolves many security issues for Industrial IoT. MQTT, AMQP, REST and DHTP meet this criterion. OPC UA does not because it has a client/server architecture, which requires that at least one firewall port be open on the server side (typically the plant) to allow for incoming client connections. This is an unacceptable risk for most industrial systems. Skkynet’s DataHub and ETK connect locally to servers and clients in the plant, and make outbound connections via DHTP to SkkyHub running on a cloud server, or to another DataHub running on a DMZ computer. This outbound connection keeps all inbound firewall ports closed and hides the plant from the outside world.
Consumes minimal bandwidth, while functioning with the lowest possible latency.
One goal of any industrial communication or IIoT protocol is to consume as little bandwidth as possible while functioning with the lowest possible latency. MQTT and AMQP do this well. REST does not, because every transaction includes all of the socket set-up time and communication overhead. OPC UA meets this criterion only partially, because its smart polling mechanism trades bandwidth for latency. Skkynet software and services maintain a connection and transmit only the data via DHTP, consuming very little bandwidth at very low latencies.
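The bandwidth difference between a polling protocol and an event-driven one can be sketched with some rough arithmetic. The overhead and payload figures below are illustrative assumptions, not measurements of any particular implementation:

```python
# Rough bandwidth estimate: a polling protocol (e.g. REST) re-reads
# every point on a fixed interval and pays per-request overhead; an
# event-driven protocol sends only values that changed.

def rest_bytes_per_sec(points, poll_hz, overhead=400, payload=20):
    """Every poll pays HTTP headers and connection overhead per request."""
    return points * poll_hz * (overhead + payload)

def event_bytes_per_sec(points, change_hz, payload=20, framing=10):
    """Only changed values cross the wire, with small per-message framing."""
    return points * change_hz * (payload + framing)

# 1,000 points polled at 1 Hz, vs. 10% of points changing per second
rest = rest_bytes_per_sec(1000, 1.0)      # 420,000 bytes/s
event = event_bytes_per_sec(1000, 0.1)    # 3,000 bytes/s
print(rest, event)
```

With these assumed numbers the polled approach consumes over a hundred times the bandwidth of the event-driven one, which is why event-driven designs dominate in low-bandwidth, low-latency applications.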
Can support hundreds or thousands of interconnected data sources and users.
An important aspect of the Internet of Things is the vision of connecting hundreds, thousands, and even millions of things via the Internet, and providing access to the data from any single thing, or groups of things to any number of clients. Event-driven protocols like MQTT and AMQP allow for this kind of scaling up, while REST’s polling model prevents it. OPC UA is also event-driven, and so theoretically can scale up, but its underlying polling model does not allow for very large numbers of simultaneous connections. DHTP abstracts the data from the protocol across the connection, and also implements an event-driven model, which allows it to scale up well.
Adds virtually no latency to the data transmission.
Any kind of remote HMI or supervisory control system is much more effective when functioning in at least near-real time. Propagation delays of one or more seconds may be tolerable under certain conditions or for certain use cases, but they are not ideal. AMQP and MQTT offer real-time behavior only if they are not operating with a delivery guarantee. That is, if you choose the “guaranteed delivery” quality of service then a slow connection will fall further and further behind real-time. By contrast, DHTP guarantees consistency, not individual packet delivery, and can sustain that guarantee in real time on a slow connection. REST simply has too much connection overhead to allow real-time performance in most circumstances. OPC UA, being an industrial protocol, meets this criterion well.
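The "falling behind real time" problem described above is easy to quantify. This sketch, with assumed message rates, shows how a guaranteed-delivery queue grows without bound when the source outpaces a slow link:

```python
# If the source produces updates faster than a slow link can deliver
# them, a "guaranteed delivery" queue grows without bound and the
# receiver falls progressively further behind real time.

def backlog_after(seconds, produce_hz, deliver_hz):
    """Messages still queued after `seconds` when every message must be kept."""
    return max(0, int((produce_hz - deliver_hz) * seconds))

def lag_seconds(backlog, deliver_hz):
    """How far behind real time the receiver is, given the backlog."""
    return backlog / deliver_hz

# Source produces 100 msgs/s; the link only delivers 40 msgs/s.
b = backlog_after(60, produce_hz=100, deliver_hz=40)
print(b, lag_seconds(b, 40))  # after one minute: 3600 queued, 90 s behind
```

A consistency guarantee avoids this by allowing intermediate values to be dropped, so the receiver always converges on the current state rather than replaying an ever-growing history.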
Encodes the data so that clients and servers do not need to know each other’s protocols.
A well-defined data format is essential for interoperability, allowing any data source to communicate seamlessly with any data user. Interoperability was the primary driving force behind the original OPC protocols, and is fully supported by the OPC UA data format. Any Industrial IoT software or service should support at least one, if not multiple interoperable data formats. Skkynet’s DataHub software and ETK support several, and allow for real-time interchange between them and DHTP. MQTT, AMQP and REST do not support interoperability between servers and clients because they do not define the data format, only the message envelope format. Thus, one vendor’s MQTT server will most likely not be able to communicate with another vendor’s MQTT client, and the same is true for AMQP and REST.
A messaging broker responds appropriately when a data user is unable to keep up with the incoming data rate.
Overload handling refers to how the broker responds when a client is unable to keep up with the incoming data rate, or when the server is unable to keep up with the incoming data rate from the client. MQTT and AMQP respond in one of two ways. Either they block, effectively becoming inoperative and blocking all clients. Or they drop new data in favor of old data, which leads to inconsistency between client and server. REST saturates its web server and becomes unresponsive. OPC UA attempts to drop old data in favor of new data, but consumes massive amounts of CPU resources to do so. When needed, Skkynet’s DataHub and SkkyHub can drop old data efficiently, and using DHTP they guarantee consistency between client and server even over multiple hops. Data coming from or going to overloaded clients remains consistent, and all other clients are unaffected.
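One common way to drop old data in favor of new, efficiently and without blocking, is a last-value cache keyed by data item. This is a minimal sketch of the idea, not Skkynet's actual implementation; the item names are hypothetical:

```python
class LastValueCache:
    """Keep only the newest value per item. Stale updates are simply
    overwritten, so a slow client catches up to a consistent snapshot
    of current state instead of replaying every intermediate message."""

    def __init__(self):
        self._latest = {}

    def update(self, item, value):
        self._latest[item] = value   # overwrite: the old value is dropped

    def snapshot(self):
        return dict(self._latest)

cache = LastValueCache()
for v in range(1000):                  # burst of updates while a client is slow
    cache.update("plant/temp", v)
cache.update("plant/pressure", 42)

print(cache.snapshot())  # {'plant/temp': 999, 'plant/pressure': 42}
```

Note that the cache's memory use is bounded by the number of items, not the message rate, which is why this approach does not saturate under load.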
Each client application knows with certainty if and when a connection anywhere along the data path has been lost, and when it recovers.
Most protocols do not provide failure notification information from within the protocol itself, but rather rely on clients to identify that a socket connection is lost. This mechanism does not propagate when there is more than one hop in the communication chain. Some protocols (such as MQTT) use a “last will and testament” that is application-specific and thus not portable, and which is only good for one connection in the chain. Clients getting data from multiple sources would need to be specifically configured to know which “last will” message is associated with which data source. In MQTT, AMQP, REST and OPC UA alike, the protocol assumes that the client will know how many hops the data is traversing, and that the client will attempt to monitor the health of all hops. That is exceptionally fragile, since knowledge about the data routing must be encoded in the client. In general, this cannot be made reliable. DHTP propagates not only the data itself, but information about the quality of the connection. Each node is fully aware of the quality of the data, and passes that information along to the next node or client.
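The idea of propagating connection quality along with the data can be sketched as follows. This is an illustrative model, with assumed quality labels, not the actual DHTP wire format:

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    name: str
    value: float
    quality: str   # e.g. "Good", "Not Connected" (labels assumed for illustration)

def relay(point, upstream_connected):
    """Each hop degrades the quality flag if its upstream link is down,
    so the final client sees the worst quality anywhere on the path
    without needing to know how many hops the data crossed."""
    if not upstream_connected:
        return DataPoint(point.name, point.value, "Not Connected")
    return point

p = DataPoint("plant/temp", 72.5, "Good")
p = relay(p, upstream_connected=True)    # hop 1: link is up
p = relay(p, upstream_connected=False)   # hop 2: link has dropped
print(p.quality)  # Not Connected
```

Because the quality travels with the value, no client needs the routing knowledge that makes per-connection mechanisms like MQTT's "last will" fragile.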
Guarantees consistency of data, preserved through multiple hops.
An important goal of the IIoT is to provide a consistent picture of the industrial data set, whether for archival, monitoring, or supervisory control. MQTT’s ability to guarantee consistency of data is fragile because its Quality of Service options only apply to a single hop in the data chain. And within that single hop, delivery can be guaranteed only at the expense of losing real-time performance. Real-time performance can be preserved, but only by dropping messages and allowing data to become inconsistent between client and server. AMQP’s ability to guarantee consistency of data is fragile because like MQTT it only applies to a single hop in the chain. Additionally, its delivery guarantee blocks when the client cannot keep up with the server and becomes saturated. REST provides no Quality of Service option, and while OPC UA guarantees consistency it cannot work over multiple hops. DHTP guarantees consistency, and the guarantee is preserved through any number of hops.
Brokers can connect to other brokers to support a wide range of collection and distribution architectures.
The requirements of the IIoT take it beyond the basic client-to-server architecture of traditional industrial applications. To get data out of a plant and into another plant, corporate office, web page or client location, often through a DMZ or cloud server, typically requires two or more servers, chained together. The OPC UA protocol is simply too complex to reproduce in a daisy chain. Information will be lost in the first hop. Attempts to daisy chain some aspects of the OPC UA protocol would result in synchronous multi-hop interactions that would be fragile on all but the most reliable networks, and would result in high latencies. Nor would OPC UA chains provide access to the data at each node in the chain. REST servers could in theory be daisy chained, but would be synchronous, and not provide access to the data at each node in the chain. MQTT and AMQP can be chained, but it requires each node in the chain to be aware that it is part of the chain, and to be individually configured. The QoS guarantees in MQTT and AMQP cannot propagate through the chain, so daisy chaining makes data at the ends unreliable. Skkynet’s DataHub and SkkyHub both support daisy-chained servers because DHTP allows them to mirror the full data set at each node, and provide access to that data both to qualified clients, as well as the next node in the chain. The DHTP QoS guarantee states that any client or intermediate point in the chain will be consistent with the original source, even if some events must be dropped to accommodate limited bandwidth.
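The mirroring behavior described above, where each node in a chain holds the full data set and forwards changes onward, can be sketched in a few lines. This is a conceptual model of daisy chaining, not DataHub's actual implementation:

```python
class MirrorNode:
    """Each node holds a full copy of the data set, can serve local
    clients from it, and forwards every change to the next node."""

    def __init__(self, downstream=None):
        self.data = {}
        self.downstream = downstream

    def update(self, item, value):
        self.data[item] = value
        if self.downstream is not None:
            self.downstream.update(item, value)   # propagate the change

cloud = MirrorNode()                  # e.g. a cloud service
dmz = MirrorNode(downstream=cloud)    # relay on a DMZ computer
plant = MirrorNode(downstream=dmz)    # plant-side gateway

plant.update("line1/speed", 120)
print(dmz.data, cloud.data)  # both mirror {'line1/speed': 120}
```

Because every node has the complete current state, clients can attach anywhere in the chain, and a dropped link affects only the hops beyond it.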
Far from exhaustive, this overview of effective IIoT data communication provides an introduction to the subject, and attempts to highlight some of the key concepts, through sharing what we have found to be essential criteria for evaluating some of the protocols currently on offer. Because none of MQTT, AMQP, REST, or OPC UA were designed specifically for use in Industrial IoT, it is not surprising that they do not fulfill these criteria. DHTP, on the other hand, was created specifically to meet the needs of effective industrial and IIoT data communication, making it an ideal choice for an IIoT protocol.
Note: This article was originally published on the Automation.com website.
OPC UA was designed to be secure in an industrial environment, and it does a good job of it. In the world of Operations Technology (OT) you need reliable and secure data communications to run mission-critical systems. OPC UA provides robust connectivity, allowing your devices and machines to communicate, yet keeping them secure and locked down. But today’s OT world is expanding, being propelled into the larger, corporate world of IT, and beyond that, into the Industrial Internet of Things (IIoT) and Industrie 4.0. When connecting to IT and the IIoT, making OPC UA secure requires a new approach to meet new and different threats to security.
Securing an industrial system requires at the very least securing the perimeter against unauthorized access. Whether or not anything in the plant is connected to IT or the IIoT, this perimeter must remain intact for optimal security. In the past, perimeter protection was often accomplished by air-gapping, where the industrial network was physically isolated from any other network connection. Until recently, this approach or similar solutions like DMZs were sufficient. But these make it difficult if not impossible to share OT data with the company’s own IT department, much less on the IIoT. The challenge is to fully protect the perimeter, and yet still provide access to the data from OPC UA servers inside.
Are VPNs secure enough?
Accessing OPC UA servers or any other industrial system from the IIoT should be done through a secure network connection. The typical approach, one that many take for granted, is to use a VPN (Virtual Private Network). VPN technology is well known, having been used for decades in the IT world. In essence, a VPN provides an outside user with a log-in to the network, and establishes a secure tunnel through the Internet to allow access to the system: the entire system. And that can lead to problems.
While OPC UA can work over a VPN, that doesn’t guarantee robust security. VPNs were not designed for use with industrial process control systems. In fact, they can open vulnerabilities even in the IT world. The attack on Target stores in North America that cost the company millions of dollars was perpetrated through a VPN. Hackers got hold of a user name and password, and gained access to the system. Once in, they quickly found their way to customer records and credit card numbers, and had a field day. The problem with using a VPN to access an industrial system is not only that every VPN user account is a potential access point, but that once someone is inside the perimeter they gain access to the whole system.
The drawbacks of using a VPN for the IoT are examined in detail by Clemens Vasters, a Microsoft Developer. In a paper titled Internet of Things: Is VPN a False Friend? Vasters said, “VPN provides a virtualized and private (isolated) network space. The secure tunnels are a mechanism to achieve an appropriately protected path into that space, but the space per-se is not secured, at all. It is indeed a feature that the established VPN space is fully transparent to all protocol and traffic above the link layer.”
Using Reverse Proxies
Forward-thinking people who are working on the IIoT recognize this inherent risk in using VPNs. Many IT departments now require reverse proxies for OT systems to mask all internal servers and expose just one server to the Internet. But this approach does not secure OPC UA for the IIoT.
OPC UA clients can connect through reverse proxies using HTTP, but not HTTPS due to certificate handling. The proxy will either require opening a new firewall port, or effectively create a path to the control system that could easily be overlooked in the future. Either way an attack surface gets opened in the corporate perimeter. Furthermore, even if the message itself is encrypted, the message headers are exposed to outside observers. The only alternatives involve effectively tunneling through the proxy directly to the control system, which is what the proxy is trying to prevent.
The bottom line is that a reverse proxy is an improvement over a VPN, but it still requires a point of access into the control system from the Internet or IT network. Any point of access is an attack surface, and even if the server code is bulletproof it is still a candidate for a spear-phishing compromise.
Push Instead of Pull
The best way to completely close the plant perimeter is to eliminate all inbound connections, allowing only outbound connections. This is a good idea in principle, as it does not expose the plant to attack. The system presents zero attack surface, becoming invisible to hackers who cannot attack what they cannot see.
However, outbound connections run afoul of traditional design expectations. Effectively they turn the paradigm of industrial data communications on its head. Most client/server architectures, including OPC UA, assume that the server holds the data and the client initiates a connection to interact with it. The server is the authority on the data set, while the client is the non-authoritative user. Thus, in the OPC UA world-view the server must be situated with the primary data source, inside the control system.
To make a push design work in the IIoT, the server/client relationship must be reversed. The client must be the authority (inside the control system), and the server must be a non-authoritative receiver of the data. The client must be able to construct the data set on the server on the fly, based on its knowledge of the control system. This reversal of the client/server roles is something that OPC UA cannot accomplish on its own, but can be added through appropriate gateway software.
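The client/server role reversal can be sketched as a pair of cooperating components. This is a conceptual, in-process illustration with hypothetical class and tag names; a real gateway would do this over an outbound network connection:

```python
# In the reversed (push) model the plant-side component is the *client*:
# it initiates the outbound connection, declares the data set, and
# pushes updates. The outside server starts empty and only receives.

class OutsideServer:
    """Non-authoritative receiver: knows nothing until told."""
    def __init__(self):
        self.points = {}

    def define_point(self, name):
        self.points[name] = None        # created on the fly by the client

    def push(self, name, value):
        self.points[name] = value

class PlantClient:
    """Authoritative side: initiates the connection and constructs the
    server's data set from its own knowledge of the control system."""
    def __init__(self, server, local_tags):
        self.server = server
        for tag in local_tags:
            server.define_point(tag)

    def publish(self, tag, value):
        self.server.push(tag, value)

srv = OutsideServer()
plc = PlantClient(srv, ["boiler/temp", "boiler/pressure"])
plc.publish("boiler/temp", 180.0)
print(srv.points)  # {'boiler/temp': 180.0, 'boiler/pressure': None}
```

The key design point is that the server never reaches into the plant; the data set exists on the outside only because the plant-side client chose to create and populate it.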
Using Forward Proxies
Using a push mechanism allows both OT and IT to completely close the network perimeter. If there is no way to make a connection from outside the network then there is no attack surface to exploit and there is no user to fool into revealing his password.
But even a closed perimeter is not sufficient. Best practice in IT networks is to route outgoing web traffic through a forward proxy, and to deny all other network traffic to the Internet. This substantially improves security by effectively shielding the internal network from a direct Internet connection. To be robust and IT-compliant the outbound IIoT connection must be able to pass through a standard forward proxy. Although OPC UA doesn’t inherently support forward proxies, appropriate gateway software can once again add this capability.
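Passing an outbound connection through a standard forward proxy typically begins with an HTTP CONNECT request, after which the proxy simply relays bytes between the two endpoints. This sketch builds the CONNECT preamble; the host name is a placeholder:

```python
# A standard HTTP forward proxy is asked to open a tunnel with a
# CONNECT request; once it answers "200", the client speaks its own
# (encrypted) protocol through the tunnel.

def connect_request(target_host, target_port, proxy_auth=None):
    """Build the HTTP CONNECT preamble sent to a forward proxy."""
    lines = [
        f"CONNECT {target_host}:{target_port} HTTP/1.1",
        f"Host: {target_host}:{target_port}",
    ]
    if proxy_auth:                       # e.g. base64-encoded credentials
        lines.append(f"Proxy-Authorization: Basic {proxy_auth}")
    lines.append("")                     # blank line ends the header block
    return "\r\n".join(lines) + "\r\n"

print(connect_request("hub.example.com", 443))
```

Because the tunnel is initiated from inside the plant network, the proxy sees only an ordinary outbound request, and no inbound port ever needs to be opened.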
Secure by Design
The Chatham House Report, Cyber Security at Civil Nuclear Facilities: Understanding the Risks, points out an alarming lack of security at some of the most critical infrastructure installations in the world, and makes a number of design recommendations. At one point it states, “Many industrial control systems are insecure by design, since cyber security measures were not designed in from the beginning.” And this does not just apply to nuclear facilities. Indeed, the “many industrial systems” may well include those which now or soon might incorporate OPC UA. And they require a new approach, a new design for security on the IIoT.
The new design approach must allow OPC UA clients from any location to connect and acquire data from OPC UA servers within the plant perimeter, to eliminate the need for reverse proxies and VPNs and to avoid opening any inbound firewall ports. At the same time, to fully support OPC UA’s real-time data access, the design must support bi-directional data communication between OT and IT systems and across the Internet at speeds very close to network propagation times. Secure-by-design for the IIoT should take a no-compromise approach, offering the best possible combination of speed, security, and convenience.
With this level of security, and near-real-time speeds, there is one more design consideration: practicality. To gain traction among users, the design should be convenient to implement. It should, for example, allow for seamless integration with legacy installations using OPC Classic and other industrial protocols, as well as newer OPC UA-enabled systems. It should provide a loose coupling to the IIoT, one that allows remote, authorized and secure access to the data, optionally including supervisory control, but that has no impact on the primary control system if it gets disconnected. And it should be easy enough to implement that it doesn’t overly tax the time or resources of the system integrator or plant engineer who is implementing it.
This is the kind of design that is needed to secure the IIoT, and make it compatible with today’s factory or process. OPC UA is the industrial protocol of the present, and of the future. It has the ability to integrate plant data from virtually any machine or device, large or small, as well as to bring the disparate worlds of OT and IT together. When OPC UA is wedded to the appropriate, secure-by-design IoT technology, it will play a key role in Industrie 4.0 and IIoT applications.
It’s no secret down on the shop floor, or in the upper echelons of management, that IT and OT don’t always see eye to eye. For decades, the business computing world of Information Technology (IT) has been growing and evolving separately from the Operational Technology (OT) world. Plant engineers and system integrators working in the OT sphere are happy to keep their distance from the requirements and constraints of the IT department, going so far in many cases as to function on completely separate physical networks. Most executives, for their part, are reasonably satisfied to let the OT people do their work, and simply receive regular production reports from an ERP or possibly a MES system.
There are good reasons why these two siblings of IT and OT have grown up separately, despite their common parentage in computing technology. Yet now, increasing demands within and outside the enterprise are starting to force them to cooperate, and possibly even live under one roof. Exactly when and how this will happen may vary depending on the company and other factors, but it’s a trend that analysts such as Gartner and ARC Advisory Group predict will increase significantly in the next few years.
Much of this anticipated overlap (or collision) of IT and OT is due to advances in technology. On the OT side, Industry 4.0 and the Industrial IoT have become viable as the Internet becomes more reliable, and the cost of connecting devices drops exponentially. In the IT world, the lure and promise of Big Data and the analytical tools needed to extract value from it are moving quickly from the status of luxury to necessity. Heeding the lessons learned from the demise of Kodak and Blockbuster, executives understand the need to stay competitive in the digital age, or suffer the consequences.
Two Worlds of IT and OT
It is no accident that IT and OT seem to occupy two different worlds. You can trace this back to the primary goal of each. The focus of IT people is business improvement—to support accounting, logistics, human resources, and all other areas of the business to make it more effective and productive. In a sense, for IT, the product is the business itself. Upgrades to computer systems and improvements in skills pay off with immediate results in the success of the business. And it’s easy to make improvements because critical data is relatively static, providing ample opportunities to upgrade the tools and skills needed to manipulate the data.
In the OT world, the focus is on doing or making things. The production process is paramount. Complex factory systems, pipelines, power grids, and chemical plants cannot be switched on and off easily. Many systems run 24/7, and cannot be put on pause for software upgrades. Every hour of lost production time can cost millions. It may take months or years to build such a system, and once it is running, few engineers are willing to risk swapping in a piece of untested software. Computer skills are just one aspect of a project where the bulk of the expenditure and expertise is focused on the machinery and devices needed to do the work. OT is one of several players in the game, and not the star of the show that IT often becomes in its world.
Be that as it may, these two worlds are now poised to make contact. Businesses are waking up to the value of the data that’s coming from the production systems. Managers are discovering within OT data opportunities to harness real-time analytics and leverage predictive technologies that IT can provide. In a recent article, The Internet of Things: Bridging the OT/IT divide, John Pepper, CEO and Founder of Managed 24/7, said, “Unless organisations actively bridge the gap between OT and IT, the real operational benefits of the digital business will be lost.”
Bridging the Gap
As we understand it, there are at least three approaches to bridging the gap between IT and OT:
- Insert IT into OT. You can either import IT staff and expertise into the OT world, or build it in from the ground up. So far, this has not been a popular approach.
- Absorb OT into IT. Essentially this means expanding the IT world to encompass OT. Again, it may sound interesting in theory, but apparently the differences are too great, because we don’t see this happening much in practice.
- Allow OT and IT to communicate. For now, data communication seems to be the favored approach. Time will tell if this becomes a permanent necessity, or whether the two worlds can eventually merge.
For the foreseeable future, any convergence of IT and OT will continue to take place through data communication. What form does and will this communication take? Clearly OPC plays and will continue to play a major role. The key to OPC’s success to date has been its ability to foster communication between disparate systems. The large installed base of OPC Classic provides an easy way to obtain data from a wide range of systems. OPC UA is positioned as the data protocol for Industry 4.0 and the Industrial IoT. Whatever protocol may be used, and whatever form it takes, successful data communication between IT and OT must provide security, integration, and real-time performance.
Security is a major concern for OT professionals when considering connections to IT systems. For decades OT has usually been physically separated from corporate IT networks, functioning under the “security through obscurity” principle, or both. The increasing number and sophistication of hacks to online industrial plants and power systems, along with the ability of viruses like Stuxnet to contaminate even an isolated system, underscore the need for an active and educated approach to security.
With this in mind, the best way to convince a prudent OT manager to share data with IT is to ensure the most secure connectivity scenario that is realistically achievable. The data communication protocol, such as OPC UA, should provide robust connectivity over TCP, and implement SSL/TLS encryption with certificate-based authentication. Keeping the plant’s firewalls closed and utilizing DMZs and proxy servers are essential for eliminating potential points of entry. Ideally, the system should be secure by design, and not need to rely on VPNs or additional security hardware. In fact, there is no need for IT to have any access to the plant at all, just the data. And access to that data should be restricted to just those in IT or management authorized to use it.
Seamless integration of data protocols is a second requirement for IT / OT convergence. OPC provides a way for the vast array of industrial protocols to be integrated into a single protocol. Converting OPC Classic to OPC UA will be needed to include legacy equipment in the conversation. To fit into the IT world of SQL databases, the ability to convert to ODBC is a must. And let’s not forget the IT world’s personal tool of choice: Excel. These are some of the more popular data protocols as a starting point; there may be others. The better the integration of OT data into familiar tools for IT, the more likely the IT and OT worlds will get along.
Finally, real-time performance is a big plus, if not an absolute necessity. Real-time data coming directly from the factory floor is one of the primary reasons for the whole project. This is the data that will power the real-time analytical engines and predictive technologies that management envisions, and that IT will be implementing.
Will we ever see IT and OT converge? It is difficult to say at this early stage. The trend right now is to open channels of data communication between the two. Success in these initial endeavors may inspire players on one side or the other to expand beyond their limited domains, and work towards a more fundamental level of integration. For now, professionals in both OT and IT can start by implementing secure, integrated, real-time data communication, and see where that leads.
Among all the fanfare and hoopla over the Industrial IoT, the more practical-minded among us quietly but persistently raise the question, “So, where’s the value?” It’s a fair question. The IoT represents a new area of influence for industrial automation. Before embarking on such a venture, it’s good to have some idea what the benefits may be.
As we see it, there are two main parties involved, producers and suppliers, and each of them stands to benefit in their own way:
By “producers” we mean any company in the industrial sector that produces goods or services, such as manufacturing, energy, oil & gas, chemicals, mining, water & wastewater, transportation, food & beverages, and so on.
OPEX over CAPEX
Traditionally, projects in the industrial sector require large up-front capital expenses (CAPEX) and are usually accompanied by long-term commitments. Shifting these costs to operational expenses (OPEX) means that you do not need to justify a large capital expenditure over years of returns. Just like a cup of coffee, you buy it, consume it, and when you need more, you buy it again.
The SkkyHub “pay as you go” model cuts costs in this way. There are no long-term commitments and no initial capital investments. Costs are reduced and shifted from high capital expenses to monthly operating expenses, which improves long-term expense planning and looks better on financial statements.
Data as a Service
There is no need for additional IT personnel or extra hardware, no programming and no upgrade headaches. SkkyHub takes care of data connectivity, freeing up customer staff and resources for higher priority tasks, while increasing ROI.
The Efficiency of Big Data
Knowing exactly what is happening at any given time in the system is a useful step that a producer can take towards improving efficiency and enhancing value. Until recently, this kind of analysis was only available to the biggest enterprises. Now SkkyHub provides a cost-effective way to bring the power of big-data collection to even the smallest enterprise. Combined with custom or third-party analytical tools, the real-time data flowing through SkkyHub can power both historical and real-time analysis, boosting KPIs and enabling significant gains in productivity.
Overall Equipment Effectiveness (OEE)
Overall equipment effectiveness (OEE) is a measure of how efficiently production equipment is being used. In manufacturing, for example, OEE is calculated according to three measures: uptime of production equipment, quantity of output, and quality of finished products. Manual methods and historical data archives give a rough idea of OEE, but according to a recent paper published by the ISA, a much more precise and relevant picture can be drawn by combining real-time operational visibility with real-time analytics. Any drop in production uptime or quantity, or in the quality of finished goods will be noticed immediately, and a fix can be applied on the spot, rather than waiting days, weeks, or months for a report to be generated.
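The three measures mentioned above combine into a single OEE figure by simple multiplication. The percentages below are illustrative, not from any real plant:

```python
def oee(availability, performance, quality):
    """OEE = Availability x Performance x Quality, each expressed
    as a fraction in [0, 1]:
      availability - uptime of production equipment
      performance  - actual output vs. ideal output quantity
      quality      - fraction of finished products that are good
    """
    return availability * performance * quality

# 90% uptime, 95% of ideal run rate, 98% good parts
print(round(oee(0.90, 0.95, 0.98), 4))  # 0.8379
```

Because the factors multiply, a modest drop in any one of them pulls the overall figure down noticeably, which is why real-time visibility into each factor matters.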
Today’s engineers and managers recognize the need to shift from reactive to predictive maintenance. Instead of asking “What happened?” or “What’s happening?” they want to be asking “What will happen?” Instead of just putting out fires, management and production staff can use the real-time data provided by SkkyHub for optimization, data mining, and fault prediction.
By “suppliers” we mean companies that supply goods or services to industrial companies, in three broad categories:
- Raw Materials Suppliers
- OEMs (Original Equipment Manufacturers) and Equipment Vendors
- System Integrators
Raw Materials Suppliers
Connecting to a customer’s process data via the Industrial IoT provides value by giving suppliers a window into the real-time consumption rates of the raw materials they provide. This allows them to offer just-in-time deliveries, and coordinate their production with demand in real time. A well-known business model shows how the lack of communication between suppliers and producers can cause costly shortages and wasteful overruns. If the Industrial IoT is extended further to include customer order data, then the supply-production-delivery chain could be fully coordinated, with minimal waste and maximum profit.
OEMs and Equipment Vendors
Implemented properly, the Industrial IoT provides a way for OEMs and equipment vendors to monitor their tools and machines in real time. As industrial equipment grows increasingly complex, more and more specialized knowledge is required to maintain and keep it running at optimal efficiency. Meanwhile, customers constantly demand higher uptime rates.
The solution is to stay connected 24/7 in real time. This kind of connection provides vendors and manufacturers immediate notification when something goes wrong, and a convenient channel to check settings and tweak configuration. Rather than sending a technician out to the plant, the tech support team can address the problem using the full set of in-house resources. For the big picture over time, with every machine connected, the vendor or manufacturer can collect histories for every unit in the field, and analyze the data over the entire life of the product.
Given the benefits of OPEX over CAPEX, the growing complexity of machinery, and the convenience of remote monitoring and service, the Industrial IoT may well facilitate a trend towards providing equipment as a service. Plant owners pay a monthly leasing fee for the equipment, and tool manufacturers and/or vendors ensure that it is in place and functioning as expected.
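The immediate-notification side of such a 24/7 connection can be sketched as a simple limit check applied to each reading as it arrives. The class, threshold, and message format below are hypothetical, not part of any vendor API:

```python
# Hypothetical sketch of vendor-side fault notification; all names here
# (Reading, TEMP_LIMIT_C, the alert text) are illustrative only.
from dataclasses import dataclass

@dataclass
class Reading:
    machine_id: str
    temperature_c: float

TEMP_LIMIT_C = 85.0  # assumed alarm threshold for this machine type

def check(reading: Reading, notify) -> bool:
    """Raise an alert the moment a reading crosses its limit."""
    if reading.temperature_c > TEMP_LIMIT_C:
        notify(f"{reading.machine_id}: {reading.temperature_c} C over limit")
        return True
    return False

alerts = []
for r in [Reading("press-01", 72.4), Reading("press-01", 91.2)]:
    check(r, alerts.append)

print(alerts)   # only the second reading triggers an alert
```

In practice the `notify` callback would hand the alert to the vendor's support system; the point is that the check runs against live data, not a report generated later.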
System Integrators
System integration companies come in all sizes, from lone entrepreneurial engineers to mid-sized specialty shops to multi-national giants. Each may offer a different range of skills, products, and services. As the Industrial IoT gains traction, system integrators are looking for a workable way to offer it as a service.
Skkynet offers revenue sharing opportunities that meet the needs of any size of system integrator, working with customers in any sector or niche market. Skkynet partners are able to offer their customers a secure end-to-end solution for the Industrial IoT right now, at a fraction of the cost associated with ad-hoc or home-grown solutions. System integrators who can offer value through best-of-breed technology to enhance customer performance will deepen relationships with existing clients and grow their customer base.
“It all sounds fine on paper, but will it work for me?” That’s a question that engineers and system integrators often ask when the topic of Industrial IoT comes up. There are so many ways it has to fit. Industrial systems are like snowflakes: every one is unique. Each facility, factory, pipeline, or power plant was built for a particular purpose, in a different part of the world, at a specific time in plant automation history, when technology had advanced to a certain level. We see a wide range of machines, tools, sensors, and other equipment used with endless combinations of proprietary and home-grown software and data protocols. Over time, plant modifications and expansions, along with hardware and software upgrades, bring still more variety.
If this diversity isn’t challenge enough, new questions are now popping up about the Industrial IoT itself: How to get started? What service provider to use? What approach or platform is best to take? What are the cost benefits?
Putting all this together, it becomes clear that a good Industrial IoT solution should be a comfortable fit. It should connect to virtually any in-plant system with a minimum of fuss, and provide links to remote systems as well. It should be compatible with multiple data protocols and legacy systems, and yet also integrate seamlessly with future hardware and software. Like putting on a new suit, the ideal is to ease into the IoT without disrupting anything.
Working towards that goal, here’s what a good system should do:
- Support diverse data communication protocols: OPC, both OPC “Classic” and UA, plays an important role in simplifying and unifying industrial data communications. Any Industrial IoT platform should support OPC, along with common industrial fieldbuses like Modbus, Profibus, HART, DeviceNet, and so on. It should also support more specialized standards like IEC 61850, CAN, ZigBee, and BACnet. In addition to these, an Industrial IoT platform should be compatible with non-industrial standards like HTML and XML for web connectivity, ODBC for database connectivity, and DDE for connecting to Excel if needed, and it should offer a way to connect to custom programs.
- Connect to embedded devices: The “of Things” part of the Internet of Things refers primarily to embedded devices. Sensors, actuators, and other devices are getting smaller, cheaper, and more versatile every day. They should be able to connect–either directly or via a wired or cellular gateway–to the cloud. This is an area where SCADA can provide a wealth of experience to the Industrial IoT, and in turn benefit significantly from the expanded reach that Internet connectivity can provide.
- Work with new or legacy equipment and facilities: Since the introduction of the DCS and PLC in the 1970s, digital automation has been growing and evolving. While new technologies are constantly being adopted or adapted, many older systems continue to run. With so much engineering effort and capital invested in each project, plant management is often reluctant to make changes to a working system. To be accepted in the “If it ain’t broke, don’t fix it” world, an Industrial IoT system should be able to connect to, but not intrude upon, legacy systems. Of course, it should do likewise for new systems.
- Use existing tools, or better: The Industrial IoT doesn’t need to reinvent the wheel. Most industrial automation systems have a solid, working set of tools, which might include DCS and SCADA systems; HMIs; MES, ERP, and other kinds of databases; data historians; and more. A compatible Industrial IoT implementation should work as seamlessly as possible with all of these tools, using the appropriate protocols. At the same time, it would do well to offer connections to improved tools, if required or desired.
- Meet Big Data requirements: Among the new tools, the ability to connect existing or future industrial systems with Big Data is one of the main attractions of the Industrial IoT. A compatible Industrial IoT solution should provide connectivity and the performance necessary to feed whatever Big Data engine may be chosen.
- Allow for gradual implementation: Automation experts and proponents of the Industrial IoT are quick to point out that there is no need to implement this all at once. They often recommend a gradual, step-by-step implementation process. Start with a small data set, an isolated process or system, and build from there. Bring in users as needed. Once you are comfortable with the tools and techniques, you can build out. Naturally, you’ll need an IoT platform that supports this approach.
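The embedded-device point above can be illustrated in a few lines: the device samples a value, encodes it, and hands it to an outbound connection that the device itself initiates, so no inbound firewall port needs to open at the plant. This stdlib-only Python sketch injects the transport as a callback; the names and JSON layout are illustrative, not the ETK or DHTP API:

```python
# Minimal sketch of device-side telemetry. The transport is injected, so the
# same code could hand the message to MQTT, DHTP, or any other outbound-only
# connection. All names and the JSON layout here are illustrative.
import json
import time
from typing import Callable

def make_payload(sensor_id: str, value: float) -> str:
    """Encode one reading as JSON with a timestamp."""
    return json.dumps({"sensor": sensor_id, "value": value, "ts": time.time()})

def report(sensor_id: str, value: float, send: Callable[[str], None]) -> None:
    """The device initiates the send (outbound), so no inbound port opens."""
    send(make_payload(sensor_id, value))

sent = []                       # stand-in for the outbound connection
report("T-101", 72.4, sent.append)
print(sent[0])
```

This also shows why gradual implementation is natural: the same reporting code works whether the messages go to a local gateway at first or to a cloud service later.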
How Skkynet Fits
With Skkynet, compatibility for the Industrial IoT comes in three components that work seamlessly together: DataHub®, Embedded Toolkit (ETK), and SkkyHub™.
The Cogent DataHub® connects directly to in-plant systems via OPC, Modbus, ODBC, and DDE, and is fully integrated with the Red Lion Data Station Plus, giving access to 300 additional industrial protocols. The DataHub supports data aggregation, server-to-server bridging, database logging, redundancy, and other data integration functionality. It also offers WebView, a flexible, web-based HMI.
The Embedded Toolkit (ETK) is a C library that provides the building blocks for embedded systems to connect and communicate with SkkyHub or the DataHub. It has been compiled to run on gateways from Red Lion, B+B SmartWorx, NetComm, and SysLINK, as well as devices from Renesas, Lantronix, Raspberry Pi, Arduino, ARM, and more.
These two components can be connected to and integrated with virtually any industrial system. They can be used separately or together, and can serve as the first stage of evolution towards the cloud at any time, by connecting to SkkyHub.
The SkkyHub™ service collects and distributes real-time data over networks, both locally and remotely. Connecting to the DataHub or any ETK-enabled device, SkkyHub provides secure networking of Industrial IoT data between remote locations, and remote monitoring and supervisory control through WebView.
Skkynet’s Industrial IoT software and services are in wide use today. You can find them connecting manufacturing facilities, wind and solar farms, offshore platforms, mines, pipelines, production lines, gauges, pumps, valves, actuators, and sensors. Their unique combination of security, speed, and compatibility with virtually any industrial system makes the DataHub, ETK, and SkkyHub well-fitting components of the Industrial IoT.