Will IT and OT Converge?

It’s no secret down on the shop floor, or in the upper echelons of management, that IT and OT don’t always see eye to eye. For decades, the business computing world of Information Technology (IT) has been growing and evolving separately from the Operational Technology (OT) world. Plant engineers and system integrators working in the OT sphere are happy to keep their distance from the requirements and constraints of the IT department, going so far in many cases as to function on completely separate physical networks. Most executives, for their part, are reasonably satisfied to let the OT people do their work, and simply receive regular production reports from an ERP or possibly a MES system.

There are good reasons why these two siblings of IT and OT have grown up separately, despite their common parentage in computing technology. Yet now, increasing demands within and outside the enterprise are starting to force them to cooperate, and possibly even live under one roof. Exactly when and how this will happen may vary depending on the company and other factors, but it’s a trend that analysts such as Gartner and ARC Advisory Group predict will increase significantly in the next few years.

Much of this anticipated overlap (or collision) of IT and OT is due to advances in technology. On the OT side, Industry 4.0 and the Industrial IoT have become viable as the Internet becomes more reliable, and the cost of connecting devices drops exponentially. In the IT world, the lure and promise of Big Data and the analytical tools needed to extract value from it are moving quickly from the status of luxury to necessity. Heeding the lessons learned from the demise of Kodak and Blockbuster, executives understand the need to stay competitive in the digital age, or suffer the consequences.

Two Worlds of IT and OT

It is no accident that IT and OT seem to occupy two different worlds. You can trace this back to the primary goal of each. The focus of IT people is business improvement—to support accounting, logistics, human resources, and all other areas of the business to make it more effective and productive. In a sense, for IT, the product is the business itself. Upgrades to computer systems and improvements in skills pay off with immediate results in the success of the business. And it’s easy to make improvements because critical data is relatively static, providing ample opportunities to upgrade the tools and skills needed to manipulate the data.

In the OT world, the focus is on doing or making things. The production process is paramount. Complex factory systems, pipelines, power grids, and chemical plants cannot be switched on and off easily. Many systems run 24/7, and cannot be put on pause for software upgrades. Every hour of lost production time can cost millions. It may take months or years to build such a system, and once it is running, few engineers are willing to risk swapping in a piece of untested software. Computer skills are just one aspect of a project where the bulk of the expenditure and expertise is focused on the machinery and devices needed to do the work. OT is one of several players in the game, and not the star of the show that IT often becomes in its world.

Be that as it may, these two worlds are now poised to make contact. Businesses are waking up to the value of the data coming from their production systems. Managers are discovering in OT data opportunities to harness the real-time analytics and predictive technologies that IT can provide. In a recent article, “The Internet of Things: Bridging the OT/IT divide,” John Pepper, CEO and Founder of Managed 24/7, said, “Unless organisations actively bridge the gap between OT and IT, the real operational benefits of the digital business will be lost.”

Bridging the Gap

As we understand it, there are at least three approaches to bridging the gap between IT and OT:

  1. Insert IT into OT. You can either import IT staff and expertise into the OT world, or build it in from the ground up. So far, this has not been a popular approach.
  2. Absorb OT into IT. Essentially this means expanding the IT world to encompass OT. Again, it may sound interesting in theory, but apparently the differences are too great, because we don’t see this happening much in practice.
  3. Allow OT and IT to communicate. For now, data communication seems to be the favored approach. Time will tell if this becomes a permanent necessity, or whether the two worlds can eventually merge.

For the foreseeable future, any convergence of IT and OT will continue to take place through data communication. What form does this communication take now, and what form will it take in the future? Clearly OPC plays, and will continue to play, a major role. The key to OPC’s success to date has been its ability to foster communication between disparate systems. The large installed base of OPC Classic provides an easy way to obtain data from a wide range of systems, and OPC UA is positioned as the data protocol for Industry 4.0 and the Industrial IoT. Whatever protocol may be used, and whatever form it takes, successful data communication between IT and OT must provide security, integration, and real-time performance.

Security is a major concern for OT professionals when considering connections to IT systems. For decades, OT has usually been either physically separated from corporate IT networks or operating under the principle of “security through obscurity.” The increasing number and sophistication of attacks on Internet-connected industrial plants and power systems, along with the ability of malware like Stuxnet to reach even an isolated system, underscore the need for an active and educated approach to security.

With this in mind, the best way to convince a prudent OT manager to share data with IT is to ensure the most secure connectivity scenario that is realistically achievable. The data communication protocol, such as OPC UA, should provide robust connectivity over TCP, with SSL encryption and certificate-based authentication. Keeping the plant’s firewalls closed and utilizing DMZs and proxy servers are essential for eliminating potential points of entry. Ideally, the system should be secure by design, and not need to rely on VPNs or additional security hardware. In fact, there is no need for IT to have any access to the plant at all, just the data. And access to that data should be restricted to those in IT or management authorized to use it.
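
To make this concrete, here is a minimal sketch in Python of the outbound, certificate-validated connection pattern described above. The host name, port, and message format are hypothetical, invented for illustration; this is not a specification of OPC UA or of any particular product.

    import socket
    import ssl

    # Hypothetical endpoint; a real deployment would dial out to a cloud
    # service or DMZ relay, never accept inbound connections.
    HOST, PORT = "hub.example.com", 443

    # Build a TLS context that verifies the server's certificate chain,
    # so the plant-side agent only talks to a trusted endpoint.
    context = ssl.create_default_context()
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    # The connection is outbound only: the plant dials out, so no inbound
    # firewall port ever needs to be opened.
    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Connected using", tls.version())
            tls.sendall(b'{"tag": "line1.temperature", "value": 72.4}\n')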

Seamless integration of data protocols is a second requirement for IT/OT convergence. OPC provides a way to integrate the vast array of industrial protocols into a single protocol. Converting OPC Classic to OPC UA will be needed to include legacy equipment in the conversation. To fit into the IT world of SQL databases, the ability to connect via ODBC is a must. And let’s not forget the IT world’s personal tool of choice: Excel. These are some of the more popular data protocols as a starting point; there may be others. The better OT data integrates with the tools familiar to IT, the more likely the IT and OT worlds will get along.
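
As a sketch of what such integration can look like, the following Python fragment logs one OT data point to a SQL table over ODBC. It assumes the third-party pyodbc package and a hypothetical DSN, table, and credentials; any real schema would differ.

    import pyodbc  # third-party ODBC bridge, assumed installed

    # Hypothetical DSN, credentials, and table, for illustration only.
    conn = pyodbc.connect("DSN=PlantDB;UID=datalogger;PWD=secret")
    cursor = conn.cursor()

    # Write one OT data point into a table that IT tools can query.
    cursor.execute(
        "INSERT INTO process_data (tag, value, ts) VALUES (?, ?, ?)",
        "line1.temperature", 72.4, "2024-01-01 12:00:00",
    )
    conn.commit()
    conn.close()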

Finally, real-time performance is a big plus, if not an absolute necessity. Real-time data coming directly from the factory floor is one of the primary reasons for the whole project. This is the data that will power the real-time analytical engines and predictive technologies that management envisions, and that IT will be implementing.

Will we ever see IT and OT converge? It is difficult to say at this early stage. The trend right now is to open channels of data communication between the two. Success in these initial endeavors may inspire players on one side or the other to expand beyond their limited domains, and work towards a more fundamental level of integration. For now, professionals in both OT and IT can start by implementing secure, integrated, real-time data communication, and see where that leads.

Value Propositions for Industrial IoT

Among all the fanfare and hoopla over the Industrial IoT, the more practical-minded among us quietly but persistently raise the question, “So, where’s the value?” It’s a fair question. The IoT represents a new area of influence for industrial automation. Before embarking on such a venture, it’s good to have some idea what the benefits may be.

As we see it, there are two main parties involved, producers and suppliers, and each of them stands to benefit in their own way:

Producers

By “producers” we mean any company in the industrial sector that produces goods or services, such as manufacturing, energy, oil & gas, chemicals, mining, water & wastewater, transportation, food & beverages, and so on.

OPEX over CAPEX

Traditionally, projects in the industrial sector require large up-front capital expenses (CAPEX), usually accompanied by long-term commitments. Shifting these costs to operational expenses (OPEX) means that you do not need to justify a large capital expenditure against years of projected returns. Just like a cup of coffee: you buy it, consume it, and when you need more, you buy it again.

The SkkyHub “pay as you go” model cuts costs in this way. There are no long-term commitments and no initial capital investments. Costs are reduced and shifted from high capital expenses to monthly operating expenses, which improves long-term expense planning and looks better on financial statements.

Data as a Service

There is no need for additional IT personnel or extra hardware, no programming and no upgrade headaches. SkkyHub takes care of data connectivity, freeing up customer staff and resources for higher priority tasks, while increasing ROI.

The Efficiency of Big Data

Knowing exactly what is happening in the system at any given moment is a practical step a producer can take towards improving efficiency and enhancing value. Until recently, this kind of analysis was only available to the biggest enterprises. Now SkkyHub provides a cost-effective way to bring the power of big-data collection to even the smallest enterprise. Combined with custom or third-party analytical tools, the real-time data flowing through SkkyHub can power both historical and real-time analysis, boosting KPIs and enabling significant gains in productivity.

Overall Equipment Effectiveness (OEE)

Overall equipment effectiveness (OEE) is a measure of how efficiently production equipment is being used. In manufacturing, for example, OEE is calculated from three measures: uptime of production equipment, quantity of output, and quality of finished products. Manual methods and historical data archives give a rough idea of OEE, but according to a recent paper published by the ISA, a much more precise and relevant picture can be drawn by combining real-time operational visibility with real-time analytics. Any drop in production uptime or quantity, or in the quality of finished goods, will be noticed immediately, and a fix can be applied on the spot rather than days, weeks, or months later, after a report has finally been generated.
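
For reference, OEE is commonly calculated as the product of those three measures, each expressed as a ratio. A minimal worked example in Python, using hypothetical shift numbers:

    def oee(availability: float, performance: float, quality: float) -> float:
        # OEE is the product of three ratios, each between 0 and 1.
        return availability * performance * quality

    # Hypothetical shift: 7.5 h of uptime out of 8 h planned, 900 units
    # produced against an ideal of 1000, and 880 of those units good.
    availability = 7.5 / 8.0   # uptime of production equipment
    performance = 900 / 1000   # quantity of output vs. ideal
    quality = 880 / 900        # quality of finished products

    print(f"OEE = {oee(availability, performance, quality):.1%}")  # OEE = 82.5%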

Predictive Maintenance

Today’s engineers and managers recognize the need to shift from reactive to predictive maintenance. Instead of asking “What happened?” or “What’s happening?” they want to be asking “What will happen?” Instead of just putting out fires, management and production staff can use the real-time data provided by SkkyHub for optimization, data mining, and fault prediction.

Suppliers

By “suppliers” we mean companies that supply goods or services to industrial companies, in three broad categories:

  1. Raw Materials Suppliers
  2. OEMs (Original Equipment Manufacturers) and Equipment Vendors
  3. System Integrators

Raw Materials Suppliers

Connecting to a customer’s process data via the Industrial IoT provides value by giving suppliers a window into the real-time consumption rates of the raw materials they provide. This allows them to offer just-in-time deliveries, and coordinate their production with demand in real time. The well-known “bullwhip effect” in supply chains shows how a lack of communication between suppliers and producers can cause costly shortages and wasteful overruns. If the Industrial IoT is extended further to include customer order data, then the supply-production-delivery chain could be fully coordinated, with minimal waste and maximum profit.

OEMs and Equipment Vendors

Implemented properly, the Industrial IoT provides a way for OEMs and equipment vendors to monitor their tools and machines in real time. As industrial equipment grows increasingly complex, more and more specialized knowledge is required to maintain and keep it running at optimal efficiency. Meanwhile, customers constantly demand higher uptime rates.

The solution is to stay connected 24/7 in real time. This kind of connection provides vendors and manufacturers immediate notification when something goes wrong, and a convenient channel to check settings and tweak configuration. Rather than sending a technician out to the plant, the tech support team can address the problem using the full set of in-house resources. For the big picture over time, with every machine connected, the vendor or manufacturer can collect histories for every unit in the field, and analyze the data over the entire life of the product.

Given the benefits of OPEX over CAPEX, the growing complexity of machinery, and the convenience of remote monitoring and service, the Industrial IoT may well facilitate a trend towards providing equipment as a service. Plant owners pay a monthly leasing fee for the equipment, and tool manufacturers and/or vendors ensure that it is in place and functioning as expected.

System Integrators

System integration companies come in all sizes, from lone entrepreneurial engineers to mid-sized specialty shops to multi-national giants. Each may offer a different range of skills, products, and services. As the Industrial IoT gains traction, system integrators may begin looking for a practical way to offer it as a service.

Skkynet offers revenue sharing opportunities that meet the needs of any size system integrator working with customers in any sector or niche market. Skkynet partners are able to offer their customers a secure end-to-end solution for the Industrial IoT right now, at a fraction of the cost associated with ad-hoc or home-grown solutions. System integrators who can offer value through best-of-breed technology to enhance customer performance will deepen relationships with existing clients and grow their customer base.

Fitting In with Industrial IoT

“It all sounds fine on paper, but will it work for me?” That’s a question that engineers and system integrators often ask when the topic of Industrial IoT comes up. There are so many ways it has to fit. Industrial systems are like snowflakes–every one is unique. Each facility, factory, pipeline, or power plant was built for a particular purpose, in a different part of the world, at a specific time in plant automation history, when technology had advanced to a certain level. We see a wide range of machines, tools, sensors, and other equipment used with endless combinations of proprietary and home-grown software and data protocols. Over time, plant modifications and expansions along with hardware and software upgrades bring still more variety.

If this diversity isn’t challenge enough, new questions are now popping up about the Industrial IoT itself: How to get started? What service provider to use? What approach or platform is best to take? What are the cost benefits?

Putting all this together, it becomes clear that a good Industrial IoT solution should be a comfortable fit. It should connect to virtually any in-plant system with a minimum of fuss, and provide links to remote systems as well. It should be compatible with multiple data protocols and legacy systems, and yet also integrate seamlessly with future hardware and software. Like putting on a new suit, the ideal is to ease into the IoT without disrupting anything.

Working towards that goal, here’s what a good system should do:

  • Support diverse data communication protocols: OPC, both OPC “Classic” and UA, plays an important role in simplifying and unifying industrial data communications. Any Industrial IoT platform should support OPC, along with common industrial fieldbuses like Modbus, Profibus, HART, DeviceNet, and so on. It should also support more specialized standards like IEC 61850, CAN, ZigBee, and BACnet. In addition to these, Industrial IoT should be compatible with non-industrial standards like HTTP and XML for web connectivity, ODBC for database connectivity, and DDE for connecting to Excel if needed, and it should be able to connect to custom programs as well. (A minimal Modbus read sketch follows this list.)
  • Connect to embedded devices: The “of Things” part of the Internet of Things refers primarily to embedded devices. Sensors, actuators, and other devices are getting smaller, cheaper, and more versatile every day. They should be able to connect–either directly or via a wired or cellular gateway–to the cloud. This is an area where SCADA can provide a wealth of experience to the Industrial IoT, and in turn benefit significantly from the expanded reach that Internet connectivity can provide.
  • Work with new or legacy equipment and facilities: Since the introduction of the DCS and PLC in the 1970s, digital automation has been growing and evolving. While new technologies are constantly being adopted or adapted, many older systems continue to run. With so much engineering, effort, and capital invested in each project, plant management is often reluctant to make changes to a working system. To be accepted in the “If it ain’t broke, don’t fix it” world, an Industrial IoT system should be able to connect to, but not intrude upon, legacy systems. Of course, for new systems, it should do likewise.
  • Use existing tools, or better: The Industrial IoT doesn’t need to reinvent the wheel. Most industrial automation systems have a solid, working set of tools, which might include DCS and SCADA systems; HMIs; MES, ERP and other kinds of databases; data historians, and more. A compatible Industrial IoT implementation should work as seamlessly as possible with all of these tools, using the appropriate protocols. At the same time, it would do well to offer connections to improved tools, if required or desired.
  • Meet Big Data requirements: Among the new tools, the ability to connect existing or future industrial systems with Big Data is one of the main attractions of the Industrial IoT. A compatible Industrial IoT solution should provide connectivity and the performance necessary to feed whatever Big Data engine may be chosen.
  • Allow for gradual implementation: Automation experts and proponents of the Industrial IoT are quick to point out that there is no need to implement this all at once. They often recommend a gradual, step-by-step implementation process. Start with a small data set, an isolated process or system, and build from there. Bring in users as needed. Once you are comfortable with the tools and techniques, you can build out. Naturally, you’ll need an IoT platform that supports this approach.
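
To make the protocol-support point concrete, here is a minimal sketch of a Modbus TCP read (function code 0x03, Read Holding Registers) over a raw socket in Python. The device address is hypothetical, and a production system would normally use a maintained protocol library or an OPC server rather than hand-built frames.

    import socket
    import struct

    # Hypothetical PLC address and Modbus unit ID, for illustration only.
    HOST, PORT, UNIT_ID = "192.168.1.10", 502, 1

    def read_holding_registers(start: int, count: int) -> list[int]:
        # PDU: function code 0x03, starting address, register count.
        pdu = struct.pack(">BHH", 0x03, start, count)
        # MBAP header: transaction id, protocol id (0), length, unit id.
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, UNIT_ID)
        with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
            sock.sendall(mbap + pdu)
            resp = sock.recv(256)  # assumes the whole reply arrives at once
        # Skip the 7-byte MBAP header, function code, and byte count,
        # then unpack the 16-bit big-endian register values.
        byte_count = resp[8]
        return list(struct.unpack(">" + "H" * (byte_count // 2),
                                  resp[9:9 + byte_count]))

    print(read_holding_registers(start=0, count=2))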

How Skkynet Fits

With Skkynet, compatibility for the Industrial IoT comes in three components that work seamlessly together: DataHub®, Embedded Toolkit (ETK), and SkkyHub™.

The Cogent DataHub® connects directly to in-plant systems via OPC, Modbus, ODBC and DDE, and is fully integrated with the Red Lion Data Station Plus, to connect to 300 additional industrial protocols. The DataHub supports data aggregation, server-to-server bridging, database logging, redundancy, and other data integration functionality. It also offers WebView, a flexible, web-based HMI.

The Embedded Toolkit (ETK) is a C library that provides the building blocks for embedded systems to connect and communicate with SkkyHub or the DataHub. It has been compiled to run on gateways from Red Lion, B+B SmartWorx, NetComm, and SysLINK, as well as devices from Renesas, Lantronix, Raspberry Pi, Arduino, ARM, and more.

These two components can be connected to and integrated with virtually any industrial system. They can be used separately or together, and can serve as the first stage of evolution towards the cloud at any time, by connecting to SkkyHub.

The SkkyHub™ service collects and distributes real-time data over networks, both locally and remotely. Connecting to the DataHub or any ETK-enabled device, SkkyHub provides secure networking of Industrial IoT data between remote locations, and remote monitoring and supervisory control through WebView.

Skkynet’s Industrial IoT software and services are in wide use today. You can find them connecting manufacturing facilities, wind and solar farms, offshore platforms, mines, pipelines, production lines, gauges, pumps, valves, actuators, and sensors. Their unique combination of security, speed, and compatibility with virtually any industrial system makes the DataHub, ETK, and SkkyHub well-fitting components of the Industrial IoT.

Top Performance for Industrial IoT

The Industrial IoT is different from the regular IoT. Mission-critical industrial systems are not like consumer or business IT applications. Performance is crucial. Most IT systems are built around a relational database, a repository of data that clients can add to or access, where a response time of a second or two is acceptable. IT data is typically sent across a network as HTML or XML over HTTP, which adds complexity to the raw data and consumes bandwidth. Although fine for office or home use, these technologies are not sufficient for the Industrial IoT.

In a typical industrial system, the data flows in real time. It moves from a sensor, device, or process through the system, often combining with other data along the way, and may end up in an operator’s control panel, another machine or device, or a special-purpose data historian. As plant or field conditions change, the data arrives in real time, and the system or operator must react. A robotic arm or other device can send hundreds of data changes per second. Tiny, millisecond fluctuations in the data set can have significant effects or trigger alarms, and often each minute detail needs to be accessible in a trend chart or historical database.

Achieving this kind of performance on the Industrial IoT demands an exceptional approach to data communication.

  • A real-time, in-memory database keeps the data moving. The data needs to flow quickly and effortlessly through the system, and an in-memory database is needed to support these rapid value changes. A relational database, the familiar workhorse of the IT world, is not built for this specialized task. It takes too long to write records, process queries, and retrieve information. Thus, an in-memory, flat-file database is a good choice, allowing for higher data throughput.
  • High-speed data integration connects any data source with any user. A key task of the in-memory database is to integrate all sources of incoming data. If all communication is data-centric (see below), then every data source can be pooled together into a single, universal data set. This design keeps the data handling as simple as possible, allowing any authorized user to connect to any specified combination of data inputs in real time.
  • Publish/subscribe beats polling. In a publish/subscribe, event-driven model, a user makes a one-time request to connect to a data source, then gets updates whenever they occur. By contrast, polling sends regular, timed requests for data. This wastes resources when data changes are infrequent, because multiple requests might return the same value. At the same time, polling is also inaccurate during rapid change, because a burst of several value changes may occur between polling cycles and be completely lost. (A toy sketch after this list combines publish/subscribe with the in-memory store described above.)
  • High-speed “push” data sources are most effective. The data should be pushed out to the system, and then pushed to the user. In addition to being a better security model, this approach is also more efficient. To “pull” data from a source requires polling, which takes longer and uses too much bandwidth, because each data update requires two messages: a request and a reply. Push technology only requires one message, which is more efficient, consumes less bandwidth, and also enables machine-to-machine communication.
  • Data-centric, not web-centric, design gives the best performance on the cloud. Transcoding data at the source takes time, and requires resources on the device which many smaller sensors may not have. By keeping the data in its simplest format, with no HTML or XML code, the lowest possible latency can be achieved. The raw data flows from the source, through the cloud, to the user as quickly as possible. When it arrives it can be converted to other formats, such as HTML, XML, SQL, etc. Different users, such as web browsers, databases, spreadsheets, and machine-to-machine systems can access a single data source at the point of its arrival, reducing the volume of data flow in the system.
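
The following toy sketch in Python combines two of the ideas above: an in-memory point store with publish/subscribe delivery, where every change is pushed to subscribers as it happens instead of being discovered by polling. The class and tag names are invented for illustration.

    from collections import defaultdict
    from typing import Callable

    class MemoryHub:
        # Toy in-memory point store with publish/subscribe delivery.
        def __init__(self) -> None:
            self.points: dict[str, float] = {}   # current value of each tag
            self.subs = defaultdict(list)        # tag -> list of callbacks

        def subscribe(self, tag: str, callback: Callable[[str, float], None]) -> None:
            # One-time registration; no further requests are needed.
            self.subs[tag].append(callback)

        def write(self, tag: str, value: float) -> None:
            self.points[tag] = value
            for callback in self.subs[tag]:      # push on change, no polling
                callback(tag, value)

    hub = MemoryHub()
    hub.subscribe("pump1.rpm", lambda tag, v: print(f"{tag} changed to {v}"))
    hub.write("pump1.rpm", 1750.0)   # subscriber is notified immediately
    hub.write("pump1.rpm", 1762.5)   # every change is delivered, even bursts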

Skkynet’s Implementation

Following these principles, Skkynet’s SkkyHub™ and DataHub® provide in-plant or IoT networking latencies of just a few milliseconds above network latency, with throughput of over 50,000 data changes per second. This level of performance is achieved by combining real-time, in-memory database technology with publish/subscribe, pushed data collection, and a data-centric approach to communication.

The “Hub” technology in DataHub and SkkyHub is a real-time, in-memory, flat-file database, used in hundreds of mission-critical systems worldwide for over 15 years. Designed from the ground up for industrial data communications, the DataHub and ETK work by converting all incoming data into a simple, internal, raw-data format. This raw data can be integrated and transmitted at very high speeds.

At the plant level, the DataHub collects, integrates and redistributes process data in real time. Selected sets of data can be passed seamlessly to the IoT simply by connecting the DataHub or ETK to SkkyHub. At the cloud level, SkkyHub provides the same real-time data collection, integration, and distribution. IoT performance now approaches the actual network propagation speeds of the Internet, with virtually no added latency.

Quite honestly, we shouldn’t expect the typical IoT platform to provide this level of performance. Few, if any, were designed for the Industrial IoT. It should come as no surprise that a concept as disruptive as “Industrial Internet of Things” may require new approaches for proper implementation. And in addition to performance, industrial applications have unique security and compatibility requirements. When choosing a solid, robust platform for Industrial IoT, these are all critical factors to consider.

Tunnelling OPC DA – Know Your Options

Since OPC was introduced over fifteen years ago, it has seen a steady rise in popularity within the process control industry. Using OPC, automation professionals can now select from a wide range of client applications to connect to their PLCs and hardware devices. The freedom to choose the most suitable OPC client application for the job has created an interest in drawing data from more places in the plant. Industry-wide, we are seeing a growing need to connect OPC clients on one computer to OPC servers on other, networked computers. As OPC has grown, so has the need to network it.

The most widely-used OPC protocol for real-time data access is OPC DA.  However, anyone who has attempted to network OPC DA knows that it is challenging, at best. The networking protocol for OPC DA is DCOM, which was not designed for real-time data transfer. DCOM is difficult to configure, responds poorly to network breaks, and has serious security flaws. Using DCOM between different LANs, such as connecting between manufacturing and corporate LANs, is sometimes impossible to configure. Using OPC DA over DCOM can also require more network traffic than some networks can handle, whether because of bandwidth limitations or the high traffic already on the system. To overcome these limitations, there are various tunnelling solutions on the market. This article will look at how tunnelling OPC DA solves the issues associated with DCOM, and show you what to look for in a tunnelling product.

Eliminating DCOM

The goal of tunnelling OPC DA is to eliminate DCOM, which is commonly done by replacing the DCOM networking protocol with TCP. Instead of connecting the OPC client to a networked OPC server, the client program connects to a local tunnelling application, which acts as a local OPC server. The tunnelling application accepts requests from the OPC client and converts them to TCP messages, which are then sent across the network to a companion tunnelling application on the OPC server computer. There the request is converted back to OPC DA and is sent to the OPC server application for processing. Any response from the server is sent back across the tunnel to the OPC client application in the same manner.

[Figure: OPC Tunnelling]
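
As a rough illustration of the mechanism (not any vendor’s implementation), this Python sketch shows the server-side end of such a tunnel: it accepts a TCP connection, decodes each serialized request, hands it to a stand-in for the local OPC server, and returns the reply. The newline-framed JSON request format is invented for the example, and the loop handles a single client for simplicity.

    import json
    import socket

    def handle_request(request: dict) -> dict:
        # Stand-in for the local OPC server; a real tunnel would call
        # the OPC DA interface here instead.
        if request.get("op") == "read":
            return {"item": request["item"], "value": 42.0, "quality": "Good"}
        return {"error": "unsupported operation"}

    # Accept a connection from the client-side tunnel program, then
    # decode each request, process it locally, and send back the reply.
    with socket.create_server(("0.0.0.0", 9500)) as server:
        conn, _addr = server.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:
                reply = handle_request(json.loads(line))
                stream.write(json.dumps(reply) + "\n")
                stream.flush()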

This is how most tunnellers for OPC DA work, in principle. A closer look will show us that although all of them eliminate DCOM, there are some fundamentally different approaches to tunnelling architecture that lead to distinctly different results in practice. As you review tunnelling solutions, here are four things to look out for:

  1. Does the tunnelling product extend OPC transactions across the network, or does it keep all OPC transactions local?
  2. What happens to the OPC client and server during a network break?
  3. How does the tunnel support multiple client-server connections?
  4. Does the tunnelling product provide security, including data encryption, user authentication, and authorization?

1. Extended or Local OPC Transactions?

There are two basic types of tunnelling products on the market today, each with a different approach to the problem. The first approach extends the OPC transaction across the network link, while the second approach keeps all OPC transactions local to the sending or receiving computer.

[Figure: OPC Tunnelling Comparison]

Extending the OPC transaction across the network means that a typical OPC client request is passed across the network to the OPC server, and the server’s response is then passed all the way back to the client. Unfortunately, this approach preserves the synchronous nature of DCOM over the link, with all of its negative effects. It exposes every OPC client-server transaction to network issues like timeouts, delays, and blocking behavior. Link monitoring can reduce these effects, but it doesn’t eliminate them, as we shall see below.

On the other hand, the local OPC transaction approach limits the client and server OPC transactions to their respective local machines. For example, when the tunnelling program receives an OPC client request, it responds immediately to the OPC client with data from a locally cached copy. At the other end, the same thing happens. The tunnelling program’s job is then to maintain the two copies of the data (client side and server side) in constant synchronization. This can be done very efficiently without interfering with the function of the client and server. The result is that the data crosses the network as little as possible, and both OPC server and OPC client are protected from all network irregularities.
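
A minimal sketch of the locally cached data set this approach relies on, in Python: reads are answered immediately from the local copy, while the tunnel’s network thread applies updates as they arrive. All names are hypothetical.

    import threading

    class LocalCache:
        # Client-side image of the remote data set. Reads never block on
        # the network; a background synchronizer keeps the copy current.
        def __init__(self) -> None:
            self._lock = threading.Lock()
            self._data: dict[str, tuple[float, str]] = {}  # tag -> (value, quality)

        def read(self, tag: str) -> tuple[float, str]:
            # Answer the local OPC client immediately from the cache.
            with self._lock:
                return self._data.get(tag, (0.0, "Not Connected"))

        def apply_update(self, tag: str, value: float, quality: str = "Good") -> None:
            # Called by the tunnel's network thread whenever the remote
            # side pushes a change, keeping the two copies in sync.
            with self._lock:
                self._data[tag] = (value, quality)

    cache = LocalCache()
    cache.apply_update("valve7.position", 83.2)  # pushed from the remote end
    print(cache.read("valve7.position"))         # local, non-blocking read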

2. Handling Network Issues

There is a huge variety of network speeds and capabilities, ranging from robust LANs, to WANs running over T1 lines on multi-node internets, and on down to low-throughput satellite connections. The best tunnelling products give the best possible performance over any given kind of network.

To protect against network irregularities and breaks, any good tunnelling application will offer some kind of link monitoring. Typically this is done with a “heartbeat” message, where the two tunnel programs send messages to one another at a timed interval, for example every few seconds. If a reply isn’t received within a user-specified time, the tunnelling application assumes that the network is down. The OPC client and server may then be informed that the network is broken.

This sounds simple in principle. The problem arises in practice, when you have to specify the timeout used to identify a network disconnection. If you set the timeout too long, the client may block for a long time waiting for a reply, only to discover that the network is down. On the other hand, setting the timeout too short will give you false indications of network failure whenever the connection latency exceeds your expectations. The slower the network, the longer the timeout must be.
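
A bare-bones link monitor makes the trade-off concrete. In this Python sketch, the timeout is the knob discussed above; the value chosen is arbitrary.

    import time

    class LinkMonitor:
        # Declare the link down if no heartbeat reply arrives in time.
        # A longer timeout tolerates slow networks but delays detection;
        # a shorter one detects breaks quickly but risks false alarms.
        def __init__(self, timeout_s: float = 5.0) -> None:
            self.timeout_s = timeout_s
            self.last_reply = time.monotonic()

        def on_heartbeat_reply(self) -> None:
            self.last_reply = time.monotonic()   # the peer is alive

        def is_link_up(self) -> bool:
            return (time.monotonic() - self.last_reply) < self.timeout_s

    monitor = LinkMonitor(timeout_s=5.0)
    monitor.on_heartbeat_reply()   # a reply just came in
    print(monitor.is_link_up())    # True until 5 s pass with no reply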

However, this balancing act is only necessary if the tunnelling product uses the extended OPC approach. A product that offers local OPC transactions still provides link monitoring, but the OPC client and server are decoupled from the network failure detection. Consequently, the timeout can be set appropriately for the network characteristics—from a few hundred milliseconds for highly robust networks to many seconds, even minutes for extremely slow networks—without the risk of blocking the OPC client or server.

How the tunnelling product informs your OPC client of the network break also varies according to the tunnel product design. Products that extend the OPC transactions generally do one of two things:

  1. Synthesize an OPC server shutdown. The OPC client receives a shutdown message that appears to be coming from the server. Unaware of the network failure, the client instead operates under the assumption that the OPC server itself has stopped functioning.
  2. Tell the client nothing, and generate a COM failure the next time the client initiates a transaction. This has two drawbacks. First the client must be able to deal with COM failures, the most likely event to crash a client. Worse yet, since OPC clients often operate in a “wait” state without initiating transactions, the client may think the last data values are valid and up-to-date, never realizing that there is any problem.

Products that provide local OPC transactions offer a third option:

  3. Maintain the COM connection throughout the network failure, and alter the quality of the data items to “Not Connected” or something similar. This approach keeps the OPC connection open in a simple and robust way, and the client doesn’t have to handle COM disconnects.

3. Support for Multiple Connections

Every tunnelling connection has an associated cost in network load. Tunnelling products that extend OPC transactions across the network may allow many clients to connect through the same tunnel, but each client sends and receives data independently. For each connected client the network bandwidth usage increases. Tunnelling products that satisfy OPC transactions locally can handle any number of clients and servers on either end of the tunnel, and the data flows across the network only once. Consequently, adding clients to the system will not add load to the network. In a resource-constrained system, this can be a crucial factor in the success of the control application.

If you are considering multiple tunnelling connections, be sure to test for cross-coupling between clients. Does a time-intensive request from a slow client block other requests from being handled? Some tunnelling applications serialize access to the OPC server when multiple clients are connected, handling the requests one by one. This may simplify the tunnel vendor’s code, but it can produce unacceptable application behavior. If one client makes a time-consuming request via the tunnel, then other clients must line up and wait until that request completes before their own requests will be serviced. All clients block for the duration of the longest request by any client, reducing system performance and increasing latency dramatically.

On the other hand, if the tunnel satisfies OPC requests locally, this situation simply does not happen. The OPC transactions do not cross the network, so they are subject neither to network effects nor to serialization across the tunnel.

4. What About Security?

Whenever you get involved in networking plant data, security is a key concern. In fact, security is a primary reason for choosing tunnelling over DCOM. DCOM was never intended for use over a wide area network, so its security model is primarily designed to be easily configured only on a centrally administered LAN. Even making DCOM security work between two different segments of the same LAN can be extremely difficult. One approach to DCOM security is to firewall the whole system, so that nothing gets in or out, then relax the security settings on the computers inside the firewall. This is perhaps the best solution on a trusted network, but it is not always an option. Sometimes you have to transmit data out through the firewall to send your data across a WAN or even the Internet. In those cases, you are going to want a secure connection. Relaxed DCOM settings are simply not acceptable.

Most experts agree that there are three aspects to network security:

  • Data encryption is necessary to prevent anyone who is sniffing around on the network from reading your raw data.
  • User authentication validates each connecting user, based on their user name and password, or some other shared secret such as a private/public key pair.
  • Authorization establishes permissions for each of those authenticated users, and gives access to the appropriate functionality.

There are several options open to tunnelling vendors to provide these three types of security. Some choose to develop their own security solution from the ground up. Others use standard products or protocols that many users are familiar with. These include:

SSL (Secure Sockets Layer) – Provides data encryption only, but is very convenient for the user. Typically, you just check a box in the product to activate SSL data encryption. The tunnelling product must provide user authentication and authorization separately.

VPN (Virtual Private Network) – Provides both encryption and authentication. VPN does not come as part of the product, per se, but instead is implemented by the operating system. The tunnelling product then runs over the VPN, but still needs to handle authorization itself.

SSH (Secure Shell) Tunneling – Provides encryption and authentication to a TCP connection. This protocol is more widely used in UNIX and Linux applications, but can be effective in MS-Windows. SSH Tunnelling can be thought of as a kind of point-to-point VPN.

As none of these standard protocols covers all three areas, you should ensure that the tunnelling product you choose fills in the missing pieces. For example, don’t overlook authorization. The last thing you need is for some enterprising young apprentice or intern to inadvertently connect to your live production system and start tweaking data items.

How Can You Know? Test!

The concept of tunnelling OPC DA is still new to many of us. Vendors of tunnelling products for OPC DA spend a good deal of time and energy just getting the basic point across: eliminate the hassles of DCOM by using TCP across the network. Less attention has been paid to the products themselves and their design. As we have seen, though, these details can make all the difference between a robust, secure connection and something significantly less.

How can you know what you are getting? Gather as much information as you can from the vendor, and then test the system. Download and install a few likely products. (Most offer a time-limited demo.) As much as possible, replicate your intended production system. Put a heavy load on it. Pull out a network cable and see what happens. Connect multiple clients, if that’s what you plan to do. Configure the security. Also consider other factors such as ease of use, OPC compliance, and how the software works with other OPC-related tasks you need to do.

If you are fed up with DCOM, tunnelling OPC DA provides a very good alternative. It is a handy option that any engineer or system integrator should be aware of. At the very least, you should certainly find it an improvement over configuring DCOM. And with the proper tools and approach, you can also make it as robust and secure as your network will possibly allow.

Security for Industrial IoT

The issue of remote access to data from an industrial system is not new.  For years plant owners have been creating ways for their managers, operators, maintenance technicians and partners to gain access to the valuable real-time information generated by their plants.  Innovative business practices, such as globalization and just-in-time manufacturing, have driven a need for low-latency remote access to this data, often through untrusted networks to semi-trusted end users.  For example, a manufacturer may want to share production information with a remote supplier, but not provide login access to the manufacturing system or database.

Several fundamental security problems have arisen from this need for remote real-time access:

Exposure to attack from the Internet.  When a plant allows a user to access the system remotely, it naturally creates an attack surface that malicious actors can attempt to exploit to gain access to the system.

Exposure to attack from the IT network.  If a plant allows a user to access the system remotely, it also needs to expose itself to the network infrastructure of the company’s IT system.  Generally the plant network is a subnet within the larger company network.  Entry into the plant will be via the IT infrastructure.  Attacks from the IT network are less likely, but some kinds of problems in the IT network could disrupt normal network data flow on the plant network.  It is wise to separate the IT and plant networks as much as possible.

Remote access beyond the required data.  Giving a remote user access to a desktop, such as Microsoft RDP, means that a curious or malicious user can try to gain access to programs and data beyond what was intended.  Even if the user is trustworthy, but the user’s system is compromised, a remote access program becomes a point of attack into the plant system.

Exposure of a portion of the plant network.  Some plants have chosen to use VPN connections to limit Internet attacks.  However, a VPN effectively puts all participants onto a single local sub-network, giving the participating machines local network access to one another.  Compromising any machine on that network (such as a remote supplier’s) gives an attacker an opportunity to hack into the plant network via the VPN.

High cost.  VPNs, RDP entry points, firewalls and routers require ongoing attention and effort from IT personnel.  This cost increases as the number of participants in the system increases.  Plants that do not devote the resources to IT are more likely to implement their remote data access less securely.

How Can Skkynet Help?

Skkynet’s unique solution, SkkyHub™, provides a mechanism for dealing with all of the traditional security problems in remote plant data access.

NO Exposure to attack from the Internet.  Users of Skkynet’s SkkyHub install an agent within the plant that collects plant information and pushes it out to Skkynet’s real-time data servers.  Since this connection is outbound, from the plant to the Internet, there is no requirement for the plant to open any inbound TCP ports, and thus the plant never exposes itself to attack from the Internet.

NO Exposure to attack from the IT network.  It is good practice to isolate the plant from the IT network using a router that allows only out-bound connections from the plant into the IT network.  Using the SkkyHub service, the IT network can be treated as untrusted by the plant, and a firewall placed between the two that allows no inbound connections into the plant.  Disruptions on the IT network will not affect data flow within the plant network, though they could affect data flow from the plant to the Skkynet service.  The plant remains secure and functional, even if remote data access is degraded.

NO Remote access beyond the required data.  Using SkkyHub, the plant decides which data to make available remotely.  The plant engineer can choose any subset of the data produced by the plant, and make it available to remote users in data groups.  Each group has its own read/write permissions, as well as limits based on the remote user name and the IP address from which the remote user is connecting.  Remote users have no mechanism to extend their access to data beyond what the plant engineer allows.
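
As a purely hypothetical illustration (not Skkynet’s actual configuration format), the following Python sketch shows how per-group read/write permissions keyed on user name and source IP address might be checked:

    # Invented access table, for illustration only.
    GROUPS = {
        "line1_production": {
            "tags": {"line1.rate", "line1.temperature"},
            "permissions": {
                # (user name, source IP) -> allowed operations
                ("supplier_acme", "203.0.113.7"): {"read"},
                ("plant_engineer", "10.0.5.20"): {"read", "write"},
            },
        },
    }

    def is_allowed(user: str, ip: str, group: str, tag: str, op: str) -> bool:
        g = GROUPS.get(group)
        if g is None or tag not in g["tags"]:
            return False   # no access outside the defined data groups
        return op in g["permissions"].get((user, ip), set())

    print(is_allowed("supplier_acme", "203.0.113.7",
                     "line1_production", "line1.rate", "read"))   # True
    print(is_allowed("supplier_acme", "203.0.113.7",
                     "line1_production", "line1.rate", "write"))  # False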

NO Exposure of a portion of the plant network.  The SkkyHub service does not create a VPN, or any kind of general-purpose network construct.  It only makes a TCP connection for the transmission of data.  Consequently, no participating machine is ever exposed to any other via a local network or VPN. The data can be routed through network proxies, data proxies and DMZ servers to ensure that the plant network never has a direct connection to the Internet, even for outbound connections.  Participating systems in the Skkynet service never need to share a network.

NO High cost.  SkkyHub eliminates many security hurdles, thereby substantially reducing the IT cost of implementation.  Often, a plant can participate in the Skkynet service without any change to existing IT infrastructure.  The plant has no need to hire extra IT expertise or to install extra networking equipment.  Often the only cost is for configuration of the Skkynet agent at the plant and the Skkynet service itself.

Skkynet’s technology follows good industry practice by using SSL connections for all Internet traffic, and by validating the trust chains of certificates.  This enhances your security for Industrial IoT, and protects you against network snooping and against man-in-the-middle attacks.