
How IoT Can Revolutionize the Oil and Gas Industry

Have you ever driven past a gas field or oil refinery at night, and seen the blazing orange fires raging atop the gas flare stacks?  What a waste, eh?  How much money must be going up in smoke?  How much CO2 is the oil and gas sector needlessly spewing into the atmosphere?  It makes you want to pipe that gas to your own house and cut your monthly heating bills, if nothing else.  Surely there must be some way to collect that gas, saving precious resources and the environment at the same time.

Solving the Problem

Perhaps this decades-old problem can be solved—with the help of the IoT (Internet of Things).  In a recent article, From Measurement to Management: How IoT and Cloud-Based Data is Changing the Oil and Gas Industry, Adam Chapman, Global Director of Marketing at Fluenta, lists in detail the waste and damaging effects of gas flaring, and then shows how the IoT can transform the status quo through remote asset management.

“IoT applications can not only support measurement, but enable businesses to manage more effectively in hostile and hazardous locations,” he said.  “For the oil and gas industry, IoT connectivity will enable organisations to control risk more effectively, and support the necessary transition from measurement to management of greenhouse gasses as the industry addresses the problem of emissions.”

Opportunity

There is a big opportunity here.  Chapman says that the total amount of gas flared every year is roughly equal to 30% of the gas consumed in the European Union—over 150 billion cubic meters.  In Africa, where much of the flaring takes place, the gas flared adds up to about half of the continent’s total energy use.  Capitalizing on this missed opportunity can be done through proper asset management. “When applied effectively, remote asset management through connected infrastructure will revolutionise oil and gas operations,” says Chapman.

Gathering real-time data using the IoT can cut manpower costs of offshore platforms, provide input for continuous emission monitoring systems, and help centralize Big Data repositories for company-wide comparative analysis, Chapman explains. “It is cloud technology and the ubiquity of internet connectivity that fundamentally brings significant change to remote asset management.”

Appropriate Technology

Offshore platforms and other remote industrial assets call for specialized cloud technology.  Skkynet provides not only the real-time data required by an industrial asset management system, but it also ensures secure connectivity and robust performance that is fully compatible with cellular and satellite technology commonly used in these kinds of applications.

Chapman says, “The combination of accurate, real-time information on remote assets and cloud technology can have a significant positive impact on moving an oil and gas operation from a monitoring approach to a management approach.”

Connecting the Worlds of IT and OT

Ever since the dawn of computing for commerce and industry, there has been a wide gap between the world of IT (Information Technology) and OT (Operations Technology).  Most of us are more familiar with IT—crunching numbers for financial applications, building databases for personnel records and corporate assets, and printing out sales reports, monthly earnings, and year-end statements.  The world of OT is more remote and esoteric: hidden behind firewalls and DMZs, sometimes on completely independent networks, mission-critical systems oversee the real-time processes that control a company’s production equipment and machinery.

Now, with the advent of Industry 4.0 and the Industrial IoT, these two worlds are being brought together.  In a recent article, The Internet of Things: Bridging the OT/IT divide, John Pepper, CEO and Founder of Managed 24/7, makes the case that the business value of operational data will be lost unless IT and OT learn to co-operate.  He said, “Unless organisations actively bridge the gap between OT and IT, the real operational benefits of the digital business will be lost.”

A risk of losing the prize

According to Managed 24/7’s research, companies are jumping on the IoT bandwagon and increasing their number of networked devices, but for lack of an overall policy to bridge the IT/OT gap, there is a real risk of losing the prize.  Critical OT information that has been unknown in the past is now becoming available, but only to those who know how to connect to it, and are willing to do so.

“Indeed, while the vast majority of new control systems used in buildings and factories – from water pumps to energy systems – include an Ethernet connection,” says Pepper, “few organisations are actively using this real-time insight to support CxO decision-making.”

Pepper’s call for deeper integration between the real-time data flowing through the OT world and the analytical capabilities of the IT world is a need that Skkynet was created to meet.  The predictive technologies that Pepper recommends can be realized and fully supported by Skkynet’s Industrial IoT technologies.  The vision of end-to-end monitoring and self-healing technologies that Pepper shares can become reality when we effectively connect the two worlds of IT and OT.

Case Study: Minera San Cristobal, Bolivia – 2

Using DataHub software to integrate video cameras and expert systems

Minera San Cristobal, owned by Apex Silver and Sumitomo Corporation, is one of the largest silver-zinc-lead mining projects in the world. The mine, located in the Potosi district of southwestern Bolivia, is expected to produce approximately 450 million ounces of silver, 8 billion pounds of zinc, and 3 billion pounds of lead.

As described in a companion article, the engineers at the San Cristobal mill used DataHub® software to connect their DeltaV Professional Plus SCADA system to an SQL Server database in the corporate offices. After witnessing the success of that project, the engineers decided to connect their two SGS expert systems to DeltaV in a similar way.

“We saw how well DataHub software transported OPC data across the network,” said Sr. Mario Mendizabal, Production Engineer at Minera San Cristobal, “so we thought it could help us connect to our Grinding and Flotation Expert Systems.”

In the San Cristobal mill the ore extracted from the mine is crushed, ground, and refined through a flotation process to yield concentrates of silver, zinc, and lead, which are then shipped abroad for final smelting. These processes are monitored and controlled using the DeltaV system.

Although the DeltaV system allows an operator to input setpoints and other values directly into the system, Sr. Mendizabal and his team wanted to apply an SGS Advanced Systems application to optimize two critical parts of the mineral refining process: grinding and flotation. Each expert system runs on a separate server. To add to the challenge, the Flotation Expert System also requires real-time data input from two banks of 25 video cameras. These cameras monitor the size, speed, and other qualities of the bubbles as they lift the valuable mineral particles to the surface, where they can be skimmed off as foam. There is one bank of cameras for the zinc flotation circuit, and another for lead. Each of these five systems (DeltaV, the Grinding Expert System, the Flotation Expert System, and the two camera systems) needed to be connected in real time.

Fortunately, each system had an OPC server. What was needed was a way to bridge the OPC servers, aggregate their data streams, and tunnel/mirror the data across the network for the other systems. Based on his previous success using DataHub software, Sr. Mendizabal chose to apply it to this task. He already had a DataHub instance connected to the DeltaV system. So he just installed a DataHub instance on each of the SGS servers, and each of the camera system servers. Then he connected those four DataHub instances to the main DataHub instance running on the DeltaV server.

“It didn’t take long at all to get the system configured,” said Sr. Mendizabal. “Since it is tunnel/mirroring across the network, we avoided DCOM settings and networking issues entirely. The connection is completely secure, and rock-solid.”

When the expert systems are switched on, the plant data flows from DeltaV to the Grinding Expert System and the Flotation Expert System. These systems continuously and intelligently adjust the values of the setpoints, and send them back in real-time to DeltaV, which passes them along to the relevant process. To make its calculations, the Flotation Expert System also takes into account the real-time data that is streaming in from the two Video Camera Systems.

“It is very important to know that when the expert system is controlling the plant we are trusting our production to DataHub software,” said Sr. Mendizabal. “We are very pleased with its performance, and highly recommend it for this kind of mission-critical work.”

ExxonMobil Seeks Open Automation Solutions

At the most recent ARC Industry Forum in Orlando, ExxonMobil announced that they are not satisfied with business as usual when it comes to industrial automation, and they are looking for something far superior to what is currently being offered.  On January 14, 2016, ExxonMobil announced that they had awarded a contract to Lockheed Martin to serve as the systems integrator in the early-stage development of a next-generation open and secure automation system for process industries.  Lockheed Martin is tasked with seeking out the architecture and tools needed for an “open, standards-based, secure and interoperable control system” that can be seamlessly integrated with existing facilities, as well as new and future systems.  ExxonMobil wants the hardware and software components to be commercially available and able to function in all current DCS markets.

Rather than simply replace their aging systems with the current state of the art, which is expensive, inflexible, and closed, ExxonMobil wants to leverage new, open, IoT, wireless, and cloud technologies to cut costs, enhance security, and reduce development time. As with other, adjacent areas of technology, they want to see step-change improvements, not incremental or bolted-on changes to obsolete architectures.

Originally presented at Industry Day on January 26, 2016

Their vision is for an open automation system that is standards-based, secure, and interoperable, and that will:

  1. Promote innovation & value creation
  2. Effortlessly integrate best-in-class components
  3. Afford access to leading-edge capability & performance
  4. Preserve the asset owner’s application software
  5. Significantly lower the cost of future replacement
  6. Employ an adaptive intrinsic security model

This vision reads like a list of the features and benefits of Skkynet’s connectivity solutions:

  1. SkkyHub, DataHub, and the ETK foster innovation and value creation by providing open-standards, real-time data connectivity for hardware and software from almost any vendor.
  2. These Skkynet tools allow users to integrate data from virtually any components.
  3. This kind of real-time data integration enables each component in turn to perform at its highest capacity.
  4. Any generation of equipment, from legacy to state-of-the-art, can be integrated.
  5. Connecting modules can be replaced, and the system itself gets continually updated.
  6. Connections from the DataHub or ETK to SkkyHub are secure by design.

We are currently in communication with Lockheed Martin, and bringing these advantages to ExxonMobil’s attention. We share their vision, and offer tested, verified, working solutions.

Tunnelling OPC DA – Know Your Options

Since OPC was introduced over fifteen years ago, it has seen a steady rise in popularity within the process control industry. Using OPC, automation professionals can now select from a wide range of client applications to connect to their PLCs and hardware devices. The freedom to choose the most suitable OPC client application for the job has created an interest in drawing data from more places in the plant. Industry-wide, we are seeing a growing need to connect OPC clients on one computer to OPC servers on other, networked computers. As OPC has grown, so has the need to network it.

The most widely-used OPC protocol for real-time data access is OPC DA.  However, anyone who has attempted to network OPC DA knows that it is challenging, at best. The networking protocol for OPC DA is DCOM, which was not designed for real-time data transfer. DCOM is difficult to configure, responds poorly to network breaks, and has serious security flaws. Configuring DCOM between different LANs, such as between manufacturing and corporate LANs, is sometimes impossible. Using OPC DA over DCOM also generates more network traffic than some networks can handle, whether because of bandwidth limitations or the high traffic already on the system. To overcome these limitations, there are various tunnelling solutions on the market. This article will look at how tunnelling OPC DA solves the issues associated with DCOM, and show you what to look for in a tunnelling product.

Eliminating DCOM

The goal of tunnelling OPC DA is to eliminate DCOM, which is commonly done by replacing the DCOM networking protocol with TCP. Instead of connecting the OPC client to a networked OPC server, the client program connects to a local tunnelling application, which acts as a local OPC server. The tunnelling application accepts requests from the OPC client and converts them to TCP messages, which are then sent across the network to a companion tunnelling application on the OPC server computer. There the request is converted back to OPC DA and is sent to the OPC server application for processing. Any response from the server is sent back across the tunnel to the OPC client application in the same manner.
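
To make that data flow concrete, here is a minimal sketch in Python of the TCP relay at the heart of this design. A real tunneller also implements a local COM-based OPC DA server for the client to connect to; the sketch covers only the network leg, and the host name, port, and frame format are illustrative assumptions, not any vendor’s actual protocol.

    # Minimal sketch of the TCP leg of an OPC DA tunnel (illustrative only).
    # A real tunneller also exposes a local COM-based OPC server; here we
    # show only how a serialized request crosses the network and how the
    # response returns. The 4-byte length-prefix framing is an assumption.
    import socket
    import struct

    def send_frame(sock, payload):
        # Length-prefix each message so frames survive TCP stream semantics.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("tunnel peer closed the connection")
            buf += chunk
        return buf

    def recv_frame(sock):
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)

    # Client-side endpoint: forward one serialized OPC request and wait
    # for the serialized response from the server-side endpoint.
    def forward_request(tunnel, request):
        send_frame(tunnel, request)
        return recv_frame(tunnel)

    if __name__ == "__main__":
        tunnel = socket.create_connection(("opc-server-host", 4502))  # assumed address
        print(forward_request(tunnel, b"READ item=FIC101.PV"))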

OPC Tunnelling

This is how most tunnellers for OPC DA work, in principle. A closer look will show us that although all of them eliminate DCOM, there are some fundamentally different approaches to tunnelling architecture that lead to distinctly different results in practice. As you review tunnelling solutions, here are four things to look out for:

  1. Does the tunnelling product extend OPC transactions across the network, or does it keep all OPC transactions local?
  2. What happens to the OPC client and server during a network break?
  3. How does the tunnel support multiple client-server connections?
  4. Does the tunnelling product provide security, including data encryption, user authentication, and authorization?

1. Extended or Local OPC Transactions?

There are two basic types of tunnelling products on the market today, each with a different approach to the problem. The first approach extends the OPC transaction across the network link, while the second approach keeps all OPC transactions local to the sending or receiving computer.

OPC Tunnelling Comparison

Extending the OPC transaction across the network means that a typical OPC client request is passed across the network to the OPC server, and the server’s response is then passed all the way back to the client. Unfortunately, this approach preserves the synchronous nature of DCOM over the link, with all of its negative effects. It exposes every OPC client-server transaction to network issues like timeouts, delays, and blocking behavior. Link monitoring can reduce these effects, but it doesn’t eliminate them, as we shall see below.

On the other hand, the local OPC transaction approach limits the client and server OPC transactions to their respective local machines. For example, when the tunnelling program receives an OPC client request, it responds immediately to the OPC client with data from a locally cached copy. At the other end, the same thing happens. The tunnelling program’s job is then to maintain the two copies of the data (client side and server side) in constant synchronization. This can be done very efficiently without interfering with the function of the client and server. The result is that the data crosses the network as little as possible, and both OPC server and OPC client are protected from all network irregularities.
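
As a rough illustration of this local-transaction model, the sketch below (Python, with assumed names and data shapes) shows a cache that answers client reads instantly while a tunnel receive thread applies updates pushed from the remote end:

    # Sketch of the "local transaction" model (illustrative assumptions
    # throughout): client reads are answered from a local cache with no
    # network round trip; a tunnel receive thread keeps the cache in sync.
    import threading
    import time

    class LocalPointCache:
        def __init__(self):
            self._lock = threading.Lock()
            self._points = {}  # item id -> (value, quality, timestamp)

        def read(self, item):
            # Answer the OPC client immediately; never block on the network.
            with self._lock:
                return self._points.get(item, (None, "Not Connected", None))

        def apply_update(self, item, value, quality):
            # Called by the tunnel's receive thread when the remote side
            # pushes a change; only changed values cross the network.
            with self._lock:
                self._points[item] = (value, quality, time.time())

The same structure would run at both ends of the tunnel, so client writes are mirrored to the server side in the same way that server updates are mirrored to the client side.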

2. Handling Network Issues

There is a huge variety of network speeds and capabilities, ranging from robust LANs, to WANs running over T1 lines on multi-node internets, and on down to low-throughput satellite connections. The best tunnelling products give the best possible performance over any given kind of network.

To protect against network irregularities and breaks, any good tunnelling application will offer some kind of link monitoring. Typically this is done with a “heartbeat” message, where the two tunnel programs send messages to one another on a timed interval, for example every few seconds. If a reply isn’t received back within a user-specified time, the tunnelling application assumes that the network is down. The OPC client and server may then be informed that the network is broken.

In practice this sounds simple. The problem arises when you have to specify the timeout used to identify a network disconnection. If you set the timeout too long, the client may block for a long time waiting for a reply, only to discover that the network is down. On the other hand, setting the timeout too short will give you false indications of a network failure if for some reason the connection latency exceeds your expectations. The slower the network, the greater the timeout must be.
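
A heartbeat monitor of the kind described above can be sketched in a few lines of Python; the interval and timeout values here are placeholders that would be user-configurable in a real product:

    # Sketch of heartbeat-based link monitoring with a configurable timeout.
    import time

    class LinkMonitor:
        def __init__(self, send_ping, timeout_s=5.0, interval_s=2.0):
            self.send_ping = send_ping      # callable that sends a ping frame
            self.timeout_s = timeout_s      # too short: false failure alarms
            self.interval_s = interval_s    # too long: slow failure detection
            self.last_reply = time.monotonic()
            self.link_up = True

        def on_pong(self):
            # Called by the receive thread whenever a heartbeat reply arrives.
            self.last_reply = time.monotonic()
            self.link_up = True

        def run(self):
            while True:
                self.send_ping()
                time.sleep(self.interval_s)
                if time.monotonic() - self.last_reply > self.timeout_s:
                    self.link_up = False  # notify the endpoints: network is down

In an extended-transaction design, this timeout directly gates how long the OPC client may block; in a local-transaction design it only drives status reporting.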

However, this balancing act is only necessary if the tunnelling product uses the extended OPC approach. A product that offers local OPC transactions still provides link monitoring, but the OPC client and server are decoupled from the network failure detection. Consequently, the timeout can be set appropriately for the network characteristics—from a few hundred milliseconds for highly robust networks to many seconds, even minutes for extremely slow networks—without the risk of blocking the OPC client or server.

How the tunnelling product informs your OPC client of the network break also varies according to the tunnel product design. Products that extend the OPC transactions generally do one of two things:

  1. Synthesize an OPC server shutdown. The OPC client receives a shutdown message that appears to be coming from the server. Unaware of the network failure, the client instead operates under the assumption that the OPC server itself has stopped functioning.
  2. Tell the client nothing, and generate a COM failure the next time the client initiates a transaction. This has two drawbacks. First, the client must be able to deal with COM failures, which are among the most likely events to crash a client. Worse yet, since OPC clients often operate in a “wait” state without initiating transactions, the client may think the last data values are valid and up-to-date, never realizing that there is any problem.

Products that provide local OPC transactions offer a third option:

  1. Maintain the COM connection throughout the network failure, and alter the quality of the data items to “Not Connected” or something similar. This approach, sketched below, keeps the OPC connection open in a simple and robust way, and the client doesn’t have to handle COM disconnects.
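
That quality-flag behaviour might look like this, reusing the illustrative LocalPointCache from the earlier sketch (again, an assumption-laden sketch, not any vendor’s actual implementation):

    # Sketch: on link failure, keep the local OPC connection open and mark
    # every cached point "Not Connected"; values resynchronize when the
    # link returns. Reuses the illustrative LocalPointCache from above.
    def mark_link_down(cache):
        # In a real product this would be a method on the cache itself.
        with cache._lock:
            for item, (value, _quality, ts) in list(cache._points.items()):
                cache._points[item] = (value, "Not Connected", ts)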

3. Support for Multiple Connections

Every tunnelling connection has an associated cost in network load. Tunnelling products that extend OPC transactions across the network may allow many clients to connect through the same tunnel, but each client sends and receives data independently. For each connected client the network bandwidth usage increases. Tunnelling products that satisfy OPC transactions locally can handle any number of clients and servers on either end of the tunnel, and the data flows across the network only once. Consequently, adding clients to the system will not add load to the network. In a resource-constrained system, this can be a crucial factor in the success of the control application.
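
The bandwidth argument is easy to see in a sketch: one update arrives over the tunnel once and is fanned out locally to every connected client. The names below are illustrative assumptions:

    # Sketch: a single network subscription fans out to any number of
    # local OPC clients, so adding clients adds no network load.
    class FanOut:
        def __init__(self):
            self.subscribers = []  # one callback per locally connected client

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def on_network_update(self, item, value, quality):
            # The update crossed the network exactly once; delivering it to
            # each client is a local function call, not more network traffic.
            for notify in self.subscribers:
                notify(item, value, quality)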

If you are considering multiple tunnelling connections, be sure to test for cross-coupling between clients. Does a time-intensive request from a slow client block other requests from being handled? Some tunnelling applications serialize access to the OPC server when multiple clients are connected, handling the requests one by one. This may simplify the tunnel vendor’s code, but it can produce unacceptable application behavior. If one client makes a time-consuming request via the tunnel, then other clients must line up and wait until that request completes before their own requests will be serviced. All clients block for the duration of the longest request by any client, reducing system performance and increasing latency dramatically.

On the other hand, if the tunnel satisfies OPC requests locally, this situation simply does not happen. The OPC transactions do not cross the network, so they are not subject to network effects nor to serialization across the tunnel.

4. What About Security?

Whenever you get involved in networking plant data, security is a key concern. In fact, security is a primary reason for choosing tunnelling over DCOM. DCOM was never intended for use over a wide area network, so its security model is primarily designed to be easily configured only on a centrally administered LAN. Even making DCOM security work between two different segments of the same LAN can be extremely difficult. One approach to DCOM security is to firewall the whole system, so that nothing gets in or out, then relax the security settings on the computers inside the firewall. This is perhaps the best solution on a trusted network, but it is not always an option. Sometimes you have to transmit data out through the firewall to send your data across a WAN or even the Internet. In those cases, you are going to want a secure connection. Relaxed DCOM settings are simply not acceptable.

Most experts agree that there are three aspects to network security:

  • Data encryption is necessary to prevent anyone who is sniffing around on the network from reading your raw data.
  • User authentication validates each connecting user, based on their user name and password, or some other shared secret such as a private/public key pair.
  • Authorization establishes permissions for each of those authenticated users, and gives access to the appropriate functionality.

There are several options open to tunnelling vendors to provide these three types of security. Some choose to develop their own security solution from the ground up. Others use standard products or protocols that many users are familiar with. These include:

SSL (Secure Sockets Layer) – Provides data encryption only, but is very convenient for the user. Typically, you just check a box in the product to activate SSL data encryption. The tunnelling product must provide user authentication and authorization separately.
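
For example, in Python the tunnel’s TCP socket can be wrapped in TLS (the modern successor to SSL) with the standard ssl module; the host name, port, and CA file below are assumptions for illustration:

    # Sketch: encrypting the tunnel's TCP connection with TLS. This provides
    # encryption (and server authentication); user authorization must still
    # be handled by the tunnelling product itself.
    import socket
    import ssl

    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # A plant deployment would typically trust a private CA or pinned cert:
    # context.load_verify_locations("plant-ca.pem")

    raw = socket.create_connection(("opc-server-host", 4502))  # assumed address
    tunnel = context.wrap_socket(raw, server_hostname="opc-server-host")
    tunnel.sendall(b"...")  # framed tunnel traffic, now encrypted on the wire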

VPN (Virtual Private Network) – Provides both encryption and authentication. VPN does not come as part of the product, per se, but instead is implemented by the operating system. The tunnelling product then runs over the VPN, but still needs to handle authorization itself.

SSH (Secure Shell) Tunnelling – Provides encryption and authentication to a TCP connection. This protocol is more widely used in UNIX and Linux applications, but can be effective in MS-Windows. SSH Tunnelling can be thought of as a kind of point-to-point VPN.

As none of these standard protocols covers all three areas, you should ensure that the tunnelling product you choose fills in the missing pieces. For example, don’t overlook authorization. The last thing you need is for some enterprising young apprentice or intern to inadvertently link into your live production system and start tweaking data items.
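
Authorization itself can be as simple as a per-user permission table checked before each operation. The users and permissions in this sketch are made up for illustration:

    # Sketch: minimal per-user authorization checked on each request.
    PERMISSIONS = {
        "operator1": {"read"},           # may view live data only
        "engineer1": {"read", "write"},  # may also change setpoints
    }

    def authorize(user, action):
        return action in PERMISSIONS.get(user, set())

    # e.g. reject a write from the enterprising young intern:
    assert not authorize("intern1", "write")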

How Can You Know? Test!

The concept of tunnelling OPC DA is still new to many of us. Vendors of tunnelling products for OPC DA spend a good deal of time and energy just getting the basic point across: eliminate the hassles of DCOM by using TCP across the network. Less attention has been put on the products themselves, and their design. As we have seen, though, these details can make all the difference between a robust, secure connection and something significantly less.

How can you know what you are getting? Gather as much information as you can from the vendor, and then test the system. Download and install a few likely products. (Most offer a time-limited demo.) As much as possible, replicate your intended production system. Put a heavy load on it. Pull out a network cable and see what happens. Connect multiple clients, if that’s what you plan to do. Configure the security. Also consider other factors such as ease of use, OPC compliance, and how the software works with other OPC-related tasks you need to do.

If you are fed up with DCOM, tunnelling OPC DA provides a very good alternative. It is a handy option that any engineer or system integrator should be aware of. At the very least, you should certainly find it an improvement over configuring DCOM. And with the proper tools and approach, you can also make it as robust and secure as your network will possibly allow.

Download White Paper (PDF)

Skkynet’s ETK for the Renesas Synergy™ Platform Now Available at No Cost

Engineers and developers using the Renesas Synergy Platform can now connect their projects to the Industrial IoT quickly, securely, and free of any royalty or development license fees.

Mississauga, Ontario, February 2, 2016 – Skkynet Cloud Systems, Inc. (“Skkynet” or “the Company”) (OTCQB:SKKY), a global leader in real-time cloud information systems, announces that Skkynet’s ETK (Embedded Toolkit) for the Renesas Synergy™ Platform is now available for download from the Renesas Synergy Gallery, as part of the Renesas Synergy Verified Software Add-on (VSA) Program. The ETK is offered by Skkynet free of any royalty or development-license fees, allowing engineers and developers to quickly and securely enable their projects for the IoT, while providing a platform to earn a recurring revenue stream.

“Using the ETK gets Renesas Synergy developers up and running on the IoT right away on a robust, secure, end-to-end system,” said Paul Thomas, President of Skkynet. “They can send data from their project to our SkkyHub™ service and view their data in a web browser, or link to other devices. They can also connect via the ETK to our DataHub® industrial middleware, and link their project to virtually any in-plant industrial system.”

Last month Renesas announced the start of mass production of the Renesas Synergy Platform, which is an integration of qualified software, scalable microcontrollers (MCUs), hardware solutions and tools designed to reduce development time, lower the total cost of ownership, and eliminate obstacles that engineers face when developing products for the IoT.  The Renesas Synergy VSA Program was launched as a way to broaden the value of the Synergy Platform and give customers access to specialized software, like Skkynet’s ETK, that has already been verified as compatible with the Synergy Software Package (SSP).

Skkynet’s SkkyHub service allows industrial and embedded systems to securely network live data in real time from any location. It enables bidirectional supervisory control, integration and sharing of data with multiple users, and real-time access to selected data sets in a web browser. The service is capable of handling over 50,000 data changes per second per client, at speeds of just microseconds over Internet latency. Secure by design, it requires no VPN, no open firewall ports, no special programming, and no additional hardware.

Skkynet’s Cogent DataHub industrial middleware solution connects to virtually any industrial system using standard protocols such as OPC, Modbus, TCP, and ODBC to support OPC networking, server-server bridging, aggregation, data logging, redundancy, and web-based HMI.

The ETK, DataHub, and SkkyHub will be demonstrated live by Renesas at ATX West 2016 in Anaheim, California, February 9-11, 2016 (Hall B, Booth #4594), as well as at the Nineteenth Annual ARC Industry Forum, “Industry in Transition: Navigating the New Age of Innovation,” in Orlando, Florida, February 8-11, 2016, hosted by the ARC Advisory Group.

About Skkynet Cloud Systems, Inc.

Skkynet Cloud Systems, Inc. (OTCQB:SKKY) is a global leader in real-time cloud information systems. The Skkynet Connected Systems platform includes the award-winning SkkyHub™ service, DataHub®, WebView™, and embedded toolkit software. The platform enables real-time data connectivity for industrial, embedded, and financial systems, with no programming required. Skkynet’s platform is uniquely positioned for the “Internet of Things” and “Industry 4.0” because unlike the traditional approach for networked systems, SkkyHub is secure-by-design. Customers include Microsoft, Caterpillar, Siemens, Metso, ABB, Honeywell, IBM, GE, Statoil, Goodyear, BASF, E.ON, Bombardier, and the Bank of Canada. For more information, see http://skkynet.com.

Safe Harbor:

This news release contains “forward-looking statements” as that term is defined in the United States Securities Act of 1933, as amended and the Securities Exchange Act of 1934, as amended. Statements in this press release that are not purely historical are forward-looking statements, including beliefs, plans, expectations or intentions regarding the future, and results of new business opportunities. Actual results could differ from those projected in any forward-looking statements due to numerous factors, such as the inherent uncertainties associated with new business opportunities and development stage companies. We assume no obligation to update the forward-looking statements. Although we believe that any beliefs, plans, expectations and intentions contained in this press release are reasonable, there can be no assurance that they will prove to be accurate. Investors should refer to the risk factors disclosure outlined in our annual report on Form 10-K for the most recent fiscal year, our quarterly reports on Form 10-Q and other periodic reports filed from time-to-time with the Securities and Exchange Commission.