Posts

IIoT – Ushering in a New Era of Predictive Maintenance (PdM)

In our previous blog we looked at how the Industrial IoT (IIoT) is transforming the field of Asset Performance Management (APM).  One key aspect of APM is the ability to maintain physical assets like hardware and machinery.  As the IIoT bolsters APM, we can expect it to also impact plant maintenance systems in a positive way.  In fact, experts in the field are suggesting that the IIoT is going to usher in a new era, an era of wide-scale Predictive Maintenance (PdM) for industrial systems.

Until now, there were essentially two approaches to equipment maintenance: run-to-failure, or fix it even if it “ain’t broke.”  These two approaches were necessary because it was often difficult or expensive to figure out when a machine would break down.  Both approaches were costly, though, because they meant either wasting time and materials repairing or replacing parts that still had plenty of life left, or shutting down the plant when something broke―or possibly both.

An Alternative – PdM

The alternative is Predictive Maintenance (PdM), which means regularly monitoring machines and repairing them just before they break.  Regular testing can be done by having staff periodically walk around the plant and take readings on portable instruments.  This has value, but more effective is continual monitoring―installing sensors directly onto machines to detect changes in vibration, temperature, and sound, and connecting them by wire to APM software.  Until recently this was prohibitively costly for all but the most expensive machinery.  But the advent of tiny, low-cost, wireless sensors, cloud computing, and new AI (Artificial Intelligence) solutions―in other words, IIoT technologies―has put the cost of efficient PdM within reach of far more companies, to be used on far more equipment, than ever before.

“In today’s competitive industrial world, predictive maintenance (PdM) is no longer a nice-to-have; it has become a necessity,” says Abhinav Khushraj of Petasense. “Traditional PdM methods have several limitations. However advancements in wireless, cloud and AI technology are disrupting the way PdM has been done in recent decades.”

Valuable as this new implementation of PdM may be, it’s possible that it’s just the tip of the iceberg.  Low-cost, IIoT-based PdM could cause a shift in thinking about preventative maintenance that could affect the whole enterprise, according to Eitan Vesely of Presenso.  He identifies a number of trends, such as PdM becoming more holistic in scope, covering multiple assets, and becoming a source of top-line growth, all thanks to Industrie 4.0 and IIoT.

In a recent blog Vesely said, “With Industry 4.0, executives are starting to consider the impact on top-line revenue from their big-data investments. With the shift from Industry 3.0 to Industry 4.0, metrics such as improved uptime and higher-production yield rates are replacing downtime as the driving force for investments in this technology category.”

Infrastructure is Needed

All of these initiatives require infrastructure.  The IIoT data that powers this far-reaching PdM must be transmitted and received securely, robustly, and quickly.  The wide variety of sensors with their multiple data protocols need to connect within the plant and, via gateway or directly, to the cloud.  Analytical engines rely on seamless connections to real-time streaming data.  Every step is needed, every step adds value.  As the new vision of IIoT-powered PdM begins to take shape, Skkynet is there, helping to make it happen.

Early Adopters Win Digital Dividends

Does the early bird really get the worm? According to a recent Harvard Business Review report, it does, if that bird is an early adopter of technology. The report from HBR Analytic Services, titled The Digital Dividend – First Mover Advantage, states that according to its survey, companies that adopt the newest technologies are more likely to grow their revenue and improve their market position.

Executives and top- and mid-level managers from hundreds of medium- and large-sized companies in the USA and around the world responded to the survey. Each company self-categorized its corporate posture as “IT pioneer” (34%), “follower” (35%), or “cautious” (30%). Each participant was questioned on the degree of adoption in their company of what the report calls the “Big Five” technologies: mobile computing, social media and networking, cloud computing, advanced analytics, and machine-to-machine (M2M) communications.

The results show that early adopters of technology experience the most growth. Overall, the IT pioneer companies grew twice as much as the followers, and three times as much as the cautious. Linking this growth to the adoption of new technologies, the report states that over half of the IT pioneers had made technology-powered changes to their business models or to the products and services they sell. On the other hand, less than a third of the followers implemented such changes, and only about one-tenth of the cautious did the same.

A Holistic Solution

Tony Recine, Chief Marketing Officer of Verizon Enterprise Solutions, the company that sponsored the report, made this comment: “The value of these new technologies lies not in what they can achieve on their own, but in their combined power as a holistic solution.”

Indeed. That is our vision as well. Each one of the “Big Five” technologies has significant value, and combining them offers a huge advantage to any early adopter ready to move quickly ahead of their peers. For example, linking machines to other machines, passing their data to the cloud, running real-time analytics on it, and putting the results into the hands of any user with a smart phone is no longer a futuristic vision, but reality. Consider the following scenarios.

1) A machine operator on the factory floor in Germany gets an alarm on his tablet PC. As he walks towards the problem area, he runs live analysis on the data coming in from the system, comparing it to historical data, and doing an archive search on similar scenarios. He also checks with his colleagues at branch plants in the UK and Canada, and looks at how their systems are performing at that time. Based on all these inputs, he can make a more informed decision about how to respond to the alarm.

2) Every few seconds each panel on a large, interconnected installation of solar arrays sends details about cloud cover and other local weather conditions, as well as the amount of power generated at that moment. This data is pooled and analyzed by big-data applications to determine the cost and output of any part of the system in real time. Management and customers can view up-to-the-second output trends and statistics for their area in a web browser or phone.

3) A water resources management company relays pump-station and tank-level data from small local utility companies to remote agricultural facilities using that water for irrigation. Farm managers and utility executives alike are given access to the relevant data for their systems, allowing them to monitor the entire supply and usage matrix, and collaborate on adjustments in real time, when necessary.

Scenarios like these, and many more, are possible. The technology is here. Secure access to in-house, remote, and M2M data via the cloud, redistributed to qualified users anywhere, is what the Secure Cloud Service is all about. Now it’s just a question of who adopts it, and when. And as we have learned from this latest Harvard Business Review report, early adopters tend to win.

Sidestepping Delays

There is a famous game in the business world called the Beer Game.  Developed at MIT’s Sloan School of Management, this game gives a taste of reality to novice and experienced managers alike.  In addition to lessons in management, the Beer Game illustrates an interesting similarity between business systems and industrial processes, and suggests to me how real-time data cloud services may be able to help business sidestep costly delays.

In the Beer Game, players representing retailers, wholesalers, distributors and producers of beer are responsible for keeping up with customer orders.  Everything goes pretty smoothly until there is a sudden increase in demand.  Due to a built-in request-and-response time delay at each level of supply, it takes a while for the increased orders to reach the factory, and still longer for the new supplies of beer to reach the retailers.

The delay in supply shipments causes a temporary shortage for retailers, so they keep sending in large orders for beer.  Eventually, when truckloads of beer finally start to arrive, their supplies overshoot demand, and the retailers now have to cut orders dramatically.  But the beer keeps coming.  Oscillations between supply and demand ensue, creating customer dissatisfaction, wasted resources, and loss of profit.  Hard feelings often arise between the game players at the various levels in the supply chain, as each blames the others for the losses.

The problem is, most players simply keep ordering beer as long as their customers or downstream distributors keep clamoring for it, not realizing the mistake until it is too late.  “If there were no time delays, this strategy would work well,” said MIT Professor John D. Sterman, who has run the game for many years.  So, in a way, the culprit here is the time lag.  Let’s see how a real-time approach might change the picture.

In the real-time world of industrial control, this is a familiar scenario.  If you have, for example, an oven running at a set temperature, and then turn a dial to raise the temperature, it takes a bit of time for the system to respond.  If it is tuned well, the heating mechanism will quickly bring the oven up to the newly set temperature, and maintain that setting.  If not, it may respond slowly, and possibly overshoot and undershoot the setting a few times until it finally stabilizes on the new temperature.

This type of behavior can be plotted on a graph.  Here are a few examples:


In these images, the red line represents the newly set temperature, the green line is the output from the heating mechanism, and the blue line is the actual temperature inside the oven.

Poor Response shows delayed communication and overreaction.  Notice that the heating mechanism output (green line) doesn’t start decreasing until the oven temperature hits the proper setting.  Poor feedback between the actual temperature in the oven and the heating mechanism causes a number of oscillations.

So-so Response is better, but there is still some overreaction.

Quick Response is the best.  A combination of an immediate and strong initial response with a tightly coupled feedback loop between the heating mechanism and the oven temperature means that the new setting is achieved rapidly, with minimum waste.
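
To make the tuning discussion concrete, here is a toy simulation in Python of the oven scenario.  The thermal model, gain, and delay values are invented for illustration only and are not drawn from any real controller; the point is simply that a long feedback delay produces the overshoot and oscillation of the Poor Response, while a tightly coupled loop settles quickly near the setpoint.

```python
# A toy simulation of the oven example above. The thermal model, gain,
# time step, and delay values are invented for illustration only; the
# point is that delayed feedback produces overshoot and oscillation,
# while tight feedback settles quickly near the setpoint.

def simulate_oven(feedback_delay_steps, gain=10.0, steps=400, dt=0.1):
    """Return the oven temperature trace for a given feedback delay."""
    setpoint = 100.0             # newly set temperature (red line)
    temp = 20.0                  # actual oven temperature (blue line)
    readings = [temp]            # past measurements, to model delayed feedback
    trace = []
    for _ in range(steps):
        # The controller acts on a stale reading if feedback is delayed.
        measured = readings[max(0, len(readings) - feedback_delay_steps)]
        heater = max(0.0, gain * (setpoint - measured))    # green line
        # Simple plant: heater input warms the oven, heat losses cool it.
        temp += dt * (0.05 * heater - 0.02 * (temp - 20.0))
        readings.append(temp)
        trace.append(temp)
    return trace

poor = simulate_oven(feedback_delay_steps=20)   # sluggish, loosely coupled feedback
quick = simulate_oven(feedback_delay_steps=1)   # tightly coupled feedback
print(f"peak with slow feedback:  {max(poor):.1f}")   # overshoots well past 100
print(f"peak with tight feedback: {max(quick):.1f}")  # stays near the setpoint
```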

How does this apply to the Beer Game and the business world?  Using real-time cloud technology, it should be possible to connect all the data related to the beer sales, ordering, distribution and production into a single, seamless flow.  Imagine if each player in the game had a window into actual production figures and supply inventories at every level, updated in real time.  The factory could see immediate spikes in demand and retailers could gauge supply levels, while distributors and wholesalers could monitor the flow of orders and shipments up and down the supply chain.

Of course, there will always be time constraints in the actual beer shipments.  But that doesn’t mean we have to settle for frustrations originating in the paper-based systems of the last century.  With a real-time cloud approach, many “inevitable” delays can simply be sidestepped.

Cloud Economics 5: Data on Tap

How is cloud computing like buying a cup of coffee?  Joe Weinman uses a clever analogy of purchasing a cup of coffee to explain some of the factors that go into providing faster, and therefore more valuable, cloud services.  I thought it would be fun to see how his coffee-buying model may or may not fit with the economics of real-time cloud computing.

To speed up the process of getting a cup of coffee from your local coffee shop (and data from your cloud service), Weinman suggests several options for the service provider and customer:

  • Optimize the process by streamlining and reducing the number of tasks that the coffee shop staff need to carry out to prepare a cup of coffee.  In cloud computing, this equates to optimizing algorithms and implementing other processing efficiencies on the server.
  • Use more resources, such as hiring more staff behind the counter to make and pour coffee, so that multiple customers can be served simultaneously.  Cloud service providers do something similar when they provide parallel processing for large computing tasks.
  • Reduce latency by opening a coffee shop closer to the customer’s office, or by the customer moving closer to the shop.  We see this playing out in certain situations when cloud customers who require ultra-high-speed performance actually move their physical location to be closer to the data center.
  • Reduce round trips for coffee by picking up a whole trayful of coffees on each trip to the shop.  In a similar way, it is sometimes possible to send multiple requests or receive multiple replies in a single transaction with the cloud (a short sketch of this idea follows the list).
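
As a rough illustration of that last point, the sketch below compares fetching data items one at a time against fetching them in a single batched transaction.  The item names, the fetch functions, and the 50 ms network delay are all invented stand-ins; only the round-trip counting matters.

```python
# Hypothetical sketch of "reduce round trips": fetching items one at a
# time versus batching them into a single transaction. The 50 ms delay
# and the item names are invented; only the round-trip count matters.

import time

NETWORK_DELAY = 0.05   # pretend each round trip to the cloud costs 50 ms

def fetch_one(item):
    """One request, one reply: a full round trip per item."""
    time.sleep(NETWORK_DELAY)
    return f"value-of-{item}"

def fetch_many(items):
    """Many items in a single transaction: one round trip in total."""
    time.sleep(NETWORK_DELAY)
    return {item: f"value-of-{item}" for item in items}

items = ["temperature", "pressure", "flow", "level", "vibration"]

start = time.perf_counter()
one_by_one = [fetch_one(item) for item in items]       # five round trips
print(f"one at a time: {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
batched = fetch_many(items)                            # a single round trip
print(f"batched:       {time.perf_counter() - start:.2f} s")
```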

In addition to these four options, Weinman suggests a rather drastic alternative: eliminate the need for the transaction altogether.  In the analogy, that would mean no longer buying coffee from the shop, and either making it yourself at the office or doing without.  This equates to not using cloud services at all.

What is the real-time approach?  Data on tap.  Instead of making round-trips to the coffee shop every few hours or days, just pipe the coffee directly to the office, and let it flow past your desk, always hot and fresh, ready to be scooped up and savored. Just dip your cup into the stream.

A key conceptual shift takes place when we implement real-time cloud computing.  There is no need for transactions to receive data.  The cycle of request-process-reply gets replaced by an always-on stream of data.  Thus there is minimal delay.
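
A bare-bones sketch of that shift might look like the following, with the data source, readings, and delays all simulated for illustration: a transactional read pays for a round trip on every value, while a subscriber registers interest once and new values simply arrive.

```python
# Illustrative only: a transactional request-reply read versus an
# always-on stream where the source pushes each value as it occurs.
# The simulated readings and the 50 ms "network" delay are invented.

import random
import time

def request_reply():
    """Transactional style: every value costs a full round trip."""
    time.sleep(0.05)                      # pretend network round trip
    return random.uniform(20.0, 25.0)     # pretend sensor reading

def subscribe(callback, count=5):
    """Streaming style: register interest once; values are pushed as produced."""
    for _ in range(count):
        time.sleep(0.01)                  # data is generated continuously
        callback(random.uniform(20.0, 25.0))

# Polling: ask, wait, receive -- over and over again.
polled = [request_reply() for _ in range(5)]

# Streaming: subscribe once, then new values simply arrive.
received = []
subscribe(received.append)

print("polled:  ", [f"{v:.1f}" for v in polled])
print("streamed:", [f"{v:.1f}" for v in received])
```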

In the physical world this would be considered wasteful.  I can see my grandfather, who lived through the Great Depression, recoiling in horror at the thought of those gallons per minute of undrunk coffee going down the drain somewhere.  But real-time data gets generated fresh all the time, and most of it quickly vaporizes into thin air anyway.  Best to put it into the hands of someone who can use it.  Data on tap means there is actually less waste.

But how, some may ask, can you possibly contain it?  How do you get a grip?  How can you analyze a moving target?  What if a highly valuable factoid escapes my cup and flows off into oblivion?

Different tools and skills are necessary for working with streaming data.  High-speed, in-line analytics that can keep up with the incoming flow will help decision-makers respond to ever-changing conditions.  Super-efficient real-time data historians that capture every event, large or small, will provide quick access to minute details occurring on a millisecond time scale.  Even now, experts are working on advanced methods for mining through the astronomically growing collection of “big data.”
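
One common pattern behind such in-line analytics, sketched below with invented data, is to update statistics incrementally as each value arrives, so the stream never needs to be stored or re-scanned before acting on it.

```python
# Incremental, in-line analytics: statistics are updated as each value
# arrives, so the full stream never has to be stored or re-scanned.
# This sketch uses Welford's online algorithm; the data stream itself
# is invented for illustration.

import math
import random

class RunningStats:
    """Running mean and standard deviation over a stream of values."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0      # sum of squared deviations from the running mean

    def update(self, value):
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (value - self.mean)

    @property
    def stddev(self):
        return math.sqrt(self._m2 / (self.count - 1)) if self.count > 1 else 0.0

stats = RunningStats()
for _ in range(10_000):                      # pretend streaming sensor readings
    stats.update(random.gauss(100.0, 5.0))
    # a real system could raise an alarm here the moment a reading
    # drifts several standard deviations from the running mean

print(f"mean={stats.mean:.2f}  stddev={stats.stddev:.2f}")
```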

Perhaps more important than the tools, though, is a change of perspective.  To reap the benefits of cloud economics, we need to shift our thinking from a static world to a dynamic one.  Working with a data stream from the cloud offers new opportunities, and challenges our conventional thinking in some interesting ways.  We may continue to buy our coffee by the cup from the local shop, but maybe soon we’ll have our data on tap.

Cloud Economics 4: Does Location Matter?

If you’ve been following the recent blogs, you’ll know the “L” in Joe Weinman’s C L O U D definition stands for location independence.  One of the five distinctive attributes of cloud computing, location independence means that you can access your data anywhere.  Location doesn’t matter in cloud economics.

Or does it?  Like many things in life, there is a trade-off.  Time is related to distance, even in cloud computing.  The farther you are from your data source, the longer it takes for the data to reach you.  And since timeliness has value, a better location should give better value.  So maybe location does matter after all.  The question is, how much?

Let’s put things into perspective by translating distance into time.  The calculated speed of data flowing through a fiber optic cable is about 125 miles per millisecond (0.001 seconds).  In real-world terms, since Chicago is located about 800 miles from New York City, it would take about 6.4 milliseconds for a “Hello world” message to traverse that distance.
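
Written out as a quick calculation, using the 125 miles-per-millisecond figure above and rough great-circle distances:

```python
# The arithmetic behind the estimates in this post. The fiber speed is
# the ~125 miles per millisecond figure quoted above; distances are
# rough approximations, and routing and switching overhead are ignored.

FIBER_SPEED_MILES_PER_MS = 125.0

def one_way_latency_ms(distance_miles):
    """Best-case one-way latency through fiber."""
    return distance_miles / FIBER_SPEED_MILES_PER_MS

print(f"Chicago -> New York (~800 mi): {one_way_latency_ms(800):.1f} ms")   # ~6.4 ms
print(f"New York <-> Singapore round trip (~9,500 mi each way): "
      f"{2 * one_way_latency_ms(9500):.0f} ms")                             # ~150 ms
```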

As we discussed last week, for certain automated trading platforms that operate in the realm of microseconds (0.000001 seconds), 6.4 milliseconds is an eon of lost time.  These systems can make or lose millions of dollars in the blink of an eye.  For that reason you’ll find the serious players setting up shop right next door to their data center.  The rest of us, on the other hand, can pretty much remain in our seats, even for real-time cloud applications.

Why?  Well, first of all, the majority of industrial applications are already optimized for location.  Most SCADA systems are implemented directly inside a plant, or as close as physically practical to the processes they monitor.  Engineers who configure wide-area distributed systems are well aware of the location/time trade-offs involved, and take them into account in their designs.  Furthermore, they keep their mission-critical data communication self-contained, not exposed to the corporate LAN, much less to potential latencies introduced by passing data through the cloud.

Of course, a properly configured hybrid cloud or cloud-enhanced SCADA can separate the potential latencies of the cloud system from the stringent requirements of the core system.  What results is a separation between the deterministic response of the control system and the good-enough response time of the cloud system, which we have defined in a previous blog as “remote accessibility to data with local-like immediacy.”

Another area where the location question arises is for the Internet of Things.  As we have seen, great value can be derived from connecting devices through the cloud.  These of course can be located just about anywhere, and most of them can send data as quickly as required.  For example, devices like temperature sensors, GPS transmitters, and RFID chips respond to environmental input that is normally several orders of magnitude slower than even a slow Internet connection.  Latencies in the range of even a few hundred milliseconds make little difference to most users of this data.  People don’t react much faster than that, anyway.

As we have already seen, user interactions with a cloud system have a time cushion of about 200 milliseconds (ms), the average human response time.  How much of that gets consumed by the impact of location?  Joe Weinman tells us that the longest possible round trip message, going halfway around the world and back, such as from New York to Singapore and back to New York, takes about 160 ms.  Not bad.  That seems to leave some breathing room.  But Weinman goes on to point out that real-world HTTP response times vary between countries, ranging from just under 200 ms to almost 2 seconds.  And even within a single country, such as the USA, average latencies can reach a whole second for some locations.

However, a properly designed real-time cloud system still has a few important cards to play.  Adhering to our core principles for data rates and latency, we recall that a good real-time system does not require round-trip polling for data updates.  A single subscribe request will tell the data source to publish the data whenever it changes.  With the data being pushed to the cloud, no round trips are necessary.  This elimination of the “response” cycle cuts the time in half.  Furthermore, a data-centric infrastructure removes the intervening HTML, XML, SQL, etc. translations, freeing the raw data to flow in its simplest form across the network.
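
A minimal sketch of that publish-on-change idea might look like the following.  The DataPoint class, the deadband, and the simulated sensor scan are hypothetical illustrations, not a description of any particular product’s API.

```python
# Invented sketch of publish-on-change: after a single subscription, the
# source sends a value only when it actually changes, so there is no
# polling and no "response" leg to wait for.

import random

class DataPoint:
    """A single value that notifies its subscribers only when it changes."""
    def __init__(self, name, deadband=0.1):
        self.name = name
        self.deadband = deadband           # ignore changes smaller than this
        self._value = None
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)     # one request, then only pushes

    def update(self, new_value):
        if self._value is None or abs(new_value - self._value) >= self.deadband:
            self._value = new_value
            for callback in self._subscribers:
                callback(self.name, new_value)     # push: no reply to wait for

temperature = DataPoint("oven.temperature")
temperature.subscribe(lambda name, value: print(f"{name} -> {value:.2f}"))

# Simulate a sensor scan: only meaningful changes cross the network.
reading = 100.0
for _ in range(20):
    reading += random.uniform(-0.05, 0.15)
    temperature.update(reading)
```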

What does this do to our Singapore-to-New York scenario?  Does it now approach 80 ms?  It’s quite possible.  Such a system would have to be implemented and tested under real-world conditions, but there is good reason to believe that for many locations with modern infrastructures, data latency can be well under the magic 200 ms threshold.  To the extent that this is true, location really does not matter.

Cloud Economics 3: The Value of Timeliness

The other day at our local supermarket the line seemed to be going slower than usual.  When it came my turn to pay, I realized why.  The store had “upgraded” their debit card readers, and the new type of machine was agonizingly slow.  Instead of the usual one second to read my card and tell me to enter my PIN number, the thing took at least three whole seconds.  Then it took an additional couple of seconds to calculate and complete the transaction.

Now you might think I’m making a big deal about nothing, but don’t we all expect instant response these days?  There is an enormous value in timeliness, especially when you are providing a service.  The “single most important factor in determining a shopper’s opinion of the service he or she receives is waiting time,” according to Paco Underhill, CEO of Envirosell, in his book Why We Buy.  He continues, “… a short wait enhances the entire shopping experience and a long one poisons it.”  This insight was quoted and expanded on by Joe Weinman in his book Cloudonomics.

Weinman points out the direct relationship between timeliness and the bottom line.  For example, he quotes a recent Aberdeen Group study showing that a one-second delay in load time for a web page causes an 11% drop in page views, which cascades into a 7% reduction in conversions (people taking action), and a 16% decrease in customer satisfaction.

Well below the one-second benchmark, new interactive abilities on the web compete to beat the speed of human reaction time.  Since I can type fairly quickly, I’m not a big fan of the Google pop-down suggestion box, but you have to admire the technology.  For the first letter you type, it goes out and finds a list of the most-searched words.  Each new letter modifies the list, completing a round-trip message to the server before you can even type the next letter.  How’s that for quick service?  No wonder I get frustrated at the supermarket.

Computer-to-computer communication operates at still finer magnitudes of scale.  For example, one of the colocation/cloud data center services provided by the New York Stock Exchange guarantees a round trip time for data at under 70 microseconds.  That’s just 0.00007 seconds.  This speed is highly valued by the traders who use the service, and they are willing to pay a premium for it.  It’s basic cloud economics.

Wonderful as all this is, Weinman points out that there are limits to how quickly data can travel over a network.  Once you are already sending bits close to the speed of light through a fiber optic cable, the only other ways to speed things up are to move closer to your data source, and/or optimize your processing.  Whatever it takes to achieve it, faster response time means less wait, more satisfied customers, and more cash in the till.

Real-time cloud computing is all about the value of timeliness.  People who are watching and interacting with real-time processes expect at least the same kind of responsiveness as you get with Google.  When you click a button or adjust a gauge, the value should change immediately, not after 2 or 3 seconds.  All of this is possible when the core requirements for real-time computing are implemented, particularly those for high data rates and low latency.

How to move large quantities of rapidly changing data through the cloud, and allow meaningful user interaction in the 200 ms range of average human response time is a problem for the software engineers and techies to grapple with.  What is clear is that everyone—be it a customer waiting at the checkout counter, a manager viewing plant data, or a highly energized commodities trader—everyone at their own level knows the value of timeliness.