Cloud Economics 4: Does Location Matter?

If you’ve been following the recent blogs, you’ll know the “L” in Joe Weinman’s C L O U D definition stands for location independence.  One of the five distinctive attributes of cloud computing, location independence means that you can access your data anywhere.  Location doesn’t matter in cloud economics.

Or does it?  Like many things in life, there is a trade-off.  Time is related to distance, even in cloud computing.  The farther you are from your data source, the longer it takes for the data to reach you.  And since timeliness has value, a better location should give better value.  So maybe location does matter after all.  The question is, how much?

Let’s put things into perspective by translating distance into time.  Data travels through a fiber optic cable at roughly 125 miles per millisecond (0.001 seconds).  In real-world terms, since Chicago is located about 800 miles from New York City, it would take about 6.4 milliseconds for a “Hello world” message to traverse that distance.
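The arithmetic above can be sketched in a few lines.  This is a back-of-the-envelope calculation only, using the ~125 miles per millisecond figure for light in fiber; the function name is ours, not from any particular library.

```python
# Propagation delay in fiber, assuming ~125 miles per millisecond.
FIBER_SPEED_MILES_PER_MS = 125.0

def one_way_delay_ms(distance_miles: float) -> float:
    """One-way propagation delay in milliseconds for a given distance."""
    return distance_miles / FIBER_SPEED_MILES_PER_MS

# Chicago to New York is about 800 miles: 6.4 ms one way.
print(one_way_delay_ms(800))
```

Real-world latency will be higher, of course, once routers, protocol overhead, and less-than-straight cable paths enter the picture; this gives only the physical lower bound.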

As we discussed last week, for certain automated trading platforms that operate in the realm of microseconds (0.000001 seconds), 6.4 milliseconds is an eon of lost time.  These systems can make or lose millions of dollars in the blink of an eye.  For that reason, you’ll find the serious players setting up shop right next door to their data center.  The rest of us, on the other hand, can pretty much remain in our seats, even for real-time cloud applications.

Why?  Well, first of all, the majority of industrial applications are already optimized for location.  Most SCADA systems are implemented directly inside a plant, or as close as physically practical to the processes they monitor.  Engineers who configure wide-area distributed systems are well aware of the location/time trade-offs involved, and take them into account in their designs.  Furthermore, they keep their mission-critical data communication self-contained, not exposed to the corporate LAN, much less to potential latencies introduced by passing data through the cloud.

Of course, a properly configured hybrid cloud or cloud-enhanced SCADA can separate the potential latencies of the cloud system from the stringent requirements of the core system.  What results is a separation between the deterministic response of the control system and the good-enough response time of the cloud system, which we have defined in a previous blog as “remote accessibility to data with local-like immediacy.”

Another area where the location question arises is for the Internet of Things.  As we have seen, great value can be derived from connecting devices through the cloud.  These of course can be located just about anywhere, and most of them can send data as quickly as required.  For example, devices like temperature sensors, GPS transmitters, and RFID chips respond to environmental input that is normally several orders of magnitude slower than even a slow Internet connection.  Latencies in the range of even a few hundred milliseconds make little difference to most users of this data.  People don’t react much faster than that, anyway.

As we have already seen, user interactions with a cloud system have a time cushion of about 200 milliseconds (ms), the average human response time.  How much of that gets consumed by the impact of location?  Joe Weinman tells us that the longest possible round trip message, going halfway around the world and back, such as from New York to Singapore and back to New York, takes about 160 ms.  Not bad.  That seems to leave some breathing room.  But Weinman goes on to point out that real-world HTTP response times vary between countries, ranging from just under 200 ms to almost 2 seconds.  And even within a single country, such as the USA, average latencies can reach a whole second for some locations.

However, a properly designed real-time cloud system still has a few important cards to play.  Adhering to our core principles for data rates and latency, we recall that a good real-time system does not require round-trip polling for data updates.  A single subscribe request will tell the data source to publish the data whenever it changes.  With the data being pushed to the cloud, no round trips are necessary.  This elimination of the “response” cycle cuts the time in half.  Furthermore, a data-centric infrastructure removes the intervening HTML, XML, SQL etc. translations, freeing the raw data to flow in its simplest form across the network.
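The publish-on-change idea can be illustrated with a minimal sketch.  The class and method names below are hypothetical, not from any specific product: the point is simply that after one subscribe request, updates are pushed only when a value actually changes, with no polling round trips.

```python
# Minimal publish-on-change sketch (illustrative names, not a real API).
from typing import Any, Callable, List

class DataPoint:
    """A named value that pushes updates to subscribers when it changes."""

    def __init__(self, name: str, value: Any = None):
        self.name = name
        self._value = value
        self._subscribers: List[Callable[[str, Any], None]] = []

    def subscribe(self, callback: Callable[[str, Any], None]) -> None:
        # One subscribe request; no further polling round trips needed.
        self._subscribers.append(callback)

    def set(self, value: Any) -> None:
        # Publish only on change -- unchanged values generate no traffic.
        if value != self._value:
            self._value = value
            for cb in self._subscribers:
                cb(self.name, value)

received = []
temp = DataPoint("boiler_temp")
temp.subscribe(lambda name, value: received.append((name, value)))
temp.set(78.5)   # pushed to subscriber
temp.set(78.5)   # no change, nothing sent
temp.set(79.0)   # pushed again
```

In a real deployment the callback would travel over a network connection rather than an in-process function call, but the traffic pattern is the same: data flows one way, from source to cloud, only when something changes.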

What does this do to our Singapore-to-New York scenario?  Does it now approach 80 ms?  It’s quite possible.  Such a system would have to be implemented and tested under real-world conditions, but there is good reason to believe that for many locations with modern infrastructures, data latency can be well under the magic 200 ms threshold.  To the extent that this is true, location really does not matter.

Cloud Economics 3: The Value of Timeliness

The other day at our local supermarket the line seemed to be going slower than usual.  When it came my turn to pay, I realized why.  The store had “upgraded” their debit card readers, and the new type of machine was agonizingly slow.  Instead of the usual one second to read my card and tell me to enter my PIN, the thing took at least three whole seconds.  Then it took an additional couple of seconds to calculate and complete the transaction.

Now you might think I’m making a big deal about nothing, but don’t we all expect instant response these days?  There is an enormous value in timeliness, especially when you are providing a service.  The “single most important factor in determining a shopper’s opinion of the service he or she receives is waiting time,” according to Paco Underhill, CEO of Envirosell, in his book Why We Buy.  He continues, “… a short wait enhances the entire shopping experience and a long one poisons it.”  This insight was quoted and expanded on by Joe Weinman in his book Cloudonomics.

Weinman points out the direct relationship between timeliness and the bottom line.  For example, he quotes a recent Aberdeen Group study showing that a one-second delay in load time for a web page causes an 11% drop in page views, which cascades into a 7% reduction in conversions (people taking action), and a 16% decrease in customer satisfaction.

Well below the one-second benchmark, new interactive abilities on the web compete to beat the speed of human reaction time.  Since I can type fairly quickly, I’m not a big fan of the Google drop-down suggestion box, but you have to admire the technology.  For the first letter you type, it goes out and finds a list of the most-searched words.  Each new letter modifies the list, completing a round-trip message to the server before you can even type the next letter.  How’s that for quick service?  No wonder I get frustrated at the supermarket.

Computer-to-computer communication operates at still finer magnitudes of scale.  For example, one of the colocation/cloud data center services provided by the New York Stock Exchange guarantees a round trip time for data at under 70 microseconds.  That’s just 0.00007 seconds.  This speed is highly valued by the traders who use the service, and they are willing to pay a premium for it.  It’s basic cloud economics.

Wonderful as all this is, Weinman points out that there are limits to how quickly data can travel over a network.  Once you are already sending bits close to the speed of light through a fiber optic cable, the only other ways to speed things up are to move closer to your data source, and/or optimize your processing.  Whatever it takes to achieve it, faster response time means less wait, more satisfied customers, and more cash in the till.

Real-time cloud computing is all about the value of timeliness.  People who are watching and interacting with real-time processes expect at least the same kind of responsiveness as you get with Google.  When you click a button or adjust a gauge, the value should change immediately, not after 2 or 3 seconds.  All of this is possible when the core requirements for real-time computing are implemented, particularly those for high data rates and low latency.

How to move large quantities of rapidly changing data through the cloud while allowing meaningful user interaction within the 200 ms range of average human response time is a problem for the software engineers and techies to grapple with.  What is clear is that everyone—be it a customer waiting at the checkout counter, a manager viewing plant data, or a highly energized commodities trader—everyone at their own level knows the value of timeliness.

Cloud Economics 2: Definitions

Like any good mathematician, Joe Weinman in his book Cloudonomics lays out some definitions right up front.  He chooses to define the concept of “cloud” in cloud computing in a way that brings out five essential attributes of cloud economics that are common to other cloud-like systems in business and life in general.  To make it easy to remember he gives his definition as a mnemonic: C L O U D.

Let’s see how these five attributes of any cloud system fit in with our understanding of real-time cloud computing:

C – Common infrastructure – refers to the ability to share resources.  A city park is like a cloud in that it can meet the needs of millions of apartment dwellers for some quality outdoor space—gardens, walkways, playgrounds, and sports fields.  Nobody feels overly crowded because they don’t all use the park at the same time, or in the same way.

As Weinman explains in detail later on in the book, non-cloud computing resources are often underutilized, which becomes a cost.  For example, some industrial applications require their software to run alone, on a separate server.  As the number of this kind of application grows, the waste of resources increases.  Where possible, using virtual machines is one way to share the resources of a single server to reduce this kind of waste.  This approach to sharing infrastructure is often used in cloud systems, as well as private systems.
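A quick, made-up example shows why consolidation pays.  The utilization figures below are illustrative assumptions, not data from the book:

```python
import math

# Illustrative consolidation arithmetic (made-up numbers): if each
# application averages low CPU utilization on a dedicated server,
# virtual machines let several apps share one physical host.
def hosts_needed(num_apps: int, avg_utilization: float,
                 target_utilization: float = 0.8) -> int:
    """Physical hosts needed when apps run as VMs on shared hardware."""
    total_load = num_apps * avg_utilization
    return math.ceil(total_load / target_utilization)

# Ten isolated apps at 10% average load would occupy ten dedicated
# servers, but only about two shared hosts (10 * 0.10 / 0.8 -> 1.25 -> 2).
print(hosts_needed(10, 0.10))
```

The same reasoning is what makes a city park work: peak loads rarely coincide, so shared capacity can be far smaller than the sum of everyone’s dedicated capacity.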

L – Location independence – means that the service is available pretty much everywhere.  You might not think of a fast-food franchise as a cloud service provider, but in a sense it is similar.  Just as you can get order-in or take-out service from your favorite burger outlet in many places around the country or even the world, so also can you access the cloud from practically any location.

The value of location independence for real-time systems is just beginning to be realized.  For decades data from industrial systems has been tightly locked down, behind firewalls and physically isolated systems.  But now, perhaps to the dismay of engineers and system integrators who rely on isolation for security reasons, upper management in many companies is waking up to the value of accessing that data from anywhere.

Of course, there is always a need to keep raw process data secure and free from interference, but advanced methods of keeping firewalls closed and permitting read-only access can help bring key real-time performance metrics to analysts and decision makers in the office, at home, or on the road.

At the same time, many embedded systems once lacked the power or connectivity to put their data online.  With the advent of the Internet of Things connecting cars, appliances, remote sensors, and a host of other devices directly to the Internet, we are witnessing a huge growth and interest in accessing live data from all kinds of sources, independent of location.

O – Online accessibility – is the availability of service via a network or the Internet.  Every service needs some form of access.  A restaurant needs an eating area, a movie theater needs seats and a view of the screen, a radio show needs transmitters and receivers.  As Weinman sums it up: “Without networks, there is no cloud.”  Real-time cloud systems can function well on private networks, and in many cases access to the Internet and public clouds will provide additional value.

U – Utility pricing – like the Water Works and Electric Company in the game of Monopoly, utility pricing means you only pay for what you use—be it water, electricity or computing power.  Usually this aspect of cloud computing goes hand-in-hand with on-demand resources.

D – on-Demand resources – the ability to bring in additional resources, or remove extra ones, to cope with variable demand.  For example, your house has plenty of space for your family and an occasional guest, but on special occasions like a big wedding you may need to engage the services of hotels or restaurants.

The flexibility to respond to market fluctuations is a real boon for retail and consumer-oriented companies who may see significant peaks and valleys of seasonal or irregular demand.  In our experience, most industrial and embedded real-time systems don’t undergo such large variations in demand for computing resources.  However, for systems too small or too dispersed to justify a dedicated, in-house SCADA system (such as those mentioned in our SCADA for the Masses discussion), on-demand resources and utility pricing may help make the cloud a viable solution.

Given the above C L O U D definitions, the economic value of any cloud computing system, real-time or not, depends on a number of variables and circumstances.  We need to consider these in their appropriate context to determine how real-time systems can benefit.

Cloud Economics 1: A Vision

For the past few months we’ve been looking at the technical side of real-time cloud computing.  We’ve touched on some of the requirements for supporting real-time data communications on the cloud, looked at how SCADA and embedded systems might benefit from accessing the cloud, and even considered how the term “real time” may be best applied to cloud computing.

Going forward, I thought it might be a good idea to switch gears a bit, and take a deeper look at the business and economic side of cloud computing, and see how the latest thinking about cloud economics may or may not apply to real-time applications.

A new book, Cloudonomics, by Joe Weinman, Senior Vice President of Cloud Services and Strategy at Telx, gives a profound yet accessible overview of the business value of cloud computing—in other words, cloud economics.  Among other things, the book’s cover blurb says, “Weinman drills down past the hype and hysteria, the myths and misconceptions, to uncover the fundamental principles underlying how the cloud works, how it’s used, and how it will evolve in a business context.”

With the vision of a mathematician, Weinman strips away the non-essential features of the cloud and breaks it down into its basic elements and principles.  At that level, he can demonstrate how “cloudy” ideas and concepts have been used for centuries.  For example, he shows the similarities between cloud computing and the transportation and lodging infrastructure of ancient Rome, complete with multi-protocol wide-area networks, pay-per-use resources, value-added services, regulatory agencies, security tokens, branding, advertising, and more.

Weinman uses lots of real-world examples to show how we find cloud concepts in every facet of life, such as hotels, taxicabs, and movie theaters.  At the same time, he introduces some simple mathematical theories and models that sometimes uphold and sometimes contradict much of the conventional wisdom that has grown up around cloud computing.

Through it all, he strives to adhere to three goals: 1) present a multidisciplinary view from a number of fields of economics, mathematics, natural sciences, and system dynamics; 2) plant seeds of ideas in areas related to cloud computing, which may be cultivated and developed by others; and 3) take an evergreen approach, where the concepts are so fundamental and universal that they will serve to inspire research and application in business for many years to come.

Although I haven’t read it exhaustively, I’ve not yet seen much mention of the application or value of real-time systems in the cloud.  This is not surprising, as this topic is still on the distant horizon for many leaders of thought.  Or, it could be that what applies to cloud computing in general also applies to real-time cloud computing.

This raises an interesting question: Is there any significant difference between the economics of the more familiar cloud systems of business and consumer applications, and the less-well-known real-time cloud systems for industrial and embedded applications?  We know there are some unique technical requirements.  Is there a fundamentally different business model for real-time cloud?

In the weeks to come we’ll take a look at some of the ideas presented by Weinman in Cloudonomics, and see how they may or may not apply to the special case of real-time cloud computing.

SCADA for the Masses?

In a recent LinkedIn discussion among the SCADA Professionals group, Manny Romero, Manager of Madison Technologies Industrial IT&C Division in Sydney, Australia, suggested that the cloud could provide “SCADA to the masses.”  This idea sounds interesting, so I thought we might take a closer look.

The premise is that traditional SCADA and cloud-enhanced services like M2M and others are not necessarily mutually exclusive.  Perhaps it is a false dichotomy.  Suppose you don’t have to choose?  Maybe you can enjoy the benefits of both.

Romero suggests that we can compare the controversy of SCADA vs. the Cloud to the early 80s when the PC began gaining popularity for business applications.  While PC advocates were eagerly announcing the death of the mainframe, many in the traditional computing world sneered at the lightweight upstarts, saying that nothing as rinky-dink as a PC could possibly replace the mainframe.

As it turns out, the mainframe didn’t get replaced.  Instead, PCs put tools like spreadsheets and relational databases within reach of individual managers and office staff.  And they opened up new application spaces in areas like education, personal publishing, gaming, and home finances.  Then, with the advent of the Internet, personal computing expanded into email, web surfing, online videos, and more.  In this way, the PC opened the door to “computing for the masses”.

This is what cloud computing may do for SCADA, according to Romero.  He believes that the SCADA systems currently in use will probably continue in their current form for many years to come, but at the same time, cloud-enabled systems may become more common.  How so?

The first thing that comes to mind is industrial and commercial applications that can use some SCADA functionality, but do not need or cannot afford a full-blown SCADA implementation.  Some may be getting by with a web portal and email/SMS messaging, and yet many would benefit from a more sophisticated system, as long as staffing and equipment costs were minimal.  Cloud-enabled SCADA could be a way to meet that need.

What about beyond the world of industrial applications?  Just as the PC revolution brought computing to the masses, could cloud computing bring SCADA to the masses of non-industrial users?  What is SCADA, after all?  Supervisory Control and Data Acquisition.  There is nothing in that definition that limits SCADA to factories, pipelines, and wind turbines.

The rapidly-growing Internet of Things is all about data access, and often includes forms of supervisory control.  As the number of connected devices continues to mushroom, there will be more demand for connectivity options from both the public and private sectors.  Home appliances and HVAC systems, cars and trucks, vending machines, security cameras, and many other types of consumer goods will be increasingly sending data and receiving supervisory control from ordinary citizens.  This could eventually be seen as “SCADA for the masses”.

Will these trends continue?  We won’t have to wait too long to find out.  Five or ten years from now people may take these ideas for granted.  Perhaps in another ten years after that someone will need to research to find out where exactly the term “SCADA for the masses” was first used.  As far as I’m concerned, it was from Manny Romero on LinkedIn, in August 2012.

SCADA Professionals Weigh In

For the past few weeks there has been a lively discussion in the SCADA Professionals group on LinkedIn.  Salman Ijazi, an oil and gas professional in the Dallas/Fort Worth area, posed the question: “When you think of a cloud based SCADA/monitoring system, what issues come to your mind?”

This topic elicited a wealth of comments from a wide spectrum of engineers, system integrators, managers, and other leaders of thought.  Brian Chapman, SCADA Software Engineer at Schneider Electric, was the first and most frequent responder.  His comments ranged from comparisons of the human brain and SCADA systems to detailed analysis of the layered design in a water chlorination system.  Overall, he doesn’t see many possibilities for SCADA on the cloud.

Several respondents agreed with Brian, and some were quite adamant.  Zane Spencer, Automation & Controls Project Manager at MPE Engineering said, “The thought of a cloud-based SCADA system makes me shudder,”  while Earl Vella, Senior Systems Developer at Water Services Corp. in Malta said simply, “SCADA and cloud must never meet.”

Jake Brodsky, Control Systems Engineer at WSSC, emphasized the importance of not putting an entire SCADA system on the cloud, pointing to the primary concerns of security, potential latencies in data throughput, and reliability.  He questions the notion of taking “the same old software you’ve been using,” putting it on a cloud platform, and then expecting that you will magically get better service.

In response, others point out that although we should not consider building a SCADA system on a cloud server, cloud computing may still offer significant value to traditional and future SCADA systems.

Jake Hawkes, a platform manager in Calgary, suggested that the current practice of outsourcing SCADA systems might lead to SCADA in the cloud as a next logical step.  Ruslan Fatkhullin, CEO of Tesla in Russia, mentioned the advantages of OPC UA for connecting sensors and field systems to cloud servers.  J-D Bamford, CRM/SCADA Security Engineer at Cimation in Denver, pointed out that the cloud can be useful for rapid development of systems serving distributed facilities, while at the same time, traditional HMI developers are already offering web-based solutions for mobile phone and desktop dashboards.

An important distinction was touched on by John Kontolefa, Professional Engineer at NYPA, and seconded by others: not to confuse SCADA systems with DCS (Distributed Control Systems).  There seems to be a consensus among most group members that DCS functionality like automatic, real-time, closed-loop control of critical processes does not belong on the cloud, whereas open-loop SCADA functionality such as simple monitoring and inputs of non-real-time data like adding recipes or fine tuning a process might do fine on a cloud-based system.

Summing up, Salman Ijazi, who posed the initial question, pointed out the value in the oil and gas industries of performing some SCADA functions in the cloud.  The geographical and other constraints that they operate under bring out certain advantages of using the cloud: ease of deployment, maintenance, and expansion, coupled with low infrastructure requirements.  He mentioned applications such as pipeline monitoring, alarm management, hydrocarbon reporting, and well pad monitoring, and proposed that even high security environments such as banking, e-commerce and health systems management may benefit from SCADA functionality in the cloud.

For me, personally, the most intriguing possibility was mentioned subsequently by Manny Romero, Manager of Madison Technologies Industrial IT&C Division in Sydney.  He suggested that the cloud could provide “SCADA to the masses.”  What does that mean?  We’ll talk about it next week.