Is Your Country Cloud-Ready?

Just as the clouds in the sky have no geographic limits and glide over all borders, we might hope that cloud computing would also be an international phenomenon. At the very least, as the various countries around the world go increasingly digital, cloud computing and real-time data interconnectivity should begin to take on a greater significance worldwide. The question then comes to mind: Which countries are best prepared for cloud computing?

A few weeks ago the BSA Global Cloud Scorecard was released, the first report of its kind. The BSA (Business Software Alliance) positions itself as an advocate for the software industry, and its membership is made up of many leading firms such as Microsoft, Apple, Oracle, Intel, Siemens, Sybase, and Dell. The Global Cloud Scorecard is an attempt to rate the top 24 ICT (Information and Communication Technology) countries in the world in terms of their readiness for cloud computing.

The results are interesting, indeed surprising in some ways. Although we would expect the more “developed” countries to be more advanced in their ability to support cloud computing, “troubling obstacles emerge when you examine the lack of alignment in the legal and regulatory environments in many of those advanced countries,” according to the report. At the same time, the strong desire for ICT in advancing countries like China, India, and Brazil doesn’t necessarily make them ideal environments for cloud computing either. Each country has its own legal dynamic that plays out in unique ways.

The 24 countries were evaluated in three broad areas:

1. The legal environment that ensures privacy and security, defines and restricts cybercrime, and upholds the rights of intellectual property.

2. Policies and support for international standards, e-commerce, and free trade.

3. ICT readiness of the general infrastructure, and policies for broadband Internet support.

The printed report provides the detailed scorecard for each country, by category, as well as some graphs for making quick comparisons. The website also features a page where you can read a summary of the situation, country by country.

Some of the trends that caught my eye included:

Japan is at the top of the chart, as the country is active in cybercrime treaties, IP laws, and international standards. It also has high broadband penetration, and plans to provide access to 100% of households by 2015.

Most European countries scored reasonably well. Germany is near the top, but may drop in the standings if it begins interpreting laws to restrict the flow of data across borders.

The USA is a leader in cybercrime laws, with privacy protection that is good at the individual level but inconsistent at the state level. The country has high Internet use, but broadband coverage is uneven.

China, India, Brazil, and Thailand all exhibit a strong and growing interest in ICT, but show significant gaps in privacy protection and cybercrime legislation.

Although there may be a few setbacks, my guess is that all of the countries in the report will make substantial improvements in their scores over the next few years, and new countries may be added. We look forward to seeing next year’s report.

Smart Computing in Real Time

We’re hearing plenty of talk these days about smart phones, smart homes, and smart cities.  “Smart” in this sense means adding computing power to our phones, houses, or public facilities, and connecting them to a network or the Internet.  So in that context, what could “smart computing” possibly mean?  How do computers get smart?  And what does smart computing have to do with real-time cloud computing, if anything?

A few years ago Andrew H. Bartels wrote a white paper for Forrester Research titled Smart Computing Drives The New Era of IT Growth.  In this paper Bartels defines what he means by “smart computing” as “a new generation of integrated hardware, software, and network technologies that provide IT systems with real-time awareness of the real world and advanced analytics to help people make more intelligent decisions about alternatives and actions that will optimize business processes and business balance sheet results.”

Can we simplify that a bit?  How about saying that just as something in the real world gets “smart” by connecting it to a computer, computers get “smart” by connecting them to the real world, which ultimately helps us to make better decisions.

As we would expect in a white paper from Forrester Research, there are some well-thought-out projections on where this trend might take us.  It states that smart computing is the next big wave, a fourth wave coming after mainframe, personal, and networked computing.  Does that sound familiar?  We’ve heard people saying pretty much the same thing about cloud computing.  This should not be surprising, since Bartels identifies cloud computing as “one of the underpinnings of smart computing.”

What strikes me is how much benefit smart computing can gain from real-time cloud capabilities.  Consider this list of the Five A’s of Smart Computing that Bartels suggests:

Awareness means connectivity to the world, pretty much as we’ve seen in the Internet of Things – sensors, embedded chips, video, and so on.  Bartels says: “Unified communications technologies such as third-generation (3G) wireless networks will transport this data from these client devices back to central servers for analysis.”  In many scenarios, the closer to real time that the data transport takes place, the more useful the information will be.

Analysis is done using standard business intelligence tools, and Bartels points out the value of feeding real-time data into these tools: “Businesses and governments have already been using these analytical tools … But now, they will be deployed against the real-time data being transmitted from the new awareness devices.”

Alternatives refers to the decision-making process: evaluating alternatives and making decisions.  Bartels foresees a need for a significant increase in data transfer rates to keep pace with the real world in real time.  “The basic function of rules engines and workflow will stay constant — seismic leaps will be necessary in the data flow and analytical inputs in a world of vastly expanded real-time awareness.”

Actions are based on the results of analysis, either automatic or with human intervention.  In either case, Bartels suggests: “These actions will be executed through integrated links to the appropriate process applications.”  Real-time cloud systems can provide two-way data communication to support control functionality when required.

Auditability is a feedback system to ensure that the action has taken place, complies with legal regulations and company policies, and also provides some way to evaluate for improvement.  “Using data on activity at each stage, record what happened and analyze for purposes of compliance and improvement.”  A real-time cloud system should be readily able to support that capability.

To sum up, new technologies are necessary to support smart computing.  These include the ability to capture data from the real world and send it in real time for high-speed analysis and feedback.  This is what real-time cloud computing is all about.
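
To make the Five A’s a little more concrete, here is a minimal Python sketch of how they might fit together in a real-time pipeline.  Everything in it is hypothetical and simplified: the sensor is simulated, the analysis is a single threshold rule, and the action and audit trail are just a print statement and a list.  It is meant only to show the shape of the loop, not any particular product or API.

    import random
    import time
    from datetime import datetime, timezone

    AUDIT_LOG = []  # a real system would use a durable, queryable store

    def read_sensor():
        """Awareness: a stand-in for a live temperature feed from the field."""
        return round(random.uniform(60.0, 110.0), 1)

    def analyze(reading, limit=100.0):
        """Analysis: apply a simple rule to the incoming real-time value."""
        return {"reading": reading, "over_limit": reading > limit}

    def decide(result):
        """Alternatives: choose among possible responses to the analysis."""
        return "open_cooling_valve" if result["over_limit"] else "no_action"

    def act(decision):
        """Actions: here we only print; a real system would call a control interface."""
        print("executing:", decision)

    def audit(reading, decision):
        """Auditability: record what happened and when, for later review."""
        AUDIT_LOG.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "reading": reading,
            "decision": decision,
        })

    for _ in range(5):          # a short demo loop standing in for a continuous feed
        value = read_sensor()
        choice = decide(analyze(value))
        act(choice)
        audit(value, choice)
        time.sleep(0.1)         # pretend a new reading arrives every 100 ms

    print(len(AUDIT_LOG), "events recorded for compliance review")

The closer to real time each stage of that loop runs, the more it behaves like the smart computing Bartels describes.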

Sidestepping Delays

There is a famous game in the business world called the Beer Game.  Developed at MIT’s Sloan School of Management, this game gives a taste of reality to novice and experienced managers alike.  In addition to lessons in management, the Beer Game illustrates an interesting similarity between business systems and industrial processes, and suggests to me how real-time data cloud services may be able to help business sidestep costly delays.

In the Beer Game, players representing retailers, wholesalers, distributors and producers of beer are responsible for keeping up with customer orders.  Everything goes pretty smoothly until there is a sudden increase in demand.  Due to a built-in request-and-response time delay at each level of supply, it takes a while for the increased orders to reach the factory, and still longer for the new supplies of beer to reach the retailers.

The delay in supply shipments causes a temporary shortage for retailers, so they keep sending in large orders for beer.  Eventually, when truckloads of beer finally start to arrive, their supplies overshoot demand, and the retailers now have to cut orders dramatically.  But the beer keeps coming.  Oscillations between supply and demand ensue, creating customer dissatisfaction, wasted resources, and loss of profit.  Hard feelings often arise between the game players at the various levels in the supply chain, as each blames the others for the losses.

The problem is, most players simply keep ordering beer as long as their customers or downstream distributors keep clamoring for it, not realizing the mistake until it is too late.  “If there were no time delays, this strategy would work well,” said MIT Professor John D. Sterman, who has run the game for many years.  So, in a way, the culprit here is the time lag.  Let’s see how a real-time approach might change the picture.
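
For readers who like to tinker, here is a small Python sketch of that dynamic.  It is not the MIT game itself, just a single-tier caricature with made-up numbers: demand doubles in week 5, shipments arrive four weeks after they are ordered, and the player naively orders enough to cover demand plus any shortfall while ignoring what is already on order.  Run it and the familiar pattern appears: backlog, then a glut.

    from collections import deque

    DELAY_WEEKS = 4                       # orders take four weeks to arrive
    inventory = 8                         # starting stock, in cases of beer
    pipeline = deque([4] * DELAY_WEEKS)   # orders placed but not yet delivered

    print("week  demand  inventory  new_order")
    for week in range(1, 21):
        demand = 4 if week < 5 else 8     # demand doubles in week 5
        inventory += pipeline.popleft()   # this week's shipment finally arrives
        inventory -= demand               # sell what we can (negative = backlog)
        # Naive policy: cover this week's demand plus any shortfall below a
        # target of 8 cases, ignoring everything already in the pipeline.
        order = max(0, demand + (8 - inventory))
        pipeline.append(order)
        print(f"{week:4d}  {demand:6d}  {inventory:9d}  {order:9d}")

Even in this stripped-down version the oscillation shows up, which is exactly the point Sterman makes about time delays.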

In the real-time world of industrial control, this is a familiar scenario.  If you have, for example, an oven running at a set temperature, and then turn a dial to raise the temperature, it takes a bit of time for the system to respond.  If it is tuned well, the heating mechanism will quickly bring the oven up to the newly set temperature, and maintain that setting.  If not, it may respond slowly, and possibly overshoot and undershoot the setting a few times until it finally stabilizes on the new temperature.

This type of behavior can be plotted on a graph.  Here are a few examples:


In these images, the red line represents the newly set temperature, the green line is the output from the heating mechanism, and the blue line is the actual temperature inside the oven.

Poor Response shows delayed communication and overreaction.  Notice that the heating mechanism output (green line) doesn’t start decreasing until the oven temperature hits the proper setting.  Poor feedback between the actual temperature in the oven and the heating mechanism causes a number of oscillations.

So-so Response is better, but there is still some overreaction.

Quick Response is the best.  A combination of an immediate and strong initial response with a tightly coupled feedback loop between the heating mechanism and the oven temperature means that the new setting is achieved rapidly, with minimum waste.
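
Since the curves are the heart of the argument, here is a tiny Python simulation that reproduces their general shape.  The model and its constants are invented for illustration: each step the heater nudges the temperature toward the setpoint, but the controller only sees a reading that is a few steps old.  With no sensing delay the temperature rises smoothly; with a delay it overshoots and oscillates, much like the Poor Response curve.

    def simulate(k, delay, steps=80, setpoint=200.0, start=20.0):
        """Toy oven: each step the heater pushes the temperature toward the
        setpoint, but the controller reads a value that is `delay` steps old."""
        temps = [start] * (delay + 1)        # pre-fill so stale lookups work
        for _ in range(steps):
            observed = temps[-(delay + 1)]   # the (possibly stale) reading
            temps.append(temps[-1] + k * (setpoint - observed))
        return temps[delay:]

    quick = simulate(k=0.3, delay=0)   # tight feedback: smooth rise, no overshoot
    poor = simulate(k=0.3, delay=4)    # delayed feedback: overshoot and oscillation

    print("peak temperature, quick response:", round(max(quick), 1))
    print("peak temperature, poor response: ", round(max(poor), 1))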

How does this apply to the Beer Game and the business world?  Using real-time cloud technology, it should be possible to connect all the data related to the beer sales, ordering, distribution and production into a single, seamless flow.  Imagine if each player in the game had a window into actual production figures and supply inventories at every level, updated in real time.  The factory could see immediate spikes in demand and retailers could gauge supply levels, while distributors and wholesalers could monitor the flow of orders and shipments up and down the supply chain.

Of course, there will always be time constraints in the actual beer shipments.  But that doesn’t mean we have to settle for frustrations originating in the paper-based systems of the last century.  With a real-time cloud approach, many “inevitable” delays can simply be sidestepped.

Data-Powered Forecasting

The results of the recent presidential election in the USA came as a surprise to some.  Most of the pundits and forecasters on the Republican side had predicted a victory for their candidate, while those on the Democratic side were quite certain theirs would win.  A “Pundarts” graphic at the Slate.com website illustrates the relative success of these two groups in predicting the final outcome.

Right in the bullseye was Nate Silver, blogger for the New York Times, now hailed as the “golden boy of electoral statistics.”  He was spot-on this year, as he was four years ago.  No wonder that sales of his book The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t jumped some 850% the day after the election.

According to Silver, in spite of how important forecasting is to our daily lives, we are remarkably poor at it.  Those who are most successful tend to be more modest, less ideological, and to rely on empirical evidence.  The key is to be able to successfully sift through a sea of noise to detect the signal.  Much of that noise can be internal, coming from the pundits themselves in the form of personality traits, subconscious biases, and pride in being an expert.

While we don’t have much control over the human nature of forecasters, real-time cloud computing opens new possibilities for gathering empirical evidence.  Those currently using real-time systems already understand the value of real-time data.  Connecting to the cloud provides more depth and reach for these systems, as hard data and empirical evidence become more readily available, and up-to-the-second.  The growing availability of data, broad-based and timely, is moving us out of the realm of supposition into higher levels of certainty.

Take Nate Silver’s own success, for example.  He didn’t conjure up an imaginary all-knowing genie, or shake a Magic 8 Ball.  He certainly didn’t try to advance his own “expert” opinion.  He simply looked at the average of a number of polls.  He worked with the hard facts, considered historical precedence, and made reasonable guesses with a stated level of probability.  In addition to these, he was willing and able to adapt to rapidly changing conditions.

If you read his blog entries and other reports, you will see that as the polls changed over time, Silver changed his predictions as quickly as possible.  He was recalculating up until the day of the election, and even during the hours that the returns were coming in he was posting updates to his blog.
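
As a loose illustration of the averaging idea (and only the averaging idea; Silver’s actual model is far more sophisticated), here is a Python sketch that weights a handful of invented polls by sample size and recency, so that newer and larger polls count for more.  Feed it fresh numbers and the estimate updates, which is essentially what recalculating right up to election day amounts to.

    # Hypothetical polls: (candidate_share_pct, sample_size, days_before_election)
    polls = [
        (51.0, 800, 12),
        (48.5, 1200, 9),
        (50.2, 600, 5),
        (49.8, 1500, 2),
    ]

    def weighted_average(poll_list, half_life_days=7.0):
        """Weight each poll by its sample size, discounted by how old it is."""
        num = den = 0.0
        for share, size, age in poll_list:
            weight = size * 0.5 ** (age / half_life_days)
            num += weight * share
            den += weight
        return num / den

    print(f"weighted polling average: {weighted_average(polls):.1f}%")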

All successful forecasting, be it for the weather, the stock market, or business planning, ultimately relies on hard data.  In our ever-accelerating world we are becoming increasingly aware of the need for timeliness of that data.  The more recent the data, the better the forecast.  And in our growing interconnectedness, we are discovering the value of having full access to that data anywhere, any time.

The availability and speed of the incoming data are constantly increasing.  Where will all of this end up?  Will the past, present, and future eventually get compressed into real time, making data spontaneously available everywhere?  Are we ready to consider the possibility of going beyond the guesswork of forecasting, to realize a new reality, the certainty of now?

Not quite yet.  We’ll take up that topic soon.

Cloud Economics 5: Data on Tap

How is cloud computing like buying a cup of coffee?  Joe Weinman uses this clever analogy to explain some of the factors that go into providing faster, and therefore more valuable, cloud services.  I thought it would be fun to see how his coffee-buying model may or may not fit with the economics of real-time cloud computing.

To speed up the process of getting a cup of coffee from your local coffee shop (and data from your cloud service), Weinman suggests several options for the service provider and customer:

  • Optimize the process by streamlining and reducing the number of tasks that the coffee shop staff need to carry out to prepare a cup of coffee.  In cloud computing, this equates to optimizing algorithms and implementing other processing efficiencies on the server.
  • Use more resources, such as hiring more staff behind the counter to make and pour coffee, so that multiple customers can be served simultaneously.  Cloud service providers do something similar when they provide parallel processing for large computing tasks.
  • Reduce latency by opening a coffee shop closer to the customer’s office, or by the customer moving closer to the shop.  We see this playing out in certain situations when cloud customers who require ultra-high-speed performance actually move their physical location to be closer to the data center.
  • Reduce round trips for coffee by picking up a whole trayful of coffees on each trip to the shop.  In a similar way, it is sometimes possible to send multiple requests or receive multiple replies in a single transaction with the cloud, as the rough arithmetic after this list suggests.
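
Here is a back-of-the-envelope calculation of why that last option helps.  The latency and service-time figures are pure assumptions, chosen only to show how quickly separate round trips add up compared with one batched request.

    LATENCY_MS = 80      # assumed one-way network latency
    SERVICE_MS = 5       # assumed time to prepare one item (or one reply)
    ITEMS = 20           # coffees, or data points, that we need

    one_at_a_time = ITEMS * (2 * LATENCY_MS + SERVICE_MS)
    batched = 2 * LATENCY_MS + ITEMS * SERVICE_MS   # one trip, one tray

    print(f"{ITEMS} separate round trips: {one_at_a_time} ms")
    print(f"one batched round trip:   {batched} ms")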

In addition to these four options, Weinman suggests a rather drastic alternative: Eliminate the need for the transaction altogether.  In the analogy, that would mean no longer buying coffee from the shop, and either making it yourself at the office or doing without.  This equates to not using cloud services at all.

What is the real-time approach?  Data on tap.  Instead of making round-trips to the coffee shop every few hours or days, just pipe the coffee directly to the office, and let it flow past your desk, always hot and fresh, ready to be scooped up and savored. Just dip your cup into the stream.

A key conceptual shift takes place when we implement real-time cloud computing.  There is no need for transactions to receive data.  The cycle of request-process-reply gets replaced by an always-on stream of data.  Thus there is minimal delay.
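
In code, the shift looks something like the sketch below.  The class and method names are hypothetical, not any particular product’s API: instead of a loop that requests, waits, and repeats, a client registers a callback once, and the source pushes each new value the moment it is produced.

    import random
    import time

    class DataSource:
        """Stand-in for a real-time source that pushes values as they change."""
        def __init__(self):
            self.subscribers = []

        def subscribe(self, callback):
            self.subscribers.append(callback)    # register once; no further requests

        def run(self, updates=5):
            for _ in range(updates):
                value = round(random.uniform(0, 100), 2)
                for notify in self.subscribers:  # push each new value immediately
                    notify(value)
                time.sleep(0.05)                 # new data every 50 ms in this demo

    source = DataSource()
    source.subscribe(lambda v: print("received update:", v))
    source.run()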

In the physical world this would be considered wasteful.  I can see my grandfather, who lived through the Great Depression, recoiling in horror at the thought of those gallons per minute of undrunk coffee going down the drain somewhere.  But real-time data gets generated fresh all the time, and most of it quickly vaporizes into thin air anyway.  Best to put it into the hands of someone who can use it.  Data on tap means there is actually less waste.

But how, some may ask, can you possibly contain it?  How do you get a grip?  How can you analyze a moving target?  What if a highly valuable factoid escapes my cup and flows off into oblivion?

Different tools and skills are necessary for working with streaming data.  High-speed, in-line analytics that can keep up with the incoming flow will help decision-makers respond to ever-changing conditions.  Super-efficient real-time data historians that capture every event, large or small, will provide quick access to minute details occurring on a millisecond time scale.  Even now, experts are working on advanced methods for mining through the astronomically growing collection of “big data.”
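
One simple example of such in-line analytics is a rolling aggregate that is updated as each event arrives, rather than recomputed from a static database.  The sketch below is deliberately minimal and hypothetical, but it shows the idea: every incoming value refreshes the statistic immediately, so the analysis keeps pace with the stream.

    from collections import deque

    class RollingAverage:
        """Maintain an average over the most recent N events as they arrive,
        updating incrementally instead of rescanning the whole history."""
        def __init__(self, window=100):
            self.values = deque(maxlen=window)
            self.total = 0.0

        def add(self, value):
            if len(self.values) == self.values.maxlen:
                self.total -= self.values[0]    # drop the oldest contribution
            self.values.append(value)
            self.total += value
            return self.total / len(self.values)

    stats = RollingAverage(window=5)
    for reading in [10, 12, 11, 15, 14, 30, 9]:   # a tiny stand-in for a live feed
        print(f"reading={reading:3d}  rolling average={stats.add(reading):.1f}")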

Perhaps more important than the tools, though, is a change of perspective.  To reap the benefits of cloud economics, we need to shift our thinking from a static world to a dynamic one.  Working with a data stream from the cloud offers new opportunities, and challenges our conventional thinking in some interesting ways.  We may continue to buy our coffee by the cup from the local shop, but maybe soon we’ll have our data on tap.

Cloud Economics 4: Does Location Matter?

If you’ve been following the recent blogs, you’ll know the “L” in Joe Weinman’s C L O U D definition stands for location independence.  One of the five distinctive attributes of cloud computing, location independence means that you can access your data anywhere.  Location doesn’t matter in cloud economics.

Or does it?  Like many things in life, there is a trade-off.  Time is related to distance, even in cloud computing.  The farther you are from your data source, the longer it takes for the data to reach you.  And since timeliness has value, a better location should give better value.  So maybe location does matter after all.  The question is, how much?

Let’s put things into perspective by translating distance into time.  The calculated speed of data flowing through a fiber optic cable is about 125 miles per millisecond (0.001 seconds).  In real-world terms, since Chicago is located about 800 miles from New York City, it would take about 6.4 milliseconds for a “Hello world” message to traverse that distance.
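
The arithmetic is simple enough to put in a few lines of Python.  The 125-miles-per-millisecond figure is the same rule of thumb used above, and the distances are rough approximations, so treat the results as best-case propagation delays rather than measured network times.

    MILES_PER_MS = 125.0   # rule-of-thumb speed of data in fiber, as noted above

    def one_way_ms(miles):
        """Best-case one-way propagation delay over fiber for a given distance."""
        return miles / MILES_PER_MS

    for route, miles in [("New York - Chicago", 800), ("New York - Singapore", 9500)]:
        print(f"{route}: {one_way_ms(miles):.1f} ms one way, "
              f"{2 * one_way_ms(miles):.1f} ms round trip")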

As we discussed last week, for certain automated trading platforms that operate in the realm of microseconds (0.000001 seconds), 6.4 milliseconds is an eon of lost time.  These systems can make or lose millions of dollars in the blink of an eye.  For that reason you’ll find the serious players setting up shop right next door to their data center.  The rest of us, on the other hand, can pretty much remain in our seats, even for real-time cloud applications.

Why?  Well, first of all, the majority of industrial applications are already optimized for location.  Most SCADA systems are implemented directly inside a plant, or as close as physically practical to the processes they monitor.  Engineers who configure wide-area distributed systems are well aware of the location/time trade-offs involved, and take them into account in their designs.  Furthermore, they keep their mission-critical data communication self-contained, not exposed to the corporate LAN, much less to potential latencies introduced by passing data through the cloud.

Of course, a properly configured hybrid cloud or cloud-enhanced SCADA can separate the potential latencies of the cloud system from the stringent requirements of the core system.  What results is a separation between the deterministic response of the control system and the good-enough response time of the cloud system, which we have defined in a previous blog as “remote accessibility to data with local-like immediacy.”

Another area where the location question arises is for the Internet of Things.  As we have seen, great value can be derived from connecting devices through the cloud.  These of course can be located just about anywhere, and most of them can send data as quickly as required.  For example, devices like temperature sensors, GPS transmitters, and RFID chips respond to environmental input that is normally several orders of magnitude slower than even a slow Internet connection.  Latencies in the range of even a few hundred milliseconds make little difference to most users of this data.  People don’t react much faster than that, anyway.

As we have already seen, user interactions with a cloud system have a time cushion of about 200 milliseconds (ms), the average human response time.  How much of that gets consumed by the impact of location?  Joe Weinman tells us that the longest possible round trip message, going halfway around the world and back, such as from New York to Singapore and back to New York, takes about 160 ms.  Not bad.  That seems to leave some breathing room.  But Weinman goes on to point out that real-world HTTP response times vary between countries, ranging from just under 200 ms to almost 2 seconds.  And even within a single country, such as the USA, average latencies can reach a whole second for some locations.

However, a properly designed real-time cloud system still has a few important cards to play.  Adhering to our core principles for data rates and latency, we recall that a good real-time system does not require round-trip polling for data updates.  A single subscribe request will tell the data source to publish the data whenever it changes.  With the data being pushed to the cloud, no round trips are necessary.  This elimination of the “response” cycle cuts the time in half.  Furthermore, a data-centric infrastructure removes the intervening HTML, XML, SQL, etc. translations, freeing the raw data to flow in its simplest form across the network.
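
Put another way, using only the numbers already cited, the budget looks roughly like this.  It is an illustration, not a measurement, but it shows why pushing data one way leaves comfortable headroom under the human-response threshold.

    HUMAN_THRESHOLD_MS = 200   # the response-time cushion discussed above
    ROUND_TRIP_MS = 160        # Weinman's longest-case round trip (New York - Singapore)

    push_only = ROUND_TRIP_MS / 2          # data pushed one way; no request leg
    print("round trip (poll):", ROUND_TRIP_MS, "ms")
    print("one way (push):   ", int(push_only), "ms")
    print("headroom vs.", HUMAN_THRESHOLD_MS, "ms threshold:",
          int(HUMAN_THRESHOLD_MS - push_only), "ms")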

What does this do to our Singapore-to-New York scenario?  Does it now approach 80 ms?  It’s quite possible.  Such a system would have to be implemented and tested under real-world conditions, but there is good reason to believe that for many locations with modern infrastructures, data latency can be well under the magic 200 ms threshold.  To the extent that this is true, location really does not matter.