
Down-to-Earth Cloud: Fog Computing on Edge Devices

When a cloud comes to earth, hitting a mountain or rolling in at ground level, we call it fog. In the same way, cloud computing conducted at the local level is sometimes referred to as “fog” computing or “edge” computing. Wikipedia defines edge computing as “pushing the frontier of computing applications, data, and services away from centralized nodes to the logical extremes of a network.” In other words, fog or edge computing brings data processing out of the clouds and down to earth.

In a recent blog post, New in IIoT: Fog Computing Leverages Edge Devices and the Cloud, Al Presher describes how edge devices are being used in commercial and industrial applications to provide computing power at the interface between the real world and the cloud. Putting computing power as close as possible to the point of data collection, detection, or control can mean quicker response times and more efficient, meaningful data collection.

For example, a simple device might send the message “I’m switched on and working” every second. A control system that interacts with the device needs that message the first time, but not every second after that. Perhaps an hourly or daily update would do as a status report; sending the message more frequently just wastes resources and bandwidth. With the thousands or millions of such devices that the IoT promises, we need a way to send only meaningful messages.

This is where edge computing comes in. A program on the device can throttle the messages down to once an hour, once a day, or any other interval. It can read and interpret messages such as “I’m switched off” or “I’m not working properly”, and forward those immediately. For more sophisticated devices, an edge computing solution could send ordinary status messages when things are normal, then open a real-time data flow during any abnormal condition, so that every single data change, no matter how brief, can be collected and recorded.
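
To make this idea concrete, here is a minimal sketch in C of the kind of report-by-exception logic a device-side program might run. The message strings, the interval, and the function name are illustrative assumptions, not any particular product's API:

    #include <string.h>
    #include <time.h>

    #define STATUS_INTERVAL (60 * 60)   /* forward routine status at most hourly */

    static time_t last_status_sent = 0;

    /* Called for every message the device produces. Returns 1 if the
     * message should be forwarded upstream, 0 if it can be dropped. */
    int should_forward(const char *msg)
    {
        time_t now = time(NULL);

        /* Anything other than the routine "all is well" message is an
         * abnormal condition, and is forwarded immediately. */
        if (strcmp(msg, "I'm switched on and working") != 0)
            return 1;

        /* Routine status messages are throttled to one per interval. */
        if (now - last_status_sent >= STATUS_INTERVAL) {
            last_status_sent = now;
            return 1;
        }
        return 0;
    }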

In addition to decreasing data volume, edge computing can also reduce the amount of processing done on the receiving end of the data. For example, unit conversions, linear transformations, and simple analytical functions can be run on the data before it gets sent to the cloud. Spread out over hundreds or thousands of devices, this relatively simple, decentralized processing can translate into significant cost savings.
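
As a sketch of that kind of preprocessing, the C fragment below converts a raw sensor reading to engineering units and reports it only when it changes by more than a deadband. The scaling constants and the deadband are made-up examples:

    #include <math.h>

    #define SCALE     0.125   /* example: raw ADC counts to degrees C */
    #define OFFSET  -40.0
    #define DEADBAND  0.5     /* report only changes larger than this */

    static double last_reported = NAN;

    /* Convert a raw reading and forward it only if it is meaningful. */
    void process_sample(int raw, void (*send)(double value))
    {
        double celsius = raw * SCALE + OFFSET;   /* linear transformation */

        if (isnan(last_reported) || fabs(celsius - last_reported) > DEADBAND) {
            last_reported = celsius;
            send(celsius);   /* only significant changes reach the cloud */
        }
    }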

The Skkynet Embedded Toolkit (ETK) supports edge computing in several ways on the devices where it runs. It has a built-in command set and a scripting language designed specifically for mission-critical industrial applications, with a wide range of functions for interacting with real-time data as it flows through the system. Because it can access each data point in the system, it can support both monitoring and control functionality, as needed.

What will be the impact of fog or edge computing? At this point it is difficult to predict exactly. However, it seems that for industrial systems, edge computing can provide many of the benefits of a SCADA (Supervisory Control And Data Acquisition) system for a much smaller up-front and ongoing investment. By plugging edge devices into an existing data communications infrastructure like SkkyHub, much of the heavy lifting for data monitoring and supervisory control has already been done.

ExxonMobil Seeks Open Automation Solutions

At the most recent ARC Industry Forum in Orlando, ExxonMobil made it clear that they are not satisfied with business as usual when it comes to industrial automation, and that they are looking for something far superior to what is currently on offer. On January 14, 2016, ExxonMobil announced that they had awarded a contract to Lockheed Martin to serve as the systems integrator in the early-stage development of a next-generation open and secure automation system for process industries. Lockheed Martin is tasked with seeking out the architecture and tools needed for an “open, standards-based, secure and interoperable control system” that can be seamlessly integrated with existing facilities, as well as new and future systems. ExxonMobil wants the hardware and software components to be commercially available and able to function in all current DCS markets.

Rather than simply replace their aging systems with the current state of the art, which is expensive, inflexible, and closed, ExxonMobil wants to leverage new, open, IoT, wireless, and cloud technologies to cut costs, enhance security, and reduce development time. As in other, adjacent areas of technology, they want to see step-change improvements, not incremental or bolted-on changes to obsolete architectures.

Originally presented at Industry Day on January 26, 2016

Their vision is of an open automation system, one that is standards-based, secure, and interoperable, and that will:

  1. Promote innovation & value creation
  2. Effortlessly integrate best-in-class components
  3. Afford access to leading-edge capability & performance
  4. Preserve the asset owner’s application software
  5. Significantly lower the cost of future replacement
  6. Employ an adaptive intrinsic security model

This vision reads like a list of the features and benefits of Skkynet’s connectivity solutions:

  1. SkkyHub, DataHub, and the ETK foster innovation and value creation by providing open, standards-based, real-time data connectivity for hardware and software from almost any vendor.
  2. These Skkynet tools allow users to integrate data from virtually any combination of components.
  3. This kind of real-time data integration enables each component in turn to perform at its highest capacity.
  4. Any generation of equipment, from legacy to state-of-the-art, can be integrated.
  5. Connecting modules can be replaced, and the system itself gets continually updated.
  6. Connections from the DataHub or ETK to SkkyHub are secure by design.

We are currently in communication with Lockheed Martin, and bringing these advantages to ExxonMobil’s attention. We share their vision, and offer tested, verified, working solutions.

Fitting In with Industrial IoT

“It all sounds fine on paper, but will it work for me?” That’s a question that engineers and system integrators often ask when the topic of Industrial IoT comes up. There are so many ways it has to fit. Industrial systems are like snowflakes: every one is unique. Each facility, factory, pipeline, or power plant was built for a particular purpose, in a different part of the world, at a specific time in plant automation history, when technology had advanced to a certain level. We see a wide range of machines, tools, sensors, and other equipment used with endless combinations of proprietary and home-grown software and data protocols. Over time, plant modifications and expansions, along with hardware and software upgrades, bring still more variety.

If this diversity isn’t challenge enough, new questions are now popping up about the Industrial IoT itself: How to get started? What service provider to use? What approach or platform is best to take? What are the cost benefits?

Putting all this together, it becomes clear that a good Industrial IoT solution should be a comfortable fit. It should connect to virtually any in-plant system with a minimum of fuss, and provide links to remote systems as well. It should be compatible with multiple data protocols and legacy systems, and yet also integrate seamlessly with future hardware and software. The ideal, as with putting on a new suit, is to ease into the IoT without disrupting anything.

Working towards that goal, here’s what a good system should do:

  • Support diverse data communication protocols: OPC, in both its “Classic” and UA versions, plays an important role in simplifying and unifying industrial data communications. Any Industrial IoT platform should support OPC, along with common industrial fieldbuses like Modbus, Profibus, HART, DeviceNet, and so on. It should also support more specialized standards like IEC 61850, CAN, ZigBee, and BACnet. In addition to these, an Industrial IoT platform should be compatible with non-industrial standards like HTML and XML for web connectivity, ODBC for database connectivity, and DDE for connecting to Excel if needed, and it should be able to connect to custom programs.
  • Connect to embedded devices: The “of Things” part of the Internet of Things refers primarily to embedded devices. Sensors, actuators, and other devices are getting smaller, cheaper, and more versatile every day. They should be able to connect–either directly or via a wired or cellular gateway–to the cloud. This is an area where SCADA can provide a wealth of experience to the Industrial IoT, and in turn benefit significantly from the expanded reach that Internet connectivity can provide.
  • Work with new or legacy equipment and facilities: Since the introduction of the DCS and PLC in the 1970s, digital automation has been growing and evolving. While new technologies are constantly being adopted or adapted, many older systems continue to run. With so much engineering, effort, and capital invested in each project, plant management is often reluctant to make changes to a working system. To be accepted in the “If it ain’t broke, don’t fix it” world, an Industrial IoT system should be able to connect to, but not intrude upon, legacy systems. Of course, for new systems, it should do likewise.
  • Use existing tools, or better: The Industrial IoT doesn’t need to reinvent the wheel. Most industrial automation systems have a solid, working set of tools, which might include DCS and SCADA systems; HMIs; MES, ERP and other kinds of databases; data historians, and more. A compatible Industrial IoT implementation should work as seamlessly as possible with all of these tools, using the appropriate protocols. At the same time, it would do well to offer connections to improved tools, if required or desired.
  • Meet Big Data requirements: Among the new tools, the ability to connect existing or future industrial systems with Big Data is one of the main attractions of the Industrial IoT. A compatible Industrial IoT solution should provide connectivity and the performance necessary to feed whatever Big Data engine may be chosen.
  • Allow for gradual implementation: Automation experts and proponents of the Industrial IoT are quick to point out that there is no need to implement this all at once. They often recommend a gradual, step-by-step implementation process. Start with a small data set, an isolated process or system, and build from there. Bring in users as needed. Once you are comfortable with the tools and techniques, you can build out. Naturally, you’ll need an IoT platform that supports this approach.

How Skkynet Fits

With Skkynet, compatibility for the Industrial IoT comes in three components that work seamlessly together: DataHub®, Embedded Toolkit (ETK), and SkkyHub™.

The Cogent DataHub® connects directly to in-plant systems via OPC, Modbus, ODBC and DDE, and is fully integrated with the Red Lion Data Station Plus, to connect to 300 additional industrial protocols. The DataHub supports data aggregation, server-to-server bridging, database logging, redundancy, and other data integration functionality. It also offers WebView, a flexible, web-based HMI.

The Embedded Toolkit (ETK) is a C library that provides the building blocks for embedded systems to connect and communicate with SkkyHub or the DataHub. It has been compiled to run on gateways from Red Lion, B+B SmartWorx, NetComm, and SysLINK, as well as devices from Renesas, Lantronix, Raspberry Pi, Arduino, ARM, and more.
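
The ETK's actual API is documented by Skkynet; purely to suggest the shape of the programming model described here (connect once, then publish named data points), consider the hypothetical sketch below. Every function in it is an invented stand-in, not the real ETK interface:

    #include <stdio.h>

    /* Hypothetical stand-ins for ETK calls -- invented for illustration,
     * NOT the real ETK API. Real code would include the ETK headers and
     * link against the library instead of these stubs. */
    static int etk_connect(const char *host, int port, const char *device)
    {
        printf("connecting %s to %s:%d\n", device, host, port);
        return 0;
    }

    static void etk_write_double(const char *point, double value)
    {
        printf("point %s = %f\n", point, value);
    }

    int main(void)
    {
        /* Connect the device to a DataHub or SkkyHub endpoint. */
        if (etk_connect("demo.example.com", 443, "my-device") != 0)
            return 1;

        /* Publish a named data point; device-side scripting could filter
         * or transform values like this before they leave the device. */
        etk_write_double("plant/line1/temperature", 72.5);
        return 0;
    }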

These two components can be connected to and integrated with virtually any industrial system. They can be used separately or together, and can serve as the first stage of evolution towards the cloud at any time, by connecting to SkkyHub.

The SkkyHub™ service collects and distributes real-time data over networks, both locally and remotely. Connecting to the DataHub or any ETK-enabled device, SkkyHub provides secure networking of Industrial IoT data between remote locations, and remote monitoring and supervisory control through WebView.

Skkynet’s Industrial IoT software and services are in wide use today. You can find them connecting manufacturing facilities, wind and solar farms, offshore platforms, mines, pipelines, production lines, gauges, pumps, valves, actuators, and sensors. Their unique combination of security, speed, and compatibility with virtually any industrial system makes the DataHub, ETK, and SkkyHub well-fitting components of the Industrial IoT.

Renesas Electronics Expands Renesas Synergy™ Platform for IoT

Renesas Electronics Corporation (TSE: 6723), a premier supplier of advanced semiconductor solutions, today announced the expansion of its Renesas Synergy™ Platform, designed to accelerate time to market, reduce total cost of ownership, and remove many of the obstacles engineers face when designing devices for the Internet of Things (IoT). The expansion includes the launch of the new S124 Group of Synergy Microcontrollers (MCUs), with ultra-low-power operating characteristics and precise analog signal acquisition/generation capabilities ideal for sensor applications. In support of these new MCUs is an updated version of the Synergy Software Package (SSP) and the e² studio Integrated Solution Development Environment (ISDE) tool. The SSP and e² studio tool also incorporate further enhancements that address the entire Synergy Platform, adding new capabilities for networking, industrial automation, power management, and automated configuration to save even more precious time for embedded system developers.

“The Synergy Platform continues to grow in value to both developers and their end-customers,” said Mark Rootz, Marketing Director of Renesas’ Internet of Things BU. “This new S124 Synergy MCU Group is another example of platform growth that brings ARM® Cortex®-M0+ based MCUs to the lower end of the application spectrum while remaining completely scalable and compatible with the companion Cortex®-M4 based Synergy MCU groups above it that we launched last year. Software support for these new S124 MCUs is provided by the expansion of the SSP, enabling customers to quickly and easily migrate between all Synergy MCU groups as their needs change while still being able to re-use existing application code. We continue to evolve all elements of the Synergy Platform and build value, as demonstrated here with new MCUs, new software, plus ever-growing tool and partner support for the platform.”

Overall Synergy Platform expansion continues globally with the addition of five new Verified Software Add-on (VSA) products from Europe and Japan, to be available on the Synergy Gallery in spring 2016. VSA software from third parties is verified by Renesas to be SSP-compatible, so that developers can confidently add specialty functions to their Synergy Platform-based projects. The new global VSA partners address specialized functions in the areas of home and industrial automation, including Echonet, CANopen, and BACnet, plus secure communications and cloud services. In addition, three US-based VSA products are now fully available on the Synergy Gallery: Cypherbridge Systems’ SDKPac for Synergy secure IoT and web connectivity including SSL/TLS, Icon Labs for security services including firewall and secure boot, and Skkynet for secure real-time data connectivity, on premise or cloud-based (SaaS).

Tunnelling OPC DA – Know Your Options

Since OPC was introduced over fifteen years ago, it has seen a steady rise in popularity within the process control industry. Using OPC, automation professionals can now select from a wide range of client applications to connect to their PLCs and hardware devices. The freedom to choose the most suitable OPC client application for the job has created an interest in drawing data from more places in the plant. Industry-wide, we are seeing a growing need to connect OPC clients on one computer to OPC servers on other, networked computers. As OPC has grown, so has the need to network it.

The most widely used OPC protocol for real-time data access is OPC DA. However, anyone who has attempted to network OPC DA knows that it is challenging at best. The networking protocol for OPC DA is DCOM, which was not designed for real-time data transfer. DCOM is difficult to configure, responds poorly to network breaks, and has serious security flaws. Using DCOM between different LANs, such as connecting between manufacturing and corporate LANs, is sometimes impossible to configure. Using OPC DA over DCOM also requires more network traffic than some networks can handle, because of bandwidth limitations or the high traffic already on the system. To overcome these limitations, there are various tunnelling solutions on the market. This article will look at how tunnelling OPC DA solves the issues associated with DCOM, and show you what to look for in a tunnelling product.

Eliminating DCOM

The goal of tunnelling OPC DA is to eliminate DCOM, which is commonly done by replacing the DCOM networking protocol with TCP. Instead of connecting the OPC client to a networked OPC server, the client program connects to a local tunnelling application, which acts as a local OPC server. The tunnelling application accepts requests from the OPC client and converts them to TCP messages, which are then sent across the network to a companion tunnelling application on the OPC server computer. There the request is converted back to OPC DA and is sent to the OPC server application for processing. Any response from the server is sent back across the tunnel to the OPC client application in the same manner.
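
The wire format varies from vendor to vendor, but the essential step is framing each converted request as a message on a TCP stream. Here is a minimal sketch in C of one plausible framing, using a length prefix; the format is an assumption for illustration, not any particular vendor's protocol:

    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>

    /* Send one tunnel message as a 4-byte network-order length prefix
     * followed by the payload. */
    int send_tunnel_msg(int sock, const void *payload, uint32_t len)
    {
        uint32_t net_len = htonl(len);

        if (write(sock, &net_len, sizeof(net_len)) != (ssize_t)sizeof(net_len))
            return -1;
        if (write(sock, payload, len) != (ssize_t)len)
            return -1;
        return 0;
    }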

OPC Tunnelling

This is how most tunnellers for OPC DA work, in principle. A closer look will show us that although all of them eliminate DCOM, there are some fundamentally different approaches to tunnelling architecture that lead to distinctly different results in practice. As you review tunnelling solutions, here are four things to look out for:

  1. Does the tunnelling product extend OPC transactions across the network, or does it keep all OPC transactions local?
  2. What happens to the OPC client and server during a network break?
  3. How does the tunnel support multiple client-server connections?
  4. Does the tunnelling product provide security, including data encryption, user authentication, and authorization?

1. Extended or Local OPC Transactions?

There are two basic types of tunnelling products on the market today, each with a different approach to the problem. The first approach extends the OPC transaction across the network link, while the second approach keeps all OPC transactions local to the sending or receiving computer.

OPC Tunnelling Comparison

Extending the OPC transaction across the network means that a typical OPC client request is passed across the network to the OPC server, and the server’s response is then passed all the way back to the client. Unfortunately, this approach preserves the synchronous nature of DCOM over the link, with all of its negative effects. It exposes every OPC client-server transaction to network issues like timeouts, delays, and blocking behavior. Link monitoring can reduce these effects, but it doesn’t eliminate them, as we shall see below.

On the other hand, the local OPC transaction approach limits the client and server OPC transactions to their respective local machines. For example, when the tunnelling program receives an OPC client request, it responds immediately to the OPC client with data from a locally cached copy. At the other end, the same thing happens. The tunnelling program’s job is then to maintain the two copies of the data (client side and server side) in constant synchronization. This can be done very efficiently without interfering with the function of the client and server. The result is that the data crosses the network as little as possible, and both OPC server and OPC client are protected from all network irregularities.
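
A rough sketch in C of the cached-copy idea: client reads are answered from a local table, and updates arriving over the tunnel refresh that table. The data structures here are illustrative assumptions, not any product's internals:

    #include <string.h>

    #define MAX_POINTS 256

    struct point {
        char   name[64];
        double value;
        int    quality;    /* e.g. GOOD or NOT_CONNECTED */
    };

    static struct point cache[MAX_POINTS];
    static int npoints;

    /* An OPC client read is answered immediately from the local cache;
     * no network round trip is involved. */
    const struct point *read_point(const char *name)
    {
        for (int i = 0; i < npoints; i++)
            if (strcmp(cache[i].name, name) == 0)
                return &cache[i];
        return NULL;
    }

    /* An update arriving over the tunnel refreshes the cache, keeping
     * the two copies of the data set synchronized. */
    void on_tunnel_update(const char *name, double value, int quality)
    {
        for (int i = 0; i < npoints; i++) {
            if (strcmp(cache[i].name, name) == 0) {
                cache[i].value = value;
                cache[i].quality = quality;
                return;
            }
        }
        if (npoints < MAX_POINTS) {
            struct point *p = &cache[npoints++];
            strncpy(p->name, name, sizeof(p->name) - 1);
            p->value = value;
            p->quality = quality;
        }
    }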

2. Handling Network Issues

There is a huge variety of network speeds and capabilities, ranging from robust LANs, to WANs running over T1 lines on multi-node internets, and on down to low-throughput satellite connections. The best tunnelling products give the best possible performance over any given kind of network.

To protect against network irregularities and breaks, any good tunnelling application will offer some kind of link monitoring. Typically this is done with a “heartbeat” message, where the two tunnel programs send messages to one another at a timed interval, for example every few seconds. If a reply is not received within a user-specified time, the tunnelling application assumes that the network is down. The OPC client and server may then be informed that the network is broken.

In practice this sounds simple. The problem arises when you have to specify the timeout used to identify a network disconnection. If you set the timeout too long, the client may block for a long time waiting for a reply, only to discover that the network is down. On the other hand, setting the timeout too short will give you false indications of a network failure if for some reason the connection latency exceeds your expectations. The slower the network, the greater the timeout must be.
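
The sketch below shows the basic mechanism in C on a POSIX socket: send a heartbeat, then wait up to a configurable timeout for reply traffic. The one-byte message format is an illustrative assumption; the timeout parameter is exactly the value that has to be balanced against network latency:

    #include <poll.h>
    #include <unistd.h>

    /* Send a heartbeat and wait up to timeout_ms for reply traffic.
     * Returns 1 if the link appears alive, 0 if the timeout elapsed. */
    int link_alive(int sock, int timeout_ms)
    {
        const char beat = 0;
        struct pollfd pfd = { .fd = sock, .events = POLLIN };

        if (write(sock, &beat, 1) != 1)
            return 0;                  /* send failed: link is down */

        /* Too short a timeout gives false alarms on slow networks;
         * too long a timeout delays failure detection. */
        return poll(&pfd, 1, timeout_ms) > 0;
    }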

However, this balancing act is only necessary if the tunnelling product uses the extended OPC approach. A product that offers local OPC transactions still provides link monitoring, but the OPC client and server are decoupled from the network failure detection. Consequently, the timeout can be set appropriately for the network characteristics—from a few hundred milliseconds for highly robust networks to many seconds, even minutes for extremely slow networks—without the risk of blocking the OPC client or server.

How the tunnelling product informs your OPC client of the network break also varies according to the tunnel product design. Products that extend the OPC transactions generally do one of two things:

  1. Synthesize an OPC server shutdown. The OPC client receives a shutdown message that appears to be coming from the server. Unaware of the network failure, the client instead operates under the assumption that the OPC server itself has stopped functioning.
  2. Tell the client nothing, and generate a COM failure the next time the client initiates a transaction. This has two drawbacks. First, the client must be able to deal with COM failures, which are among the most likely events to crash a client. Worse yet, since OPC clients often operate in a “wait” state without initiating transactions, the client may think the last data values are valid and up-to-date, never realizing that there is any problem.

Products that provide local OPC transactions offer a third option:

  3. Maintain the COM connection throughout the network failure, and alter the quality of the data items to “Not Connected” or something similar. This approach keeps the OPC connection open in a simple and robust way, and the client doesn’t have to handle COM disconnects.

3. Support for Multiple Connections

Every tunnelling connection has an associated cost in network load. Tunnelling products that extend OPC transactions across the network may allow many clients to connect through the same tunnel, but each client sends and receives data independently. For each connected client the network bandwidth usage increases. Tunnelling products that satisfy OPC transactions locally can handle any number of clients and servers on either end of the tunnel, and the data flows across the network only once. Consequently, adding clients to the system will not add load to the network. In a resource-constrained system, this can be a crucial factor in the success of the control application.

If you are considering multiple tunnelling connections, be sure to test for cross-coupling between clients. Does a time-intensive request from a slow client block other requests from being handled? Some tunnelling applications serialize access to the OPC server when multiple clients are connected, handling the requests one by one. This may simplify the tunnel vendor’s code, but it can produce unacceptable application behavior. If one client makes a time-consuming request via the tunnel, then other clients must line up and wait until that request completes before their own requests will be serviced. All clients block for the duration of the longest request by any client, reducing system performance and increasing latency dramatically.

On the other hand, if the tunnel satisfies OPC requests locally, this situation simply does not happen. The OPC transactions do not cross the network, so they are not subject to network effects nor to serialization across the tunnel.

4. What About Security?

Whenever you get involved in networking plant data, security is a key concern. In fact, security is a primary reason for choosing tunnelling over DCOM. DCOM was never intended for use over a wide area network, so its security model is primarily designed to be easily configured only on a centrally administered LAN. Even making DCOM security work between two different segments of the same LAN can be extremely difficult. One approach to DCOM security is to firewall the whole system, so that nothing gets in or out, then relax the security settings on the computers inside the firewall. This is perhaps the best solution on a trusted network, but it is not always an option. Sometimes you have to transmit data out through the firewall to send your data across a WAN or even the Internet. In those cases, you are going to want a secure connection. Relaxed DCOM settings are simply not acceptable.

Most experts agree that there are three aspects to network security:

  • Data encryption is necessary to prevent anyone who is sniffing around on the network from reading your raw data.
  • User authentication validates each connecting user, based on their user name and password, or some other shared secret such as a private/public key pair.
  • Authorization establishes permissions for each of those authenticated users, and gives access to the appropriate functionality.

There are several options open to tunnelling vendors to provide these three types of security. Some choose to develop their own security solution from the ground up. Others use standard products or protocols that many users are familiar with. These include:

SSL (Secure Sockets Layer) – Provides data encryption only, but is very convenient for the user. Typically, you just check a box in the product to activate SSL data encryption. The tunnelling product must provide user authentication and authorization separately.

VPN (Virtual Private Network) – Provides both encryption and authentication. A VPN does not come as part of the product, per se, but is implemented by the operating system. The tunnelling product then runs over the VPN, but still needs to handle authorization itself.

SSH (Secure Shell) Tunnelling – Provides encryption and authentication to a TCP connection. This protocol is more widely used in UNIX and Linux applications, but can be effective in MS-Windows. SSH tunnelling can be thought of as a kind of point-to-point VPN.
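
As a rough illustration of the SSL/TLS option, here is a minimal OpenSSL sketch in C that wraps an already-connected TCP socket in an encrypted, server-authenticated session. The certificate path is a placeholder, and error handling is simplified:

    #include <openssl/ssl.h>

    /* Wrap an already-connected TCP socket in TLS. Returns the SSL
     * handle on success, NULL on failure. "ca-cert.pem" is a
     * placeholder path to the trusted CA certificate. */
    SSL *tunnel_tls_connect(int sock)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL *ssl = NULL;

        if (!ctx)
            return NULL;

        /* Verify the server's certificate, so the tunnel gets server
         * authentication as well as encryption. */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

        if (SSL_CTX_load_verify_locations(ctx, "ca-cert.pem", NULL) == 1
            && (ssl = SSL_new(ctx)) != NULL) {
            SSL_set_fd(ssl, sock);
            if (SSL_connect(ssl) != 1) {    /* TLS handshake */
                SSL_free(ssl);
                ssl = NULL;
            }
        }
        if (!ssl)
            SSL_CTX_free(ctx);
        /* On success the caller uses SSL_read/SSL_write, and frees the
         * SSL handle and context when finished. */
        return ssl;
    }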

Since none of these standard protocols covers all three areas, you should ensure that the tunnelling product you choose fills in the missing pieces. For example, don’t overlook authorization. The last thing you need is for some enterprising young apprentice or intern to inadvertently link in to your live production system and start tweaking data items.

How Can You Know? Test!

The concept of tunnelling OPC DA is still new to many of us. Vendors of tunnelling products for OPC DA spend a good deal of time and energy just getting the basic point across: eliminate the hassles of DCOM by using TCP across the network. Less attention has been paid to the products themselves and their design. As we have seen, though, these details can make all the difference between a robust, secure connection and something significantly less.

How can you know what you are getting? Gather as much information as you can from the vendor, and then test the system. Download and install a few likely products. (Most offer a time-limited demo.) As much as possible, replicate your intended production system. Put a heavy load on it. Pull out a network cable and see what happens. Connect multiple clients, if that’s what you plan to do. Configure the security. Also consider other factors such as ease of use, OPC compliance, and how the software works with other OPC-related tasks you need to do.

If you are fed up with DCOM, tunnelling OPC DA provides a very good alternative. It is a handy option that any engineer or system integrator should be aware of. At the very least, you should certainly find it an improvement over configuring DCOM. And with the proper tools and approach, you can also make it as robust and secure as your network will possibly allow.

Download White Paper (PDF)

Skkynet to Demo Complete Industrial IoT Solution at ARC Industry Forum in Orlando

Secure, fully integrated, end-to-end solution takes the Industrial Internet of Things far beyond the hype stage.

Mississauga, Ontario, January 19, 2016 – Skkynet Cloud Systems, Inc. (“Skkynet” or “the Company”) (OTCQB:SKKY), a global leader in real-time cloud information systems, will demonstrate its end-to-end Industrial IoT solution—including the SkkyHub™ service, DataHub® industrial middleware, and Embedded Toolkit—at the Nineteenth Annual ARC Industry Forum, “Industry in Transition: Navigating the New Age of Innovation,” in Orlando, Florida, February 8-11, 2016, hosted by the ARC Advisory Group.

According to ARC Industry Forum organizers, concepts such as Industrial Internet of Things (IIoT), Smart Manufacturing, Industrie 4.0, Digitization, and Connected Enterprise are “clearly moving past the hype stage to the point where real solutions are emerging backed by strong associated business cases.”

“Skkynet will demonstrate a secure, hands-on, end-to-end Industrial IoT solution that works right out of the box,” said Paul Thomas, President of Skkynet. “Now anyone can safely send data from their remote device to our SkkyHub service, and view their data in a web browser. Or they can securely connect an industrial system via our Cogent DataHub, and do remote monitoring and supervisory control from anywhere in the world.”

Skkynet’s SkkyHub service allows industrial and embedded systems to securely network live data in real time from any location. It enables bidirectional supervisory control, integration and sharing of data with multiple users, and real-time access to selected data sets in a web browser. The service is capable of handling over 50,000 data changes per second per client, at speeds of just microseconds over Internet latency. Secure by design, it requires no VPN, no open firewall ports, no special programming, and no additional hardware.

With Skkynet’s Embedded Toolkit, SkkyHub can connect to remote devices through gateways from Red Lion, Advantech B+B SmartWorx, NetComm, or SysLINK, or directly to Renesas Synergy embedded chips. Linked to Skkynet’s DataHub industrial middleware, SkkyHub can securely connect to virtually any industrial system using standard protocols such as OPC, Modbus, TCP, and ODBC.

“While some companies are just now waking up to the potential of the Industrial IoT, Skkynet and its subsidiaries have been active in the field of secure, real-time industrial data communications for over a decade,” said Mr. Thomas. “Our products and services are working in hundreds of mission-critical systems worldwide.”

About Skkynet Cloud Systems, Inc.

Skkynet Cloud Systems, Inc. (OTCQB:SKKY) is a global leader in real-time cloud information systems. The Skkynet Connected Systems platform includes the award-winning SkkyHub™ service, DataHub®, WebView™, and embedded toolkit software. The platform enables real-time data connectivity for industrial, embedded, and financial systems, with no programming required. Skkynet’s platform is uniquely positioned for the “Internet of Things” and “Industry 4.0” because unlike the traditional approach for networked systems, SkkyHub is secure-by-design. Customers include Microsoft, Siemens, Metso, ABB, Honeywell, IBM, GE, Statoil, Goodyear, BASF, Cadbury Chocolate, and the Bank of Canada. For more information, see http://skkynet.com.

Safe Harbor:

This news release contains “forward-looking statements” as that term is defined in the United States Securities Act of 1933, as amended and the Securities Exchange Act of 1934, as amended. Statements in this press release that are not purely historical are forward-looking statements, including beliefs, plans, expectations or intentions regarding the future, and results of new business opportunities. Actual results could differ from those projected in any forward-looking statements due to numerous factors, such as the inherent uncertainties associated with new business opportunities and development stage companies. We assume no obligation to update the forward-looking statements. Although we believe that any beliefs, plans, expectations and intentions contained in this press release are reasonable, there can be no assurance that they will prove to be accurate. Investors should refer to the risk factors disclosure outlined in our annual report on Form 10-K for the most recent fiscal year, our quarterly reports on Form 10-Q and other periodic reports filed from time-to-time with the Securities and Exchange Commission.