Tuesday, January 26, 2021

Containers: Virtualization throughout the years

Posted by Kyle Carreau

As the worlds of IT and OT continue to blend, the factory floor is seeing more IT-focused technologies: first the introduction of Ethernet-based protocols to the shop floor, then managed switches, routers, and network management systems. At this point, most are familiar with cloud/edge technologies and platforms like Microsoft Azure and AWS, and the immense value they bring to achieving the highly sought-after “Industry 4.0” factory. However, some companies will find themselves in pilot purgatory: their proof of concept showed tremendous business value, but it failed to solve a key IIoT challenge – scalability. This is where containerization and container orchestration, popular IT technologies, can be applied in an OT environment to solve the scalability challenge.

Back in the early days, many SCADA and other OT software deployments ran on physical servers. While this solution is still viable, it has downsides in terms of resource allocation, ease of deployment, management, and overall cost. Fast forward a bit, and virtualization software like VMware was introduced into the OT space to mitigate some of the downsides of having only physical servers. With virtualization technology, multiple virtual machines (VMs) can run on a single physical server. Each of these VMs includes its own operating system (independent of the host OS), its own binaries and libraries, and its own installed applications. This allowed the OT space to create architectures that were more distributed and robust on a single piece of hardware. This solution is pretty much the standard in OT environments today, but there are limitations when we start to move towards the edge.

Virtualized environments can be extremely “heavy” for some simple deployments. For example, if you wanted to deploy a lightweight application (let’s say Thingworx Kepware Server, or TKS) on a single VM, that VM would contain a full instance of a Windows operating system, along with a handful of services and processes that are not critical for TKS to run. Essentially, you now have wasted resources taking up a large amount of disk space, memory, and CPU. In a cloud/edge architecture, your edge hardware is typically going to be more “lightweight” in terms of CPU and RAM – mostly to reduce the overall cost of a system deployed at scale. So now the problem presents itself: we need to be able to deploy our software to these edge devices without sacrificing functionality, but deploying multiple VMs is going to be too “heavy” for the edge device. This is where containers step in.

Think of a container as a single application running on an OS – sharing the host’s kernel and utilizing only the resources it needs from that OS. At the very highest level, it is a VM that is “trimmed of the fat”. What this means is that we can deploy more applications on edge hardware, because each containerized application does not depend on a full virtualized operating system. Cloud providers like Microsoft Azure and AWS offer services built on the open-source Kubernetes project to help their customers build, deploy, and manage containers, as well as conduct “container orchestration”.
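
To make the idea concrete, below is a minimal sketch using the Docker SDK for Python that starts a single containerized application with explicit CPU and memory limits. The image name and the limit values are illustrative assumptions rather than recommendations for any particular product; the point is simply that the container shares the host’s kernel and is only granted the resources it actually needs.

# Minimal sketch using the Docker SDK for Python (the "docker" package).
# Assumes a Docker engine is running locally; the image and limits below
# are illustrative placeholders, not product recommendations.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run one containerized application. Unlike a full VM, it shares the host
# kernel, so we simply cap the resources it is allowed to consume.
container = client.containers.run(
    "nginx:alpine",          # small public image, standing in for a lightweight app
    name="demo-app",
    detach=True,             # run in the background
    mem_limit="128m",        # hard memory ceiling for this container
    nano_cpus=500_000_000,   # roughly half a CPU core
    ports={"80/tcp": 8080},  # publish the app on host port 8080
)

print(container.short_id, container.status)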

The latest release of Thingworx Kepware Edge (TKE), v1.2, is available as a Docker container. While TKE in a container will help companies solve their connectivity and scalability challenges, we are a small part of the overall infrastructure. This technology is new to the OT world, and we would encourage you and your customers to research technologies like Docker and Kubernetes and explore different ways to architect an overall IoT solution. At the end of the day, Kepware’s focus and expertise will always be in industrial connectivity, but we will continue to adapt to the ever-evolving IoT space.
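
For readers who want to see what “container orchestration” looks like in practice, here is a hedged sketch using the official Kubernetes Python client to declare a small Deployment that runs several replicas of a containerized connectivity application. The image name, port, and resource limits are hypothetical placeholders (they are not the published Thingworx Kepware Edge artifacts); the takeaway is that the orchestrator, rather than the integrator, handles scheduling and scaling identical container instances across a cluster – which is exactly the scalability challenge described above.

# Hedged sketch using the official Kubernetes Python client ("kubernetes" package).
# The image name, port, and resource limits are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig (e.g. a test cluster)

app_container = client.V1Container(
    name="edge-connectivity",
    image="registry.example.com/edge-connectivity:1.2",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=4840)],  # placeholder port
    resources=client.V1ResourceRequirements(
        limits={"memory": "256Mi", "cpu": "500m"},  # keep the footprint small
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="edge-connectivity"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three identical instances running
        selector=client.V1LabelSelector(match_labels={"app": "edge-connectivity"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-connectivity"}),
            spec=client.V1PodSpec(containers=[app_container]),
        ),
    ),
)

# Submit the Deployment; Kubernetes schedules the replicas across the cluster.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)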