Orchestrating and Deploying Containers

To understand the current and future state of containers, we gathered insights from 33 IT executives who are actively using containers. We asked, “What are the most important elements to orchestrating and deploying containers?”

Here’s what they told us:

Security

  • It’s critical to have control over what ends up in different container environments. Open source is a major resource for development, but if it is corrupted, you introduce malware and vulnerabilities into your application. What makes customers adopt healthy security practices? Start security from the beginning: analyze the deployment files for your containers so they follow security best practices, analyze runtime behavior, and build segmentation policies around it. Pull code and images only from whitelisted repositories (a minimal whitelist-check sketch follows this list). Try to balance the low-hanging fruit against the big security issues, optimize security best practices, run runtime security checks of the microservices environment, and keep end-to-end control of risk and change.
  • When it comes to deployment and orchestration, it is paramount that DevOps teams have control over whatever ends up running in their different environments, given the “continuous everything” era we are in and the role of open source as a major enabler of innovation velocity. Getting a handle on issues as early as the test stage benefits DevSecOps teams in both the short and long term, with runtime controls and risk analysis in each phase of the application delivery lifecycle.
  • Like any new technology, care has to be taken to understand the benefits and potential pitfalls of containers and orchestration technologies. Kubernetes (K8s), for example, is a wonderful tool but can lead to compromised security if configured incorrectly.
  • Essential elements for container orchestration include networking, storage, security and monitoring, and management capabilities. While the rapid pace with which container ecosystems and orchestration solutions are advancing is overwhelmingly positive for enterprises, this fast adoption cuts both ways: keeping on top of new functionality (particularly with respect to security) becomes critical.
  • Security and SecOps teams need a fundamental understanding of containers, which have a full application lifecycle of their own. There’s a lot you can do up front to integrate security. Think about the functionality you expect from traditional orchestration; all of it is available in K8s, but it needs to be secured.
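
A minimal sketch of the whitelisting idea mentioned above, written in Python: it checks that every image reference in a deployment comes from an approved registry before anything is rolled out. The registry names, image references, and the ALLOWED_REGISTRIES constant are hypothetical; in practice this kind of policy is usually enforced by an admission controller or policy engine rather than a standalone script.

```python
# Minimal sketch: reject container images that do not come from an
# approved (whitelisted) registry before they are deployed.
# ALLOWED_REGISTRIES and the sample images are hypothetical examples.

ALLOWED_REGISTRIES = {"registry.internal.example.com", "quay.io"}

def registry_of(image: str) -> str:
    """Return the registry part of an image reference."""
    if "/" not in image:
        return "docker.io"          # e.g. 'nginx:1.25' is an implicit Docker Hub image
    first = image.split("/", 1)[0]
    # Docker treats the first path component as a registry only if it looks like a host.
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"              # e.g. 'library/nginx:1.25'

def check_images(images: list[str]) -> list[str]:
    """Return the images that violate the whitelist policy."""
    return [img for img in images if registry_of(img) not in ALLOWED_REGISTRIES]

if __name__ == "__main__":
    manifest_images = [
        "registry.internal.example.com/payments/api:2.4.1",
        "nginx:1.25",  # implicitly docker.io, rejected by this policy
    ]
    violations = check_images(manifest_images)
    if violations:
        raise SystemExit(f"Blocked: images from non-whitelisted registries: {violations}")
    print("All images come from whitelisted registries.")
```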

Configuration

  • Orchestration of Containers:
    • Configuration Management – inserting state information into a container for a customer;
    • Scaling – the ability to handle high volumes of load with extreme robustness during peak hours.
  • Deployment of Containers:
    • Delivery – it should be highly available, so using a centralized registry is the better choice;
    • Management – what is running on the containers? How many containers are running? How many resources are being used?
    • Cost – the ability to easily spin up multiple containers in an environment can significantly reduce a company’s operating costs. People using containers also want access to storage from within containers; the flaws that existed before are still out there, and people need to understand how to layer containers following best practices. Storage plugs into containers easily.
  • A big thing that container platforms don’t do is orchestrate releases — managing the containers is not managing the release! You need a centralized way to view and manage all the activities in the pipeline, whether they’re being handled by technical people or business people. Everyone needs to be able to see into every part of the release in order to optimize the entire process. Also huge is managing the configuration complexity that comes with scale. When you orchestrate and deploy containers, you need to make sure that you have the right settings for the right environment, and you need a way to visualize which version is where. The more deployable units you have, the more you have to keep track of, so if you don’t centralize your configuration management, you’re going to spend a lot of time duplicating configurations (a configuration-lookup sketch follows this list). Centralizing your applications and configuration data also helps you avoid vendor lock-in.
  • From a security point of view, another thing that becomes more complex with containers is patching. In the old world, if you wanted to stay compliant with the latest security patches, you would patch your service directly. In the world of containers, you don’t do that, because containers are immutable: you do not go into a container and patch it. Rather, when a vulnerability comes up, you rebuild the container with a base image that doesn’t have that vulnerability (a rebuild-and-rescan sketch follows this list). We build compliance and security checks into the pipeline, which helps you stay compliant and easily track who’s doing what. We also help you make sure the containers you’re deploying are always security scanned, through our integrations with tools like Black Duck, Fortify, SonarQube, and Checkmarx. In fact, we just announced a new security risk dashboard that gives you instant visibility into security and compliance issues.
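
One way to read the point above about centralizing configuration: keep a single source of truth with shared defaults plus per-environment overrides, and resolve it at deploy time instead of duplicating settings for every deployable unit. The sketch below assumes a hypothetical app-config.json document and environment names.

```python
# Minimal sketch: resolve per-environment settings from one centralized
# configuration document instead of duplicating them per deployable unit.
# The file name, environments, and keys below are hypothetical examples.
import json
import os

def load_config(path: str, environment: str) -> dict:
    """Merge shared defaults with environment-specific overrides."""
    with open(path) as fh:
        doc = json.load(fh)
    settings = dict(doc.get("defaults", {}))
    settings.update(doc.get("environments", {}).get(environment, {}))
    return settings

if __name__ == "__main__":
    # Example centralized document (would normally live in a config repo or service):
    example = {
        "defaults": {"log_level": "info", "replicas": 2},
        "environments": {
            "staging":    {"replicas": 1},
            "production": {"log_level": "warn", "replicas": 6},
        },
    }
    with open("app-config.json", "w") as fh:
        json.dump(example, fh)

    env = os.environ.get("DEPLOY_ENV", "staging")
    print(env, load_config("app-config.json", env))
```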
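
Because containers are patched by rebuilding rather than by patching in place, a pipeline step along these lines is common. This sketch shells out to the docker CLI and to the open source Trivy scanner (a stand-in here; the respondents mentioned Black Duck, Fortify, SonarQube, and Checkmarx). It assumes a Dockerfile that accepts a BASE_IMAGE build argument, and the image names are hypothetical.

```python
# Minimal sketch: "patch" a containerized service by rebuilding it on an
# updated base image, scanning the result, and pushing only if the scan passes.
# Assumes a Dockerfile that accepts a BASE_IMAGE build argument, plus the
# docker CLI and the Trivy scanner on PATH. Image names are hypothetical.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise and stop the pipeline if the step fails

def rebuild_scan_push(image: str, patched_base: str) -> None:
    # 1. Rebuild against the base image that contains the security fix.
    run(["docker", "build", "--build-arg", f"BASE_IMAGE={patched_base}", "-t", image, "."])
    # 2. Fail the pipeline if high/critical vulnerabilities remain.
    run(["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", image])
    # 3. Only a clean image reaches the registry.
    run(["docker", "push", image])

if __name__ == "__main__":
    rebuild_scan_push(
        image="registry.internal.example.com/payments/api:2.4.2",
        patched_base="python:3.12-slim",
    )
```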

Other

  • Take baby steps. Decompose everything into Java services and encapsulate them in Docker; this results in fewer configuration issues. Make orchestration management easier with K8s. I recommend taking one step at a time. There is value in the journey. A lot of learning and growth needs to happen across the team. Slow down and understand each step — what works and what doesn’t. Parking the boat in Docker-land may be enough.
  • We provide software that allows clients to create stateful containers. Customers can take existing applications, put them into a container, bring them up, deploy quickly, and fail over as needed. VMs are considered heavyweight; containers are the second wave and are more lightweight, costing less than a VM stack. Containers let developers quickly deploy and spin up a one-time image, test against it, and destroy it.
  • We’ve gone from standing up one cluster to many clusters. Container technologies serve two audiences: the enterprise IT team, which cares about the security and compliance of the applications, and the people developing and deploying the applications, who want to move fast and deploy quickly without violating compliance policy.
  • Enable containers to run persistent workloads in large distributed data centers. Containers are easy to spin up, take down, and scale based on your application. Provide one big scalable storage cluster, create multiple virtual disks, and manage the data within them. Some clients just getting started with containers have anywhere from 15 TB to hundreds of TBs of data, and they now use containers for stateful applications.
  • The efficiency of using the technology and orchestration tools. Visibility into deployments. Improving the speed, deliverability, and scalability of the containers themselves. Better resource efficiency, better density, improved isolation of failures, and better development hygiene. Containers simplify development while complicating the deployment infrastructure: instances exist and then they disappear, so which containers are talking to which other containers? Customers are running a combination of containers, VMs, and on-prem systems, which requires a broad set of tools that run across the various platforms.
  • I look at two different groups: 1) Fast-moving hipsters who are ready to go, know what they want, and are willing to start from scratch on greenfield projects; OpenShift is an easy recommendation for faster-moving languages, frameworks, and things like that. 2) Those migrating legacy applications, who want to invest in a container platform and build a business case to justify the training, the investment in the platform, and building out the infrastructure.
  • As part of our managed service, we offer K8s. Most of our customers have already decided to run containers and K8s; they need coaching on how to implement K8s and help with our managed solution. Sometimes they have legacy systems that need to integrate with containers or K8s, legacy apps that need porting, or questions about how to monitor the health of their containerized applications. K8s was fairly simple two years ago but has since added hundreds of APIs and features.
  • There are two customer personas: 1) people focused on cluster development and operations, who fall on the DevOps side of the world, and 2) application developers trying to get microservices out quickly. We focus on the cluster operator and on making cluster operations, management, and scaling easy: going from zero to a running cluster in a matter of minutes. It should be easy to use and to get going with, so teams don’t need to manage and orchestrate containers and can focus on application- and service-level code.
  • We moved microservices from SQL and .NET into a containerized React/NoSQL stack. We moved to a containerized environment to be more robust and to save money in the long run. It’s self-documenting with Dockerfiles, simplifies the deployment process, and is super easy to organize.
  • Everyone is doing something with containers, and the momentum has picked up in the last 12 months. Typically, K8s is the winner in what people are using. Docker is central, but the avenue of access is dominated by K8s. Mesos Containerizer is still there, and very large container operations tend to use Mesos rather than K8s. Microsoft offers an on-premise hybrid approach that’s available now too, running on-prem and in the cloud. K8s is the language everyone is speaking, whether through OpenShift or GCS.
  • Containers are really a great easy-to-use technology and containers themselves really don’t have too many pitfalls. The key issue we see is when people try to containerize environments/servers that they don’t really understand. They have a server that they’ve been hacking/patching for years and then they try to containerize it and deploy it elsewhere. That does NOT work. Containers work when you have a mature deployment process and you want to automate it. It’s really the same story as any automation technology. It’s great when you have a mature process you want to automate. But if you try to automate something you don’t really understand you quickly get into issues.
  • Containers are a great packaging and abstraction mechanism, allowing you to package dependencies easily into a single “unit” of work. This (somewhat) decouples the mechanics of scaling the application (e.g., you just start more containers). Beyond that, however, the same elements that apply to any other app/service/executable also apply to containers. These are largely summed up under the headings of repeatability, observability, immutability, and correctness.
  • The most important element is operationalization. It is a recipe for disaster to adopt a container platform in a vacuum within a single engineering team or worse, a single individual. For both success and business value to be possible, the solutions implemented should be clearly documented and augmented with service management principles. Teams should be on-boarded such that they understand the process to implement, scale, troubleshoot, and even locate running applications, particularly during service impacting events.
  • When it comes to HPC, performance is critical, so launch times and minimal overhead are key to containers displacing traditional bare-metal workloads. Another important element of orchestrating and deploying containers is integration: with persistent storage, graphics processing units (GPUs), and the like, containers have vastly improved over the past two years in this regard (a GPU-and-storage pod sketch follows this list).
  • The most important elements of orchestrating containers are the ability to declaratively define application policies that are enforced at runtime to maintain the desired state (e.g., the number of application pods, their types, and their attributes) so that critical applications always remain available. Most recently, auto-scaling pods have also become a very important element for ensuring predefined application SLAs are always met (a declarative Deployment-plus-autoscaler sketch follows this list). The ease of deploying containers matters as well: companies require the ability to develop, test, and deploy container-based applications quickly and seamlessly using their CI/CD pipelines.
  • We lead with open source, given our portfolio and how we go to market. Modernizing how applications are delivered is a bigger transition: moving away from a traditional ITIL model to a DevOps, CD model and gaining the agility that enables self-service. Create a CI/CD pipeline where developers pump in code on one side and it goes through a unified pipeline; part of what gets baked is a container that’s dropped into a registry, whether it’s a cloud registry, Artifactory, or something else, and it becomes the convenient unit to drop into your platform, whether K8s, Nomad, Stargate, or UCS. Don’t lose the forest for the trees; that sometimes happens with containers. You are going through a process shift, and containers are just a piece of the puzzle.
  • While it looks like the whole customer base is running and orchestrating containers, getting started is not easy. Pay attention to the entire lifecycle — build, run, respond: properly set up the containerized application in a CI/CD pipeline, run it reliably with the right performance, and respond if something happens.
  • Containers address multiple use cases on the sliding spectrum of virtualization. 1) Process containers, such as OCI-compliant containers, require careful scheduling, provisioning, and orchestration, as well as ongoing coordination, to address use cases such as microservice-based cloud-native application development and operations. K8s stands out as the clear winner in the open source ecosystem as a way to achieve standard operational paradigms for OCI container orchestration. 2) Machine containers resemble virtual machines in their behavior, but without the overhead of full virtualization. LXC containers using the LXD hypervisor can be managed locally or within an LXD cluster, which is natively included. 3) Snap application containers deliver secure, over-the-air updates and device management to address the challenge of immutability, verifiability, and authenticity of the contained application. 4) Independent of the container format and use case, varying levels of isolation carry varying levels of security and external protection requirements, for example through namespace isolation, mandatory access control (MAC) via AppArmor, and other system-level security protections.
  • The most important elements are auto-scaling, high availability, and rolling upgrades; these are where the real benefits lie (see the sketches that follow).
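
To make the declarative desired state, rolling upgrades, and auto-scaling mentioned above concrete, here is a sketch using the official Kubernetes Python client. The application name, image, namespace, and thresholds are hypothetical, and most teams would express the same objects as YAML applied through their CI/CD pipeline rather than imperative code.

```python
# Minimal sketch: declare desired state (replicas, rolling-update strategy)
# and an autoscaler with the official Kubernetes Python client.
# The app name, image, namespace, and thresholds are hypothetical examples.
from kubernetes import client, config

def main() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    labels = {"app": "payments-api"}

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="payments-api"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # desired state the controller keeps enforcing
            selector=client.V1LabelSelector(match_labels=labels),
            strategy=client.V1DeploymentStrategy(
                type="RollingUpdate",
                rolling_update=client.V1RollingUpdateDeployment(
                    max_unavailable=0, max_surge=1  # upgrade without downtime
                ),
            ),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="api",
                        image="registry.internal.example.com/payments/api:2.4.2",
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "250m", "memory": "256Mi"},
                        ),
                    )
                ]),
            ),
        ),
    )

    # Autoscaler (autoscaling/v1): keep average CPU utilization around 70%.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="payments-api"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="payments-api"
            ),
            min_replicas=3,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )

if __name__ == "__main__":
    main()
```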
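
The GPU and persistent-storage integration mentioned earlier can be expressed the same way. This sketch, again with the Kubernetes Python client and hypothetical names, requests one GPU via the nvidia.com/gpu resource (which assumes the NVIDIA device plugin is installed in the cluster) and mounts an existing PersistentVolumeClaim.

```python
# Minimal sketch: a pod that requests one GPU and mounts persistent storage.
# Assumes an existing PersistentVolumeClaim named "training-data" and the
# NVIDIA device plugin (which exposes the nvidia.com/gpu resource); the pod
# name, image, and claim name are hypothetical examples.
from kubernetes import client, config

def main() -> None:
    config.load_kube_config()
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="hpc-job"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="registry.internal.example.com/hpc/trainer:1.0",
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"},  # schedules onto a GPU node
                    ),
                    volume_mounts=[
                        client.V1VolumeMount(name="data", mount_path="/data")
                    ],
                )
            ],
            volumes=[
                client.V1Volume(
                    name="data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="training-data"
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    main()
```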
