Tolga Talks Tech: Containers

Tolga Talks Tech is a weekly video series in which Onica’s CTO Tolga Tarhan tackles technical topics related to AWS and cloud computing. This week, Tolga discusses containers with Nate Fox, Onica’s Practice Director for Cloud Native Development.

We want to talk about customers that are adopting hybrid and multi-cloud environments. In the past couple of years, as customers have looked into ways to accomplish this, we’ve often told them, “Hey, there’s not a really good abstraction for moving applications portably between these environments.” Nowadays, we’re talking a lot more about containers as possibly that abstraction. What’s changed, and what’s the state of the art right now?

Right now, running containers across multiple clouds is a lot easier; the barrier to entry has come down. It hasn’t disappeared, but it’s much lower. Each cloud provider has its own native ability to run containers and its own orchestration layers. Now you have Kubernetes, which has frankly taken over the world. Kubernetes gives you one consistent way of deploying, running, and describing an entire environment. However, one of the big drawbacks is that every cloud provider runs its own master tier, its own control plane, for its managed Kubernetes. It’s a little trickier to have one central place and then run all the workloads across multiple clouds, because your workers may not be on the same cloud as your masters. So the barrier to entry has come down, but it’s not completely gone.
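To make that portability concrete, here is a minimal Kubernetes Deployment manifest; the name, labels, and image are illustrative, but the same file can be applied unchanged to a managed cluster on AWS, Azure, or Google Cloud:

```yaml
# Minimal Deployment sketch; the app name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```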

Within one cloud, you can leverage the managed Kubernetes runtime of the cloud provider, and you can do that separately in each cloud. The real gain here, it sounds like, is around a common way to package the application and to describe the environment it needs to operate in.

Once you have that running system, with Terraform you can write out something for Google Cloud, something for Azure, and something for AWS, and get managed Kubernetes running in each. Then you need to be able to deploy to it, and that deployment mechanism is going to be the same across all three. So yes, there are a couple of things you have to do per cloud, but once that’s done, everything runs from the same distribution, the same code repo, alongside the same Dockerfiles. You’re able to do infrastructure as code much more easily, and almost application deployment as code, because everything runs the same way across all environments.
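As a rough sketch of the per-cloud piece, the AWS side in Terraform might look like the snippet below. The resource type is real, but the cluster name, IAM role, and subnets are placeholders; the Azure and Google equivalents (azurerm_kubernetes_cluster and google_container_cluster) follow the same pattern, and only this layer differs per cloud:

```hcl
# Provision a managed Kubernetes control plane on AWS (illustrative values).
resource "aws_eks_cluster" "demo" {
  name     = "demo"                    # placeholder cluster name
  role_arn = aws_iam_role.cluster.arn  # assumes an IAM role defined elsewhere
  vpc_config {
    subnet_ids = var.subnet_ids        # assumes subnets supplied as a variable
  }
}
```

Everything above this layer, the manifests and Dockerfiles in the repo, stays identical across all three clouds.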

It’s not just about the packaged-up contents of the Docker container; it’s also about all the other requirements that are codified in the Kubernetes deployment. Can you tell us more about what that includes?

You have your Services, which are basically load balancing your requests. Then there are CRDs (Custom Resource Definitions), which give you the ability to request things. For example, if an application needs a queue, in AWS that would be an SQS queue, in Azure it would be a Service Bus queue, and Google has Pub/Sub. Instead of each application having to request its own queuing type for each cloud, you can tell Kubernetes, “Hey, I need this one thing to run,” and Kubernetes will go out and get that for you. The same goes for persistent storage: you’re going to have different pieces of storage underneath, but you’re effectively telling Kubernetes what you need, and Kubernetes becomes your common API layer rather than Amazon’s API, Azure’s API, and Google’s API.
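Here is a brief sketch of that common API layer: a Service that load-balances requests to the pods, and a PersistentVolumeClaim that requests storage in cloud-neutral terms (names and sizes are illustrative). Each cloud’s storage driver satisfies the same claim with its own disk type, such as EBS on AWS, Azure Disk, or Persistent Disk on Google Cloud:

```yaml
# A Service load-balancing requests to the pods labeled app: web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# A cloud-neutral storage request; the cluster's storage driver
# provisions the matching disk type on whichever cloud it runs on.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```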

So if you’ve got a traditional on-prem environment today that’s based on VMs and you’re looking to go to the cloud, when would you recommend adopting containers versus continuing to move forward in a VM-based world?

What we’re seeing now is that VMs aren’t really the first thing we go to; we’re seeing a lot of container-first work. Back in the day, most people didn’t know what their application needed, so they just assumed they needed their whole operating system install. That’s sometimes still a little bit true; we also see a lot of Dockerfiles that yum install off the internet and just download everything. Ultimately, once you can codify your dependencies, and sometimes it takes a few tries to get them right, you’ve pretty much got a Dockerfile. Large containers are a little slower to download the first time, but Docker’s copy-on-write filesystem makes the layers much faster to reuse. I have seen multi-GB containers, but after one has been deployed once, it gets redeployed on the same host quickly, even for updates to the same software, because the unchanged layers are already cached there. So as long as you’re structuring your Dockerfiles fairly well, containers, even big ones, can go out really quickly.
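That structuring point comes down to layer order. A minimal Dockerfile sketch (the base image and commands are illustrative): dependencies that change rarely go in early layers so Docker’s cache can reuse them, and the frequently changing application code goes last, so a typical update only rebuilds and re-pulls one small layer:

```dockerfile
FROM python:3.12-slim            # illustrative base image
WORKDIR /app

# Dependencies change rarely; these layers are cached and reused
# across redeploys to the same host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code changes often; keeping it last means only this
# small layer is rebuilt and re-pulled on a typical update.
COPY . .
CMD ["python", "app.py"]
```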

If you are interested in more information regarding containers and deployment, download our Understanding Container Services on AWS whitepaper.
