Tolga Talks Tech is a weekly video series in which Onica’s CTO Tolga Tarhan tackles technical topics related to AWS and cloud computing. This week, Tolga discusses Amazon ECS vs. Amazon EKS with Nate Fox, Engineering Director at Onica.
What are ECS and EKS?
ECS and EKS are both container management platforms. Amazon Elastic Container Service (ECS) is AWS’ proprietary container orchestration platform. Amazon Elastic Container Service for Kubernetes (EKS) is AWS’ managed Kubernetes service. Essentially, EKS runs Kubernetes (a widely adopted orchestration platform for Docker containers) and manages not only your Docker containers across multiple machines, but also load balancing, storage, and secrets, among other things.
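To make the idea of a container orchestration platform concrete, here is a minimal, hypothetical ECS task definition, shaped like the payload you would pass to boto3’s `ecs.register_task_definition()`. All names and values (family, image, CPU/memory sizes) are illustrative, not taken from the video:

```python
# A minimal, hypothetical ECS task definition: the orchestrator takes this
# description and handles placing and running the containers for you.
task_definition = {
    "family": "web-app",                       # hypothetical task family name
    "networkMode": "awsvpc",                   # each task gets its own ENI
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:1.25",             # any Docker image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,                 # task stops if this container stops
        }
    ],
}

print(task_definition["family"])
```

In practice you would pass this dict to `register_task_definition` and then point an ECS service at the registered task definition; ECS takes care of scheduling and restarts.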
How is EKS different from Amazon ECS?
- Amazon ECS runs its own containers and its own orchestration systems behind the scenes for you, whereas Amazon EKS runs Kubernetes, the open-source system governed by the Cloud Native Computing Foundation (CNCF).
- EKS offers all the features of ECS, plus additional capabilities such as VPC-based pod networking and cluster-level isolation. Kubernetes also has a very large open-source community around it, which brings a wide range of software and capabilities to the system.
- Being Amazon’s own creation, ECS integrates tightly with other AWS services, whereas integration can be more cumbersome with EKS.
How is EKS different from running Kubernetes on Amazon EC2?
Running Kubernetes on Amazon EC2 means you have to run your own masters. The masters typically comprise etcd as well as the API server. Amazon EKS will run six machines, three of them etcd and three of them API servers, and will manage them for you. That means EKS handles upgrades, availability across Availability Zones (AZs), and security, with authentication integrated into IAM. It’s a well-packaged solution.
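The IAM integration mentioned above works through kubectl’s exec-based authentication: kubectl shells out to the AWS CLI, which signs a token with the caller’s IAM credentials. Below is a sketch of the kubeconfig structure that wires this up. The cluster name, endpoint, and CA data are hypothetical placeholders; in practice they come from the EKS DescribeCluster API, or `aws eks update-kubeconfig` generates the whole file for you:

```python
# Sketch of a kubeconfig for an EKS cluster with IAM-integrated auth.
# Endpoint, CA data, and cluster name are placeholders, not real values.
cluster_name = "demo"                                        # hypothetical
endpoint = "https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com" # placeholder
ca_data = "BASE64-ENCODED-CA"                                # placeholder

kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {"name": cluster_name,
         "cluster": {"server": endpoint,
                     "certificate-authority-data": ca_data}}
    ],
    "users": [
        {"name": cluster_name,
         "user": {"exec": {
             # kubectl runs this command; the AWS CLI returns a token
             # signed with the caller's IAM credentials.
             "apiVersion": "client.authentication.k8s.io/v1beta1",
             "command": "aws",
             "args": ["eks", "get-token", "--cluster-name", cluster_name],
         }}}
    ],
    "contexts": [
        {"name": cluster_name,
         "context": {"cluster": cluster_name, "user": cluster_name}}
    ],
    "current-context": cluster_name,
}
```

Because the token is derived from IAM, access control for the API server plugs into the same users and roles you already manage in AWS.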
Best practices for getting started with Amazon EKS
One important caveat: the user or role that creates the cluster is initially the only one with the ability to grant additional permissions. So if you have an automated pipeline and your Jenkins user is the one that creates the cluster, only that Jenkins user can grant other people access to kubectl operations such as listing pods or accessing other namespaces.
So the best practice is to deploy additional access roles immediately after creating the cluster. Keep all of your role mappings in configuration management, GitHub, or some other form of source control so you know exactly what changed and when, and roll them out as part of your deployment.
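In EKS, the mechanism for mapping additional IAM roles into the cluster is the `aws-auth` ConfigMap in the `kube-system` namespace. Here is a sketch of that manifest expressed as a Python dict (mirroring the YAML you would keep in source control and apply right after cluster creation). The account ID, role names, and groups are hypothetical:

```python
# Sketch of the aws-auth ConfigMap that maps IAM roles to Kubernetes
# users and groups. All ARNs and group assignments are illustrative.
AWS_AUTH = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "aws-auth", "namespace": "kube-system"},
    "data": {
        # Note: the value of mapRoles is itself a YAML-formatted string.
        "mapRoles": (
            "- rolearn: arn:aws:iam::111122223333:role/eks-node-role\n"
            "  username: system:node:{{EC2PrivateDNSName}}\n"
            "  groups:\n"
            "    - system:bootstrappers\n"
            "    - system:nodes\n"
            "- rolearn: arn:aws:iam::111122223333:role/team-admins\n"
            "  username: admin\n"
            "  groups:\n"
            "    - system:masters\n"
        ),
    },
}
```

Keeping this manifest in source control and applying it as part of your deployment pipeline means cluster access changes are reviewable and auditable, rather than locked behind the single identity that created the cluster.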