Running containerized applications with Amazon EKS is a popular choice, but one that still requires a certain amount of manual configuration. If you run ephemeral or on-demand cluster infrastructure, Spot Instances are often the best value for your money. There are several ways to get cheaper instances for your kops cluster, and we'll walk through them here.
How do AWS Spot Instances work?
A Spot Instance is an unused EC2 instance that’s available for use at up to 90% less than On-Demand pricing. If you can be flexible about when your applications run and if your applications can be interrupted, then you can lower your costs significantly. Spot Instances are well-suited for data analysis, containerized workloads, batch jobs, background processing, optional tasks, and other test & development workloads.
The hourly Spot price is determined by supply and demand trends for spare EC2 capacity. You can see the current and historical Spot prices through the AWS Management Console. If the Spot price exceeds your maximum price for a given instance (or if capacity is no longer available), your instance will be interrupted automatically.
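Before picking a maximum price, it helps to know what the market currently charges. One way to check, shown here with illustrative instance type and region values, is the AWS CLI's spot price history call:

```shell
# Check recent Spot prices for a given instance type.
# Instance type, region, and platform below are examples -- substitute your own.
aws ec2 describe-spot-price-history \
  --region us-west-2 \
  --instance-types t2.medium \
  --product-descriptions "Linux/UNIX" \
  --start-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
  --query 'SpotPriceHistory[].{AZ:AvailabilityZone,Price:SpotPrice}' \
  --output table
```

A reasonable rule of thumb is to set your maximum price comfortably above the recent Spot price but below the On-Demand rate.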
Optimize spot instance costs for your kops clusters: Edit a running cluster
All of these methods revolve around kops' cluster-management YAML. For the first example, we'll update a running Kubernetes cluster.
1. kops edit instancegroups nodes
We'll adjust the running nodes' `maxPrice` (see below if you'd like to run a master as a Spot Instance). Running the command `kops edit instancegroups nodes` will open your editor with a YAML file. Add a `maxPrice: "x.xx"` line just above `maxSize`, similar to the file below:
```yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-05-14T00:32:55Z
  labels:
    kops.k8s.io/cluster: nfox.k8s.local
  name: nodes
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: t2.medium
  maxPrice: "0.20"
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - us-west-2a
  - us-west-2b
  - us-west-2c
```
Now save your edit.
Note: If you'd like to run your master(s) as Spot Instances, first run `kops get instancegroups`. That will produce output similar to:
```
Using cluster from kubectl context: nfox.k8s.local

NAME               ROLE    MACHINETYPE  MIN  MAX  ZONES
master-us-west-2a  Master  m3.medium    1    1    us-west-2a
master-us-west-2b  Master  m3.medium    1    1    us-west-2b
master-us-west-2c  Master  m3.medium    1    1    us-west-2c
nodes              Node    t2.medium    2    2    us-west-2a,us-west-2b,us-west-2c
```
From there, run `kops edit instancegroup master-us-west-2a` (substituting your own master names), then follow the directions above for adding `maxPrice`.
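If you have several masters, editing each one interactively gets tedious. As a sketch (assuming GNU sed, the instance-group manifest layout shown above, and an example price), the same change can be scripted with `kops get` and `kops replace`:

```shell
# Sketch: add maxPrice to every master instance group without an editor.
# Assumes GNU sed and the manifest layout shown above; the price is an example.
for ig in master-us-west-2a master-us-west-2b master-us-west-2c; do
  kops get instancegroups "$ig" -o yaml > "${ig}.yaml"
  # Insert maxPrice just above maxSize, preserving indentation.
  sed -i 's/^\(\s*\)maxSize:/\1maxPrice: "0.20"\n\1maxSize:/' "${ig}.yaml"
  kops replace -f "${ig}.yaml"
done
```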
2. kops update cluster --yes
Running the `kops update cluster` command makes sure the edit is saved to the kops state store in S3. It will not affect running instances:
```
Using cluster from kubectl context: nfox.k8s.local

I0513 17:48:36.097401    4290 apply_cluster.go:456] Gossip DNS: skipping DNS validation
I0513 17:48:36.117391    4290 executor.go:91] Tasks: 0 done / 81 total; 30 can run
I0513 17:48:43.244976    4290 executor.go:91] Tasks: 30 done / 81 total; 26 can run
I0513 17:48:47.490211    4290 executor.go:91] Tasks: 56 done / 81 total; 21 can run
I0513 17:48:52.851132    4290 executor.go:91] Tasks: 77 done / 81 total; 3 can run
I0513 17:49:00.360321    4290 executor.go:91] Tasks: 80 done / 81 total; 1 can run
I0513 17:49:03.881453    4290 executor.go:91] Tasks: 81 done / 81 total; 0 can run
Will modify resources:
  EBSVolume/a.etcd-events.nfox.k8s.local
        Tags {Name: a.etcd-events.nfox.k8s.local, k8s.io/etcd/events: a/a, CreatedAt: 2018-05-14T00:33:41Z, PrincipalId: AIDAJRBCBENPOERCO7U2C, kubernetes.io/cluster/nfox.k8s.local: owned, k8s.io/role/master: 1, Owner: nfox, KubernetesCluster: nfox.k8s.local} -> {k8s.io/etcd/events: a/a, k8s.io/role/master: 1, kubernetes.io/cluster/nfox.k8s.local: owned, Name: a.etcd-events.nfox.k8s.local, KubernetesCluster: nfox.k8s.local}

  EBSVolume/a.etcd-main.nfox.k8s.local
        Tags {Name: a.etcd-main.nfox.k8s.local, kubernetes.io/cluster/nfox.k8s.local: owned, k8s.io/etcd/main: a/a, Owner: nfox, KubernetesCluster: nfox.k8s.local, k8s.io/role/master: 1, PrincipalId: AIDAJRBCBENPOERCO7U2C, CreatedAt: 2018-05-14T00:33:41Z} -> {KubernetesCluster: nfox.k8s.local, k8s.io/etcd/main: a/a, k8s.io/role/master: 1, kubernetes.io/cluster/nfox.k8s.local: owned, Name: a.etcd-main.nfox.k8s.local}

  LaunchConfiguration/nodes.nfox.k8s.local
        SpotPrice            -> 0.20

Must specify --yes to apply changes
```
Notice the `SpotPrice -> 0.20` under `LaunchConfiguration/nodes.nfox.k8s.local`. Now add the `--yes` flag to your command, and you should see output like this:
```
$ kops update cluster --yes
Using cluster from kubectl context: nfox.k8s.local

I0513 17:52:42.256531    4319 apply_cluster.go:456] Gossip DNS: skipping DNS validation
I0513 17:52:48.555758    4319 executor.go:91] Tasks: 0 done / 81 total; 30 can run
I0513 17:52:53.729854    4319 executor.go:91] Tasks: 30 done / 81 total; 26 can run
I0513 17:52:59.025025    4319 executor.go:91] Tasks: 56 done / 81 total; 21 can run
I0513 17:53:09.758565    4319 executor.go:91] Tasks: 77 done / 81 total; 3 can run
I0513 17:53:17.065805    4319 executor.go:91] Tasks: 80 done / 81 total; 1 can run
I0513 17:53:17.868182    4319 executor.go:91] Tasks: 81 done / 81 total; 0 can run
I0513 17:53:18.945159    4319 update_cluster.go:291] Exporting kubecfg for cluster
kops has set your kubectl context to nfox.k8s.local

Cluster changes have been applied to the cloud.

Changes may require instances to restart: kops rolling-update cluster
```
3. kops rolling-update cluster [--yes]
You'll probably want to run `kops rolling-update cluster` (without `--yes`) first. This gives you a report of what kops will do. Usually, your report will look something like the following:
```
Using cluster from kubectl context: nfox.k8s.local

NAME               STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-west-2a  Ready        0           1      1    1    1
nodes              NeedsUpdate  2           0      2    2    2

Must specify --yes to rolling-update.
```
Once you're happy, run `kops rolling-update cluster --yes`. At this point, kops starts a rolling update of all of your instances: it drains each instance, terminates it, and replaces it with a Spot Instance. The output will be similar to:
```
Using cluster from kubectl context: nfox.k8s.local

NAME               STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-west-2a  Ready        0           1      1    1    1
nodes              NeedsUpdate  2           0      2    2    2
I0513 17:55:46.522672    4373 instancegroups.go:157] Draining the node: "ip-172-20-121-136.us-west-2.compute.internal".
node "ip-172-20-121-136.us-west-2.compute.internal" cordoned
node "ip-172-20-121-136.us-west-2.compute.internal" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: kube-proxy-ip-172-20-121-136.us-west-2.compute.internal
pod "kube-dns-7785f4d7dc-8bp8k" evicted
node "ip-172-20-121-136.us-west-2.compute.internal" drained
I0513 17:57:48.606634    4373 instancegroups.go:273] Stopping instance "i-011e95b109a3cb8b7", node "ip-172-20-121-136.us-west-2.compute.internal", in group "nodes.nfox.k8s.local".
I0513 18:01:52.214334    4373 instancegroups.go:188] Validating the cluster.
I0513 18:02:10.677302    4373 instancegroups.go:249] Cluster validated.
I0513 18:02:10.677326    4373 instancegroups.go:157] Draining the node: "ip-172-20-71-212.us-west-2.compute.internal".
node "ip-172-20-71-212.us-west-2.compute.internal" cordoned
node "ip-172-20-71-212.us-west-2.compute.internal" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: kube-proxy-ip-172-20-71-212.us-west-2.compute.internal
pod "kube-dns-autoscaler-787d59df8f-ntwpj" evicted
pod "kube-dns-7785f4d7dc-g5wqp" evicted
pod "kube-dns-7785f4d7dc-gk25c" evicted
node "ip-172-20-71-212.us-west-2.compute.internal" drained
I0513 18:03:54.413330    4373 instancegroups.go:273] Stopping instance "i-0a5b2c028f081f1f1", node "ip-172-20-71-212.us-west-2.compute.internal", in group "nodes.nfox.k8s.local".
I0513 18:07:58.058755    4373 instancegroups.go:188] Validating the cluster.
I0513 18:08:14.287260    4373 instancegroups.go:249] Cluster validated.
I0513 18:08:14.287333    4373 rollingupdate.go:193] Rolling update completed for cluster "nfox.k8s.local"!
```
Now we can verify that the nodes are running as Spot Instances with this command:
```shell
aws ec2 describe-instances \
  --filters \
    Name=tag-key,Values=k8s.io/role/node \
    Name=instance-state-name,Values=running \
  --query 'Reservations[].Instances[].{
    SpotReq: SpotInstanceRequestId,
    Id: InstanceId,
    Name: Tags[?Key==`Name`].Value|[0]}' \
  --output table
```
And the output should show something similar in your environment:
```
-----------------------------------------------------------------
|                       DescribeInstances                       |
+---------------------+------------------------+----------------+
|         Id          |         Name           |    SpotReq     |
+---------------------+------------------------+----------------+
|  i-0b6cb27b8409fbd86|  nodes.nfox.k8s.local  |  sir-3e2r8gbn  |
|  i-0d62dcc142dbd7cb8|  nodes.nfox.k8s.local  |  sir-e87gbhpn  |
+---------------------+------------------------+----------------+
```
Create a cluster with spot pricing
In a perfect world, you'd create the cluster with spot pricing from the ground up instead of switching from On-Demand afterwards. However, there is no flag to set spot pricing from the `kops create cluster` command line; the only way to do it is with a YAML file:
1. Create a YAML file from your existing kops create cluster command
We'll modify your existing `kops create cluster` command to emit a YAML file instead of creating the cluster. For example, if our original cluster command is:

```shell
kops create cluster \
  --name nfox.k8s.local \
  --zones=us-west-2a,us-west-2b,us-west-2c \
  --state=s3://my-kops-bucket
```
Note: you’ll need an S3 bucket created beforehand to hold the kops state. In this case, it’s `my-kops-bucket`.
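If you don't have a state bucket yet, one way to create it with the AWS CLI is below; the bucket name and region are examples, so substitute your own:

```shell
# One-time setup: create an S3 bucket for the kops state store.
# Bucket name and region are examples -- substitute your own.
aws s3api create-bucket \
  --bucket my-kops-bucket \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Versioning lets you roll back the state store if an edit goes wrong.
aws s3api put-bucket-versioning \
  --bucket my-kops-bucket \
  --versioning-configuration Status=Enabled
```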
You'd simply add `--dry-run --output yaml` to that command. You'll want to capture the output to a file, so we'll pipe it through `tee`:
```shell
kops create cluster \
  --name nfox.k8s.local \
  --zones=us-west-2a,us-west-2b,us-west-2c \
  --state=s3://my-kops-bucket \
  --dry-run --output yaml | tee nfox.k8s.local.yaml
```
2. Edit the YAML file
Now edit the `nfox.k8s.local.yaml` file as above:

- Scroll to the bottom.
- Edit the nodes `InstanceGroup` by adding the `maxPrice: "x.xx"` line just above `maxSize`.
- Save the file.
3. Create the cluster with 3 commands
Finally, create the cluster with the YAML file. However, we’ll need 3 commands to properly create the cluster:
```shell
kops create -f nfox.k8s.local.yaml
kops create secret sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster --yes
```
Because the `kops create cluster` command line normally handles the SSH key for you, we need to register our own SSH public key for the admin user before we actually apply everything to the cluster with the third command.
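After the third command returns, the cluster still needs a few minutes to come up. One way to wait for it, a sketch assuming the same state bucket as above, is to poll `kops validate cluster`:

```shell
# Poll until the new cluster validates; `kops validate cluster` exits
# non-zero until all masters and nodes are healthy.
until kops validate cluster --state=s3://my-kops-bucket; do
  echo "cluster not ready yet; retrying in 30s..."
  sleep 30
done
kubectl get nodes -o wide
```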
While Amazon EKS helps simplify running Kubernetes workloads, we hope this quick guide proves helpful in optimizing instance costs for your kops clusters!
Want to read more AWS How-To Guides from Onica?
Check out our resources at: https://onica.com/resources/