AWS Cost Optimization Tools & Best Practices



AWS Cost Optimization Resources

AWS cost optimization is at the heart of Amazon’s cloud experience. After all, the ability to do more with less, accelerating business while lowering infrastructure costs, is the promise of the cloud. The cost optimization pillar of the AWS Well-Architected Framework (WAF) provides guidance on how to design, monitor, and respond to technology and business conditions so that you pay only for the resources you need and use.

The WAF cost optimization pillar emphasizes four areas of concern: Cost-Effective Resources; Matching Supply and Demand; Expenditure Awareness; and Optimizing Over Time. This article will look at each area and review the AWS tools and best practices that can be used to address each one.*

How Do I Optimize Cloud Costs?

Ensuring your AWS resources are cost-effective is a matter of matching the right resource, at the right size, with the right payment structure. AWS recommends the following approaches to achieving a cost-effective architecture:

  1. Appropriately provisioned
  2. Right sizing
  3. Instance purchasing options
  4. Geographic selection
  5. Managed services
  1. Appropriately provisioned. Ensuring you have enough capacity, but not too much, is critical for building a cost-effective AWS architecture. Using the AWS Management Console or the APIs and SDKs, you can modify your AWS implementation to adjust to shifting demands. For example, you can change the number of nodes in an Amazon Elastic MapReduce (EMR) cluster to adapt to increases or decreases in data processing, or group multiple instances of an AWS resource to enable higher-density usage. Because manually provisioning systems is time-consuming, Amazon recommends integrating the APIs and SDKs with Amazon CloudWatch monitoring to automate adjustments to resource utilization.
  2. Right sizing. On AWS, right sizing means using the lowest-cost resource that still meets your workloads’ requirements. AWS provides APIs, SDKs, and console features that allow resources to be modified as demands change. For example, you can take a snapshot of an Elastic Block Store (EBS) volume and restore it to a different volume type with higher IOPS and/or throughput. The best practice here is to use CloudWatch, including custom CloudWatch metrics and logs, to set alarms on resource thresholds and trigger resource changes. It is important to select a monitoring period long enough to capture your highest resource usage. For example, a weekly report may not take into account end-of-month activities that require higher utilization, and you would risk under-provisioning your system.
  3. Purchasing options. There are three types of instances on AWS, On-Demand, Spot, and Reserved, and each has a different pricing model. You pay for your AWS instances by the hour, and On-Demand Instances are the most expensive. Think of it as buying an airline ticket on the day you want to travel: you will pay a higher price than if you had reserved a seat in advance. That said, sometimes flexibility is more important than price, or unexpected events require an on-demand change in configuration. Spot Instances are usually the least expensive. You can bid on unused AWS capacity for short-term use, and purchase blocks of time to protect against changes in Spot pricing over the course of several hours. Reserved Instances require advance purchase, or reservations: you prepay at a much lower rate than On-Demand pricing. Reserved Instances can be purchased for one- or three-year terms, with lower prices for longer commitments. Prices also differ depending on the Availability Zones you need, and on whether you want the flexibility to convert instances to different instance sizes or platforms during the term of the reservation.
  4. Geographic selection. AWS offers tools that use geographic location to reduce latency and increase reliability. AWS operates in multiple regions around the world, and you can select the region or regions that will offer your end users the best (fastest) experience. You can use the AWS Simple Monthly Calculator to model what a solution would cost in different regions and compare those costs. You can also use AWS CloudFormation or AWS CodeDeploy to provision a proof-of-concept environment in each candidate region, run workloads through those environments, and analyze the system costs in each one. The Amazon Route 53 DNS service lets you use domain names to route user requests to the AWS region that will give your users the fastest response (latency-based routing).
  5. Managed services. AWS managed services allow you to outsource the operational work of running a service. AWS provides managed database services, such as Amazon RDS and Amazon DynamoDB, which can reduce the cost of database capabilities while freeing up time for your developers and database administrators. AWS Lambda is a serverless compute service that allows you to execute code without base-level infrastructure costs; charges are based on the compute time you consume. Amazon SQS, Amazon SNS, and Amazon SES are application-level services that you can also use without paying for base-level infrastructure.
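The purchasing-options trade-off above can be made concrete with a break-even calculation: a reservation pays off once the accumulated hourly savings exceed the upfront cost. This is a minimal sketch; the hourly rates and upfront price below are made-up placeholders, not real AWS prices.

```python
# Illustrative break-even analysis between On-Demand and Reserved pricing.
# All rates are fabricated examples, not actual AWS list prices.

def break_even_hours(on_demand_hourly: float, reserved_upfront: float,
                     reserved_hourly: float) -> float:
    """Hours of usage after which a Reserved Instance becomes cheaper
    than paying the On-Demand rate for the same instance."""
    savings_per_hour = on_demand_hourly - reserved_hourly
    if savings_per_hour <= 0:
        raise ValueError("Reserved hourly rate must be below the On-Demand rate")
    return reserved_upfront / savings_per_hour

# Example: $0.10/hr On-Demand vs. a $300 upfront reservation at $0.04/hr.
hours = break_even_hours(0.10, 300.0, 0.04)
print(f"Break-even after {hours:.0f} hours (~{hours / 730:.1f} months)")
# → Break-even after 5000 hours (~6.8 months)
```

A workload expected to run well past the break-even point is a good candidate for a reservation; one that is shorter-lived or bursty may be cheaper On-Demand or on Spot.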

Matching Supply and Demand for AWS Cost Optimization

When your infrastructure supply is optimized to meet user demand, you have the opportunity to deliver services at the lowest possible cost. AWS supports a number of approaches to match supply with demand:

  1. Demand-based approach
  2. Buffer-based approach
  3. Time-based approach
  1. Demand-based approach. A demand-based approach to matching supply and demand leverages the “elasticity” of the AWS cloud. Elasticity refers to the ability to scale up and down, in and out, managing capacity and provisioning resources as demand changes. AWS recommends using APIs or service features to dynamically allocate cloud resources in your architecture. You can scale specific components of your architecture, automatically increasing the number of resources during demand spikes to maintain performance and decreasing capacity during slow periods to reduce costs. The AWS best practice for demand-based resource allocation is Auto Scaling, which allows you to add and remove EC2 instances automatically according to rules you define using CloudWatch. You can also group instances to automate scaling for larger configurations. Auto Scaling is usually used with Elastic Load Balancing (ELB) to distribute incoming application traffic across multiple Amazon EC2 instances.
  2. Buffer-based approach. A buffer is a mechanism that allows applications to communicate with each other even when they are running at different rates over time. Buffer-based matching of supply and demand involves decoupling components of a cloud application and establishing a queue to accept workloads (called messages). The messages can be read by the host that will process the workload, and the host can process each request at its own rate. For example, if a workload that generates significant write load doesn’t need to be processed immediately, you can use a buffer to smooth out demands on resources. Key AWS services that enable buffer-based capacity management are Amazon SQS and Amazon Kinesis, which simplify the separation of components of cloud applications. You can also use Spot Instances to process heavy workloads on the fly, and AWS Lambda to run serverless code for workloads without the cost of an instance.
  3. Time-based approach. Time-based matching of supply and demand involves aligning resource capacity to demand that is predictable over specified time periods. If you know when resources are going to be required, you can time your system to make the right resources available at the right time. With AWS, you can implement time-based resource allocation by scheduling your Auto Scaling. For example, for businesses that use 90 percent of a system’s capacity during business hours, you can time your resources to scale up at the beginning of the day and down at the end, ensuring that resources are available when needed and removed when demand drops. You can also use AWS CloudFormation to build templates that allow you to quickly create and provision AWS resources when needed.
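The time-based approach above boils down to a schedule that maps the time of day to a desired instance count. This sketch models that mapping in plain Python; the peak window and capacities are illustrative assumptions, and in practice you would encode the same schedule as Auto Scaling scheduled actions rather than compute it yourself.

```python
# Sketch of time-based capacity scheduling: choose a desired instance
# count from the hour of day. Hours and counts are assumptions for
# illustration, not AWS defaults.

BUSINESS_HOURS = range(8, 18)   # assumed peak window, 08:00-17:59
PEAK_CAPACITY = 10              # instances to run during business hours
OFF_PEAK_CAPACITY = 2           # instances to run overnight

def desired_capacity(hour: int) -> int:
    """Return the desired instance count for a given hour (0-23)."""
    if hour not in range(24):
        raise ValueError("hour must be between 0 and 23")
    return PEAK_CAPACITY if hour in BUSINESS_HOURS else OFF_PEAK_CAPACITY

print(desired_capacity(9))    # → 10  (peak)
print(desired_capacity(23))   # → 2   (off-peak)
```

The same two-tier schedule, expressed as two scheduled actions (scale up at 08:00, scale down at 18:00), avoids paying for peak capacity around the clock.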

Expenditure Awareness

You can’t manage what you can’t measure, and managing AWS costs requires visibility into what you’re spending across the entire organization. With different teams running different AWS resources, expenditure awareness can be a challenge. Expenditure awareness best practices on AWS involve several considerations, including:

  1. Stakeholders
  2. Visibility and controls
  3. Cost attribution
  4. Tagging
  5. Lifecycle tracking
  1. Stakeholders. In the cloud, infrastructure is not just an IT responsibility. The AWS cloud touches and can transform everything from research and development to customer service. AWS recommends that all relevant stakeholders within your organization be involved in expenditure discussions at every phase of your architecture development. These include finance, operations, product development, IT, and third-party organizations.
  2. Visibility and controls. You need to be able to view your AWS costs at a level granular enough that you know what you’re spending, can break down your costs, and can predict future costs and adjust as technology and market conditions change. AWS offers a free Cost Explorer tool that gives you a graphical view of your costs over the past year and allows you to forecast your spend for the coming months with more than 80 percent confidence. AWS also offers a number of billing and cost reports to help you identify savings opportunities and prevent over-provisioning. The AWS Billing and Cost Management service can be used to create monthly budgets, and you can create high-level reports or fine-grained reports that track the costs of every component of your system. To get even more granular, the Detailed Billing Report with resources and tags and the Cost and Usage Report allow you to view hourly expenses. You can use CloudWatch to set cost-based alarms that notify you of red-flag expense increases.
  3. Cost attribution. Cost attribution allows you to assign specific AWS costs to specific parts of your organization, providing greater accountability and distributing cost optimization responsibilities. You can also link accounts and billing to specific groups you define. For example, you can create separate linked accounts by business unit (such as finance, marketing, and sales), by environment lifecycle (such as development, test, and production), or by project, and use consolidated billing to aggregate these linked accounts.
  4. Tagging. Tagging allows you to organize usage and billing information around virtually any category. You assign a tag to AWS resources and then collect data and information about all resources with that tag. For example, if you want to see how much a specific application is costing your organization, you can tag the assets used by that application and run a report to discover the overall cost of the application. You can also use tags to perform resource management tasks at scale by listing resources with specific tags, then executing the appropriate actions. For example, if you are running a test, and tag all assets used with a “test” tag, you can automate the removal of these resources when the test is complete.
  5. Lifecycle tracking. One of the challenges of running a large AWS implementation is identifying resources that are no longer being used. AWS recommends using AWS Config, which continuously records configuration changes, to create a detailed inventory of your AWS resources. You can also use AWS CloudTrail and Amazon CloudWatch to automate the generation of records for resource lifecycle events.
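Tag-based cost attribution, described above, amounts to grouping per-resource costs by a tag value. This minimal sketch shows the idea with fabricated resource records; in practice the cost data would come from the Cost and Usage Report or Cost Explorer, and the tag key and values here are assumptions.

```python
# Sketch of tag-based cost attribution: sum per-resource monthly costs by
# a tag key. The resource records are fabricated examples.
from collections import defaultdict

resources = [
    {"id": "i-0a1", "tags": {"team": "marketing"}, "monthly_cost": 120.0},
    {"id": "i-0b2", "tags": {"team": "finance"},   "monthly_cost": 80.0},
    {"id": "i-0c3", "tags": {"team": "marketing"}, "monthly_cost": 45.5},
    {"id": "i-0d4", "tags": {},                    "monthly_cost": 30.0},
]

def cost_by_tag(resources, tag_key):
    """Sum monthly costs per tag value; untagged resources land in 'untagged'."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(tag_key, "untagged")] += r["monthly_cost"]
    return dict(totals)

print(cost_by_tag(resources, "team"))
# → {'marketing': 165.5, 'finance': 80.0, 'untagged': 30.0}
```

Note the "untagged" bucket: surfacing untagged spend is often the first step toward enforcing a tagging policy.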

Optimizing Over Time

AWS offers tools to help you continually optimize your cloud architecture and ensure that you are not overspending. The cost optimization pillar focuses on two approaches:

  1. Measure, monitor, and improve
  2. Staying evergreen
  1. Measure, monitor, and improve. For larger AWS installations with multiple stakeholders, measuring, monitoring, and improving cost optimization can be a challenge. AWS recommends forming a cost optimization team that meets regularly to “coordinate and manage all aspects of cost optimization, from your technical systems to your people and processes.” Another best practice is to define goals and metrics. For example, AWS suggests setting a goal that On-Demand EC2 instances be turned on and off each day as needed, with 80 to 100 percent compliance being an acceptable range; resources that run 24/7 should probably use Reserved Instances. Analysis and reporting are critical for continuous improvement. The AWS Billing and Cost Management Dashboard (including Cost Explorer and Budgets), Amazon CloudWatch, and AWS Trusted Advisor are recommended for monitoring and analyzing usage, and for validating any implemented cost measures, such as savings from reserved capacity.
  2. Staying evergreen. Staying evergreen means ensuring that you are always using the most up-to-date, cost-effective AWS resources. AWS is constantly improving efficiency, driving down costs and adding productivity tools. You can use AWS Trusted Advisor, a free tool that analyzes your AWS environment and reports opportunities to save money by eliminating unused or idle resources or committing to Reserved Instance capacity.
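The daily on/off goal above can be sanity-checked with simple arithmetic: the saving from scheduling instances off outside working hours is just the difference between 24/7 and scheduled run time. This is a rough sketch; the hourly rate, fleet size, and schedule below are illustrative assumptions, not real prices.

```python
# Rough savings estimate for turning On-Demand instances off outside
# business hours instead of running them 24/7. All figures are
# illustrative placeholders, not actual AWS prices.

def monthly_savings(hourly_rate: float, instances: int,
                    hours_on_per_day: float, days: int = 30) -> float:
    """Dollars saved per month by running a fleet only part of each day."""
    full_time = hourly_rate * instances * 24 * days
    scheduled = hourly_rate * instances * hours_on_per_day * days
    return full_time - scheduled

# 5 instances at $0.10/hr, running 10 hours a day instead of 24:
print(f"${monthly_savings(0.10, 5, 10):.2f} saved per month")
# → $210.00 saved per month
```

Even at modest rates, scheduling a small fleet off overnight recovers more than half of its monthly cost, which is why this metric features in AWS’s suggested goals.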

What is Trusted Advisor in AWS?

AWS Trusted Advisor is an application that inspects your AWS environment and makes recommendations for saving money, improving system performance, and closing security gaps. Trusted Advisor draws upon best practices learned from AWS’s aggregated operational history of serving hundreds of thousands of customers to help you improve performance and reduce spend.
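Trusted Advisor checks can also be read programmatically through the AWS Support API (which requires a Business or Enterprise support plan). This is a hedged sketch: the filtering helper is pure Python and runs offline on a sample of the response shape, while the `fetch_checks` call is illustrative and needs real AWS credentials to run.

```python
# Sketch: list Trusted Advisor checks in the cost-optimization category.
# The Support API call requires a Business/Enterprise support plan and
# valid credentials; the sample data below mimics its response shape.

def cost_checks(checks):
    """Keep only the names of checks in the cost_optimizing category."""
    return [c["name"] for c in checks if c.get("category") == "cost_optimizing"]

def fetch_checks():
    import boto3  # imported lazily; only needed for the live call
    support = boto3.client("support", region_name="us-east-1")
    return support.describe_trusted_advisor_checks(language="en")["checks"]

# Offline example using the shape the API returns:
sample = [
    {"name": "Low Utilization Amazon EC2 Instances", "category": "cost_optimizing"},
    {"name": "Security Groups - Unrestricted Access", "category": "security"},
]
print(cost_checks(sample))
# → ['Low Utilization Amazon EC2 Instances']
```

With live credentials you would call `cost_checks(fetch_checks())` instead of using the sample list, then feed the check IDs into `describe_trusted_advisor_check_result` for per-resource findings.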

The Cost Optimization Journey

AWS cost optimization is a journey, not a destination. The cloud and AWS are constantly evolving, and using the tools and best practices of the cost optimization pillar will help you develop processes that keep your cloud implementation as lean and effective as possible.

*This blog summarizes a more detailed AWS document, “Cost Optimization Pillar.”

Learn more about the other Well-Architected Framework pillars:

Need help optimizing your AWS spend?

Cloud computing requires continuous monitoring, analysis, and adjustment to ensure that you are not wasting resources and that the resources you are investing in are driving your business forward. Onica has helped hundreds of companies analyze their AWS services and pricing options to make sure that they are only paying for what they need – drastically reducing their monthly spend. Request a consultation today!
