In this two-part series we’ll talk about a subject which we get asked about quite often: “How do I reduce my Amazon EC2 costs?”
Firstly, it’s important to get your Amazon Elastic Compute Cloud (EC2) discount strategy right. We approach any cost reduction with our 80/20 method, focusing on the quick wins first, and the discount strategy is always at the top of that list. Make sure you have a clear long-term strategy for savings plans and reserved instances, and use Spot instances and AWS Batch where appropriate. But let’s focus on the discount strategy for now.
It’s best to have a mix of both savings plans and reserved instances. Compute savings plans should form the bedrock of your discount strategy. They are flexible in where the discounts apply, covering almost everything in your EC2 platform from a compute perspective, and they apply across regions as well as instance families. This makes savings plans a good way of providing a base level of discount. If you layer convertible reserved instances on top of that, you get an additional level of flexibility that allows term optimization, something you just don’t get with savings plans.
This means you can use the unique features of convertible reserved instances to squash and expand the commitment you’ve made to AWS. Using that capability along with an advance purchasing and forward-planning strategy, you can take advantage of the three-year discounts AWS provides, which are roughly double the one-year discounts. You can then create short-term additional commitments when there are spikes in usage, or reduce commitments when usage drops. So it’s important to get that discount strategy correct; it should form the basis of your AWS cost reduction plan. To learn more about creating an effective discount strategy, check out our article on how to use AWS Reserved Instances in the post-Savings Plan World.
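As a rough sketch of how this layering works out, the snippet below computes a blended effective discount from a base savings-plan layer plus a convertible reserved instance layer. The coverage shares and discount rates are illustrative assumptions, not AWS published figures.

```python
# Sketch: blended effective discount across a layered commitment strategy.
# All percentages below are illustrative assumptions, not AWS pricing.

def blended_discount(layers, on_demand_share):
    """layers: list of (share_of_spend, discount_rate).
    Shares plus on_demand_share must sum to 1."""
    assert abs(sum(s for s, _ in layers) + on_demand_share - 1.0) < 1e-9
    # On-demand usage contributes a 0% discount, so it drops out of the sum.
    return sum(share * rate for share, rate in layers)

# Say 60% of usage is covered by a compute savings plan at a 30% discount,
# 25% by convertible reserved instances at 40%, and 15% runs on demand:
effective = blended_discount([(0.60, 0.30), (0.25, 0.40)], on_demand_share=0.15)
print(f"blended discount: {effective:.1%}")
```

The point of the layering is visible in the numbers: the savings-plan base locks in the bulk of the discount, while the convertible layer is the part you can squash or expand as usage moves.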
Let’s talk about EC2 Spot instances for a moment. Spot is important if your applications can support it. Discounts of up to 90% are available with Spot, and if you automate them with services like AWS Batch then you can have a very effective way of having a low-cost platform on AWS.
A challenge with Spot instances is that they can disappear at very short notice, so make sure your application can spin up replacement instances when a Spot instance is reclaimed, and that data integrity is preserved when that happens.
Legacy applications that are not stateless are not a good fit for Spot. But if your application is more cloud-aware and cloud-native, then Spot instances can be a great way of saving costs.
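AWS gives Spot instances a two-minute interruption notice via the instance metadata endpoint (`http://169.254.169.254/latest/meta-data/spot/instance-action`), which a cloud-aware application can poll and react to. The sketch below just parses the JSON payload that endpoint returns; the sample payload is made up, and on a real instance you would fetch it in a polling loop.

```python
import json
from datetime import datetime, timezone

# Sketch: parsing the Spot two-minute interruption notice. On a live Spot
# instance you would poll the metadata path
#   /latest/meta-data/spot/instance-action
# every few seconds; it returns JSON like the sample payload below once an
# interruption is scheduled.

def parse_interruption_notice(payload: str):
    """Return (action, deadline) from a spot/instance-action payload."""
    notice = json.loads(payload)
    deadline = datetime.strptime(notice["time"], "%Y-%m-%dT%H:%M:%SZ")
    return notice["action"], deadline.replace(tzinfo=timezone.utc)

# Hypothetical payload of the documented shape:
action, deadline = parse_interruption_notice(
    '{"action": "terminate", "time": "2024-09-18T08:22:00Z"}'
)
print(action, deadline.isoformat())
```

Once the notice arrives, you have roughly two minutes to drain work, checkpoint state, and let your scaling layer launch a replacement.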
As of the 24th September you can easily calculate your actual past savings from using Spot, as Cost and Usage Report (CUR) files now include Spot usage pricing alongside the corresponding On-Demand EC2 instance pricing.
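With both prices in the CUR, totting up realised Spot savings becomes simple arithmetic over the report rows. The sketch below assumes the `lineItem/UnblendedCost` and `pricing/publicOnDemandCost` CUR columns; the sample rows and costs are made up for illustration.

```python
import csv
import io

# Sketch: realised Spot savings from CUR-style rows. The sample data is
# invented; a real CUR is a large CSV (or Parquet) export with these columns.
sample_cur = """lineItem/UsageType,lineItem/UnblendedCost,pricing/publicOnDemandCost
SpotUsage:m5.large,0.035,0.096
SpotUsage:m5.large,0.031,0.096
BoxUsage:m5.large,0.096,0.096
"""

spot_cost = 0.0
on_demand_equiv = 0.0
for row in csv.DictReader(io.StringIO(sample_cur)):
    # Spot line items carry a SpotUsage usage type; on-demand rows (BoxUsage)
    # are excluded so we compare like for like.
    if "SpotUsage" in row["lineItem/UsageType"]:
        spot_cost += float(row["lineItem/UnblendedCost"])
        on_demand_equiv += float(row["pricing/publicOnDemandCost"])

savings = on_demand_equiv - spot_cost
print(f"spot savings: ${savings:.3f} ({savings / on_demand_equiv:.0%})")
```

Run against a real CUR export, the same loop gives you the actual historical discount your Spot usage achieved, which is useful evidence when deciding how far to push Spot adoption.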
The second key method for reducing Amazon EC2 costs is right-sizing, which takes several forms. Start by gathering good, relevant data on your CPU and memory usage; this data is essential for working out your right-sizing plan. You want a history of at least 30 days, and if you have long-term historical data, that is valuable for spotting any seasonal spikes or peaks.
Look at different metrics to understand how your instances are performing at the moment, e.g. max CPU utilization and average max CPU, which is useful for understanding where the peaks occur. You also need to look at RAM utilization, although this tends to be less volatile, and the tools to collect RAM usage are often not deployed in AWS environments, since memory metrics are not collected out of the box the way CPU metrics are.

Once you have these metrics, establish a safety margin that you won't plan for the average max to exceed. Typically we'll use 80% CPU usage, and you don't want regular peaks and spikes above that level; otherwise a genuine spike can cause downtime or serious performance problems, and when you right-size you never want to impact performance. That's an essential criterion to set. Ensure you have the tools to collect and view this data and to identify where the greatest opportunities are; ideally you'll be able to rank right-size recommendations by their potential savings.
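The safety-margin check above is easy to mechanise once you have the metrics. Here is a minimal sketch; the instance names and daily-max CPU figures are made up, and in practice they would come from CloudWatch or your monitoring tool.

```python
# Sketch: flagging right-size candidates against an 80% CPU safety margin.
# The instance names and daily max-CPU samples below are illustrative only.

SAFETY_MARGIN = 0.80  # never plan for average max CPU above this

daily_max_cpu = {
    "web-01": [0.42, 0.51, 0.47, 0.55, 0.49],
    "db-01":  [0.78, 0.85, 0.91, 0.83, 0.88],
}

results = {}
for name, peaks in daily_max_cpu.items():
    avg_max = sum(peaks) / len(peaks)
    # Instances already running near the margin should be left alone;
    # plenty of headroom below it suggests a right-size candidate.
    results[name] = "leave alone" if avg_max > SAFETY_MARGIN else "right-size candidate"
    print(f"{name}: avg max CPU {avg_max:.0%} -> {results[name]}")
```

The same shape of check works for RAM once you have memory metrics flowing; only the threshold and the metric source change.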
Again, we recommend approaching it from an 80/20 perspective. If you have a large number of instances, focus on the top 20% that will typically deliver 80% of the results. In some cases you might find it's the top 10% that deliver 90% of the results, but the principle is the same: once you've tackled those high-potential instances, you can work through the long tail over a longer period without the same urgency.
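Ranking recommendations by potential savings and taking the top slice, as described above, can be sketched in a few lines. The instance names and monthly prices are hypothetical.

```python
# Sketch: rank right-size candidates by estimated monthly saving, then apply
# the 80/20 focus. Names and prices are invented for illustration.

instances = [
    # (name, current $/month, proposed $/month after right-sizing)
    ("web-01",   140.0,  70.0),
    ("db-01",    560.0, 280.0),
    ("batch-01",  90.0,  45.0),
    ("cache-01",  35.0,  35.0),  # already right-sized, zero saving
    ("etl-01",   280.0, 140.0),
]

# Sort by absolute saving, largest first.
ranked = sorted(instances, key=lambda i: i[1] - i[2], reverse=True)

# Tackle the top 20% of the fleet first (at least one instance).
top_20_pct = ranked[: max(1, len(instances) // 5)]
for name, cur, new in top_20_pct:
    print(f"{name}: save ${cur - new:.0f}/month")
```

On a real fleet the candidate list would come from your monitoring data and a pricing lookup, but the prioritisation logic stays this simple.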
Another thing to consider on right-sizing from a CPU perspective is that burstable instances can be an incredibly valuable tool in your arsenal. When you look at your CPU usage, if it's relatively low and perhaps just spiky, you may well find burstable instances are a good fit.
There are a number of rules to follow with burstable instances, so familiarize yourself with how CPU credits work. Ensure your data dashboarding tool shows what the burstable instance charges would be, so you can make informed decisions about them.
You can make significant savings using burstable instances without extensive testing and switching between different instance types, because they use the same underlying architecture and are simply charged differently, based on usage credits.
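To make the credit mechanics concrete, here is a minimal credit-balance simulation. It assumes t3.medium figures (2 vCPUs, 24 credits earned per hour, a 576-credit ceiling, and one credit equalling one vCPU-minute at 100% utilisation); check the AWS burstable-instance documentation for your own instance size.

```python
# Sketch: CPU-credit balance for a burstable instance over a usage pattern.
# Figures assume a t3.medium; treat them as illustrative, not a quote.

EARN_PER_HOUR = 24    # credits accrued per hour on a t3.medium
MAX_BALANCE = 576     # accrual ceiling (24 hours of earned credits)
VCPUS = 2

def simulate(hourly_cpu_utilisation, start_balance=0.0):
    """One CPU credit = one vCPU at 100% for one minute (60 per vCPU-hour)."""
    balance = start_balance
    for util in hourly_cpu_utilisation:
        spent = util * VCPUS * 60           # credits consumed this hour
        balance = min(balance + EARN_PER_HOUR - spent, MAX_BALANCE)
    return balance

# A quiet overnight period (5% CPU for 8 hours) banks credits that then
# fund a busy hour at 90% CPU:
final_balance = simulate([0.05] * 8 + [0.90])
print(f"credits remaining: {final_balance:.1f}")
```

The pattern to look for in your metrics is exactly this one: long quiet stretches that bank credits, punctuated by short spikes that spend them. Sustained high CPU, by contrast, drains the balance and makes burstable the wrong choice.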
Consider all of the different instance families carefully. AWS releases new instance families regularly that bring costs down: the latest generation of a family is nearly always cheaper than the one before, because AWS have a more cost-effective platform in place and like to encourage clients to move to it. This means that simply moving to the latest revision of an instance family can be quite a significant cost saving. You need to make sure the new instance family is tested with your application, but if it runs well on the newer, cheaper instance types, then you can make a big saving just by changing the instance family and keeping the CPU levels the same.
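A quick sketch of the generational saving: the hourly prices below are illustrative on-demand figures for a like-for-like size change between an older and newer general-purpose generation, not a quote; check current AWS pricing for your region.

```python
# Sketch: saving from moving to a newer instance generation at the same size.
# Prices are illustrative on-demand hourly rates, not current AWS pricing.

HOURS_PER_MONTH = 730

old_type, old_rate = "m4.large", 0.100  # older generation
new_type, new_rate = "m5.large", 0.096  # newer generation, same vCPU/RAM shape

monthly_saving = (old_rate - new_rate) * HOURS_PER_MONTH
pct = (old_rate - new_rate) / old_rate
print(f"{old_type} -> {new_type}: ${monthly_saving:.2f}/month ({pct:.0%})")
```

A few percent per instance looks small, but multiplied across a fleet, and stacked with the discount strategy above, generation moves are some of the lowest-risk savings available.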
The other thing to think about carefully when right-sizing is your storage. Often the attached EBS volumes are either the wrong type or too big, and you can save a significant amount just by changing the EBS volume type.
Do you need high data transfers? Are you on the latest volume types? All these things make a big difference to your storage. And of course, if you’ve oversized the storage to begin with when you’ve set up an instance, then often it’s worth going back to check what level you actually need that storage to be at, and then shrink it down at a later date. So having a process in place, once you’ve created an initial volume, to go back and review that, is important to right sizing.
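As an example of the volume-type saving, the sketch below compares gp2 and gp3 per-GB storage costs across a small hypothetical fleet. The per-GB rates are the commonly quoted us-east-1 figures; treat them as illustrative and check current pricing for your region (gp3 also prices IOPS and throughput separately above its baseline, which this sketch ignores).

```python
# Sketch: monthly saving from a gp2 -> gp3 volume-type change.
# Per-GB rates are illustrative us-east-1 figures; IOPS/throughput charges
# above the gp3 baseline are deliberately ignored here.

GP2_PER_GB = 0.10   # $/GB-month
GP3_PER_GB = 0.08   # $/GB-month

volumes_gb = [500, 200, 1000, 100]  # hypothetical fleet of gp2 volumes

total_gb = sum(volumes_gb)
saving = total_gb * (GP2_PER_GB - GP3_PER_GB)
print(f"{total_gb} GB: ${saving:.2f}/month saved moving gp2 -> gp3")
```

Combine the type change with the shrink-and-review process described above and storage often yields one of the easiest wins on the whole right-sizing list.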
That concludes part 1 of our series on How to Reduce Amazon EC2 costs. Look out for our next post where we’ll talk about data transfer charges, autoscaling, data and more.
In the meantime, if you’d like to learn what specific steps to take to lower your own organisation’s EC2 spend, and that of other AWS resources, then book a free call with us here.