The title for this blog post is a direct reference to Latency Numbers Every Programmer Should Know. There are several versions of those numbers available now, and I could not find the original author with certainty. Some people attribute the original numbers to Jeff Dean.

When working on a project that will reach a certain scale, you need to balance several concerns. What assumptions am I making and how do I confirm them? How can I get to market quickly? Will my design support the expected scale?

One of the issues associated with scale is the cost of your infrastructure. Cloud providers allow you to provision thousands of CPUs and store terabytes of data at the snap of a finger. But that comes at a cost, and what is negligible for a few thousand users might burn a hole in your budget when you reach millions of users.

In this article, I’m going to list reference numbers I find useful to keep in mind when considering an architecture. Those numbers are not meant for accurate budget estimation. They’re here to help you decide if your design makes sense or is beyond what you could ever afford. So consider the orders of magnitude and relative values rather than the absolute values.

Also consider that your company may get discounts from AWS, and those can make a massive difference.

Compute

What’s the cost of a CPU these days? I used the wonderful ec2instances.info interface to extract the median price of a vCPU.

You can get the source data out of their GitHub repo. I copied it and processed it using a Python script that you can find on GitHub. All prices are for the eu-west-1 region.
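If you want to reproduce the median yourself, the core of the computation is just a normalization to a 4-ECU “CPU” followed by a median. Here’s a minimal sketch, assuming the ec2instances.info JSON dump and field names like `ECU` and `pricing` (the actual script in the repo differs in the details):

```python
import json
import statistics

HOURS_PER_MONTH = 730  # average number of hours in a month

with open("instances.json") as f:
    instances = json.load(f)

prices_per_cpu = []
for inst in instances:
    try:
        hourly = float(inst["pricing"]["eu-west-1"]["linux"]["ondemand"])
        ecus = float(inst["ECU"])
    except (KeyError, TypeError, ValueError):
        continue  # skip unpriced or variable-ECU (burstable) instances
    # 4 ECUs ~= 1 modern CPU, so normalize prices to a 4-ECU unit
    prices_per_cpu.append(hourly / ecus * 4 * HOURS_PER_MONTH)

print(f"Median monthly cost of a modern vCPU: ${statistics.median(prices_per_cpu):.0f}")
```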

|                                                     | Median monthly cost |
|-----------------------------------------------------|---------------------|
| 1 modern vCPU (4 AWS ECUs), on-demand               | 58 $/month          |
| With 1-year convertible reservation (all up front)  | 43 $/month          |
| With 3-year convertible reservation (all up front)  | 30 $/month          |
| With spot pricing (estimated)                       | 30 $/month          |


I estimated spot pricing from anecdotal data gathered from various sources, as spot prices vary within a day and I could not find a reliable data source for them.

AWS represents the computing power of its machines in Elastic Compute Units (ECUs), and 4 ECUs correspond roughly to the power of one modern CPU. So the prices above are for one CPU or core, not for one instance.
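To get a feel for what those discounts mean for a whole fleet, here is a back-of-the-envelope sketch using the rounded medians from the table above (the 16-vCPU fleet size is a made-up example):

```python
# Back-of-the-envelope fleet cost, using the rounded per-vCPU medians above.
ON_DEMAND_PER_VCPU = 58    # $/month
RESERVED_3Y_PER_VCPU = 30  # $/month, 3-year convertible, all upfront
vcpus = 16                 # arbitrary example fleet

print(f"On-demand:   ${ON_DEMAND_PER_VCPU * vcpus:,}/month, ${ON_DEMAND_PER_VCPU * vcpus * 12:,}/year")
print(f"3y reserved: ${RESERVED_3Y_PER_VCPU * vcpus:,}/month, ${RESERVED_3Y_PER_VCPU * vcpus * 12:,}/year")
# On-demand:   $928/month, $11,136/year
# 3y reserved: $480/month, $5,760/year
```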

Here’s the price of 1 ECU in $ per hour across all instance types I looked at:

[Chart: price of 1 ECU in $ per hour]

And here’s how on-demand compares with 1 year and 3 year reservations (both convertible, upfront payments):

[Chart: how many more reserved instances you can pay for the same price as on-demand]

Storage

So you want low latency and high throughput, and are planning to store everything in Redis? Then on top of those CPU costs, you’ll need to pay for RAM.

I used the same approach to extract the median price of 1 GB of RAM on EC2. ElastiCache is roughly twice as expensive on-demand, but its prices drop quickly when you look at reserved instances.

|                                                          | Median monthly cost |
|----------------------------------------------------------|---------------------|
| 1 GB RAM, on-demand                                      | 10 $/month          |
| 1 GB RAM, 1-year convertible reservation (all up front)  | 8 $/month           |
| 1 GB RAM, 3-year convertible reservation (all up front)  | 5 $/month           |
| 1 GB SSD                                                 | 0.11 $/month        |
| 1 GB hard disk                                           | 0.05 $/month        |
| 1 GB on S3                                               | 0.02 $/month        |
| 1 GB on S3 Glacier                                       | 0.004 $/month       |
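To put these tiers side by side, here is a rough sketch of what keeping 1 TB in each of them costs per month, using the median prices from the table (1 TB approximated as 1,000 GB):

```python
# Rough monthly cost of keeping 1 TB in each storage tier,
# using the median prices from the table above (in $/GB/month).
prices_per_gb = {
    "RAM (on-demand EC2)": 10,
    "SSD":                 0.11,
    "Hard disk":           0.05,
    "S3":                  0.02,
    "S3 Glacier":          0.004,
}

TB_IN_GB = 1000
for tier, price in prices_per_gb.items():
    print(f"{tier:<20} ${price * TB_IN_GB:,.0f}/month")
# RAM is ~500x the price of S3: 1 TB in memory costs about $10,000/month,
# before you even pay for the CPUs running the in-memory database.
```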

While this is the pure storage cost, you also need to look at the usage patterns for your data. How much CPU will you need to run that in-memory database 24/7?

Same for S3: how much will you pay for read and write requests? I’ve seen workloads where the storage cost on S3 was negligible, but the cost of writing a large number of objects drove the team to write their own filesystem on top of S3.
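As a rough illustration of why request costs matter, assume you write one billion 10 KB objects to S3 in a month, at roughly $0.005 per 1,000 PUT requests (an assumed figure; check the current S3 pricing page for your region):

```python
# Why lots of small objects hurt: storage vs request cost for
# one billion 10 KB objects written to S3 in a month.
# Assumes ~$0.005 per 1,000 PUT requests (check current S3 pricing).
objects = 1_000_000_000
object_size_gb = 10 / 1_000_000       # 10 KB expressed in GB
storage_per_month = objects * object_size_gb * 0.02  # $0.02/GB/month from the table
put_requests = objects / 1000 * 0.005                 # cost of writing the objects once

print(f"Storage:      ${storage_per_month:,.0f}/month")  # ~$200
print(f"PUT requests: ${put_requests:,.0f}")             # ~$5,000
# Batching records into fewer, larger objects cuts the request bill by orders
# of magnitude, which is why teams end up building their own layer on top of S3.
```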

Bandwidth

A few comments on Hacker News pointed out that I left the bandwidth costs out. Indeed, if you are serving data to end users or need cross-region replication, you need to look into those costs as well.

| Type of data transfer                | Cost of transferring 1 GB |
|--------------------------------------|---------------------------|
| EU/US region to any other region     | 0.02 $/GB                 |
| APAC region to any other region      | 0.09 $/GB                 |
| EU/US region to Internet             | 0.05 $/GB                 |
| APAC region to Internet              | 0.08 $/GB                 |
| Between two AZs in the same region   | 0.01 $/GB                 |
| Inside the same AZ                   | Free                      |
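As with storage, a quick back-of-the-envelope calculation tells you whether bandwidth will dominate your bill. The traffic volumes below are made-up examples; the per-GB prices come from the table above:

```python
# Back-of-the-envelope bandwidth bill, using the per-GB prices from the table.
egress_to_users = 50_000   # GB/month served from an EU region to the Internet
cross_region = 10_000      # GB/month replicated to another region

print(f"EU region -> Internet:    ${egress_to_users * 0.05:,.0f}/month")  # $2,500
print(f"Cross-region replication: ${cross_region * 0.02:,.0f}/month")     # $200
# At this volume, egress alone costs as much as 80+ reserved vCPUs per month.
```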