Microsoft Azure offers near-infinite scalability, virtually unlimited capacity, strong performance and very quick provisioning times. To take advantage of these benefits, however, teams need to plan ahead and understand the potential pitfalls and challenges. One of the more significant differences between developing on-premises applications and cloud applications is the direct correlation between the choices made while building an application and its support costs after deployment. Because of the Microsoft Azure pricing model, every inefficient architectural decision and every inefficient line of code shows up as an extra line item on your Azure invoice.
The following metrics affect your monthly Microsoft Azure bill:
- Number of hours you’ve reserved a virtual machine (VM)—meaning you pay for a deployed application, even if it isn’t currently running
- Number of CPUs in a VM
- Bandwidth, measured per GB in / out
- Amount of storage used, in GB
- Number of transactions on storage
- Database size on Azure SQL Database
Limiting Virtual Machine Count
While limiting the number of VMs you have running is a good way to save costs, for Web Roles it makes sense to have at least two VMs for availability and load balancing. Use the Windows Azure Diagnostics API to measure CPU usage, the number of HTTP requests and memory usage in these instances, and scale your application down when appropriate; a configuration sketch follows.
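As a minimal sketch, this is how a role's OnStart method might configure the Diagnostics Monitor to collect the CPU counter that such scaling decisions depend on. The 30-second sample rate is an assumption for illustration:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default diagnostics configuration for this role.
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        // Sample overall CPU usage; scaling logic can read these values from storage later.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30) // assumed sample rate
        });
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}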
Every role instance running on Azure adds its own hours to the monthly bill. For example, having three role instances running on average (sometimes two, sometimes four) will be 25 percent cheaper than running four instances around the clock with almost no workload.
For Worker Roles, it also makes sense to have at least two role instances to do background processing. This will help ensure that when one is down for updates or a role restart occurs, the application is still available.
Microsoft Azure offers five sizes of VM: extra small (XS), small (S), medium (M), large (L) and extra large (XL). The differences between each size are the number of available CPUs, the amount of available memory and local storage, and the I/O performance. It's best to think about the appropriate VM size before actually deploying to Azure. You can't change it once you're running your application.
In compute terms: 1 XL = 2 L = 4 M = 8 S.
When you receive your monthly statement, you'll notice that all compute hours are converted into small instance hours on the bill. For example, one clock hour of a medium compute instance is presented as two small compute instance hours at the small instance rate. If you have two medium instances running for a 720-hour month, you're billed for 720 hours x 2 instances x 2 = 2,880 small instance hours.
Consider this when sizing your VMs. You can realize almost the same compute power using four small instances, billed at 720 hours x 4 = 2,880 small instance hours: the price is the same. But small instances let you scale down to two when appropriate, bringing you to 720 hours x 2. If you don't need more CPUs, more memory or more storage, stick with small instances because they scale on a more granular basis than larger instances. The sketch below works through the arithmetic.
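A small worked example of that conversion, assuming a 720-hour (30-day) month and the size factors from the 1 XL = 2 L = 4 M = 8 S equivalence above:

using System;

class InstanceHourMath
{
    // Azure converts every compute hour into small instance hours:
    // small = 1, medium = 2, large = 4, extra large = 8.
    static int BilledSmallHours(int clockHours, int instanceCount, int smallEquivalents)
    {
        return clockHours * instanceCount * smallEquivalents;
    }

    static void Main()
    {
        Console.WriteLine(BilledSmallHours(720, 2, 2)); // two mediums: 2880 hours
        Console.WriteLine(BilledSmallHours(720, 4, 1)); // four smalls: 2880 hours, same price
        Console.WriteLine(BilledSmallHours(720, 2, 1)); // scaled down to two smalls: 1440 hours
    }
}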
When you’re deploying applications to staging or production and turning them off after use, don’t forget to un-deploy the application as well. You don’t want to pay for any inactive applications. Also remember to scale down when appropriate. This has a direct effect on monthly operational costs.
When scaling up and down, it's best to keep a role instance running for at least an hour because you pay by the hour. Spin up multiple worker threads in a Worker Role, as sketched below, so one Worker Role can perform multiple tasks instead of just one. If you don't need more CPUs, more memory or more storage, stick with small instances. And again, be sure to un-deploy your applications when you're not using them.
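A minimal sketch of that multi-threaded Worker Role pattern; the queue names and the ProcessQueue handler are hypothetical:

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // One role instance services several task types instead of one instance per task.
        string[] queueNames = { "orders", "emails", "reports" }; // hypothetical queue names
        foreach (string queueName in queueNames)
        {
            string name = queueName; // copy to avoid capturing the loop variable
            new Thread(() => ProcessQueue(name)).Start();
        }
        Thread.Sleep(Timeout.Infinite); // keep the role instance alive
    }

    private void ProcessQueue(string queueName)
    {
        // Hypothetical handler: poll the queue and process messages in a loop.
    }
}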
Bandwidth, Storage and Transactions
Consider a scenario in which you deploy applications across multiple Microsoft Azure regions. Let's say you have a Web Role running in the "North America" region and a storage account in the "West Europe" region. In this situation, the bandwidth used for communication between the Web Role and storage will be billed.
If the Web Role and storage were located in the same region (both in “North America,” for example), there would be no bandwidth bill for communication between the Web Role and storage. Hence, when designing geographically distributed applications, it’s best to keep coupled services within the same Azure region.
When using the Microsoft Azure Content Delivery Network (CDN), you can take advantage of another interesting cost-reduction measure. The CDN is metered in the same way as blob storage, meaning per GB stored per month. When a request reaches the CDN for content it hasn't cached yet, the CDN grabs the original content from blob storage (consuming bandwidth, which is billed) and caches it locally.
If you set your cache expiration too short, the CDN will consume more bandwidth because its cache refreshes itself from blob storage more frequently. If cache expiration is set too long, Azure stores the content in the CDN for a longer time and bills per GB stored per month. Think this through for every application so you can determine the best cache expiration time; the expiration is controlled through the Cache-Control header on the underlying blob, as sketched below.
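A hedged sketch of setting that header with the StorageClient library; the connection string, container and blob names, and the one-day max-age are assumptions:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString); // assumed connection string
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlob blob = client.GetContainerReference("images").GetBlobReference("header.png"); // hypothetical names
// Cache for one day; tune max-age per application as discussed above.
blob.Properties.CacheControl = "public, max-age=86400";
blob.SetProperties();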
The Azure Diagnostics Monitor also uses Azure storage for diagnostic data such as performance counters, trace logs, event logs and so on. It transfers this data from your application to storage at a pre-specified interval. Transferring every minute increases the transaction count on storage, leading to extra costs. Setting the interval to, say, every 15 minutes results in far fewer storage transactions. The drawback, however, is that the diagnostics data is always at least 15 minutes old.
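Continuing the earlier OnStart sketch, the transfer interval is a single property on the diagnostics configuration:

using System;
using Microsoft.WindowsAzure.Diagnostics;

DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
// Transfer collected data every 15 minutes instead of every minute:
// fewer storage transactions, at the price of staler diagnostics.
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(15);
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(15);
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);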
Also, the Azure Diagnostics Monitor doesn't clean up its own data. If you don't do this yourself, there's a good chance you'll be billed for a lot of storage containing nothing but old, expired diagnostic data.
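One possible cleanup job, sketched under the assumption that IIS logs land in the standard wad-iis-logfiles container and that a seven-day retention window is acceptable:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

CloudBlobClient client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient(); // assumed connection string
CloudBlobContainer container = client.GetContainerReference("wad-iis-logfiles");
var options = new BlobRequestOptions { UseFlatBlobListing = true };
foreach (CloudBlob blob in container.ListBlobs(options).OfType<CloudBlob>())
{
    // Delete diagnostic blobs older than the retention window (seven days is an assumption).
    if (blob.Properties.LastModifiedUtc < DateTime.UtcNow.AddDays(-7))
        blob.Delete();
}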
Transactions are billed per 10,000. That may sound like a generous unit, but in practice you'll pay for them. Every operation on a storage account is a transaction: creating a blob container, listing the contents of a blob container, storing data in a table on table storage, peeking for messages in a queue. When storing a blob, for example, you would typically first check whether the blob container exists, create it if it doesn't, and then store the blob. That's at least two, possibly three, transactions.
The same goes for hosting static content on blob storage. If your Web site hosts 40 small images on one page, that means 40 transactions per page view, which adds up quickly for high-traffic applications. By ensuring a blob container exists once at application startup and skipping that check on every subsequent operation, you can cut the number of transactions almost in half, as in the sketch below. Be smart about this and you can lower your bill.
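A minimal sketch of that startup-only existence check, assuming the StorageClient library and a hypothetical "images" container:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobContainers
{
    public static CloudBlobContainer Images;

    // Call once from Application_Start or RoleEntryPoint.OnStart.
    public static void Initialize(string connectionString)
    {
        CloudBlobClient client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
        Images = client.GetContainerReference("images"); // hypothetical container name
        Images.CreateIfNotExist(); // one transaction at startup, not one per blob operation
    }
}

All later blob operations can then use BlobContainers.Images directly, without an existence check per call.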
Indexes Can Be Expensive
Azure SQL Database is an interesting product. You can have a database of 1GB, 5GB, 10GB, 20GB, 30GB, 40GB, or 50GB at an extremely low monthly price.
In some situations, it can be more cost-effective to distribute your data across several Azure SQL databases rather than keep one large database. For example, you could have a 5GB and a 10GB database instead of a 20GB database with 5GB of unused capacity. Done smartly, and when it suits your data, this kind of strategic partitioning will lower your bill.
Every object consumes storage. Tables and indexes can consume a lot of database storage capacity: a large table may occupy 10 percent of a database, and a single index may consume 0.5 percent of it.
If you divide the monthly cost of your Azure SQL Database subscription by the database size, you get the cost per storage unit. Then think about the objects in your database; the sketch below shows one way to list how much each table and index actually occupies. If index X costs you 50 cents per month and doesn't really add much performance, simply throw it away. Half a dollar is not that much, but eliminating a few tables and indexes adds up.
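One way to get per-object sizes is the sys.dm_db_partition_stats view, which Azure SQL Database exposes; a sketch, with the connection string assumed:

using System;
using System.Data.SqlClient;

const string sizeQuery = @"
SELECT o.name AS table_name, i.name AS index_name,
       SUM(p.reserved_page_count) * 8.0 / 1024 AS size_mb
FROM sys.dm_db_partition_stats p
JOIN sys.objects o ON o.object_id = p.object_id
LEFT JOIN sys.indexes i ON i.object_id = p.object_id AND i.index_id = p.index_id
GROUP BY o.name, i.name
ORDER BY size_mb DESC;";

using (var connection = new SqlConnection(connectionString)) // assumed connection string
using (var command = new SqlCommand(sizeQuery, connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Combine size_mb with your cost per storage unit to price each object.
            Console.WriteLine("{0} / {1}: {2:F1} MB", reader[0], reader[1], reader[2]);
        }
    }
}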
There is a strong movement in application development to no longer use stored procedures in a database. Instead, the trend is to use object-relational mappers and perform a lot of calculations on data in the application logic.
There's nothing wrong with that, but it gets interesting when you think about Microsoft Azure and Azure SQL Database. Performing data calculations in the application may require extra Web Role or Worker Role instances. If you move those calculations into Azure SQL Database, you save a role instance. Because Azure SQL Database is metered on storage rather than CPU usage, you effectively get free CPU cycles in your database, as the sketch below illustrates.
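A hedged illustration of the trade-off, using a hypothetical Orders table; the aggregation runs on database CPU instead of inside a role instance:

using System.Data.SqlClient;

// Instead of pulling every row into the role and summing there,
// push the aggregation into the database, where CPU isn't metered.
using (var connection = new SqlConnection(connectionString)) // assumed connection string
using (var command = new SqlCommand(
    "SELECT CustomerId, SUM(Amount) AS Total FROM dbo.Orders GROUP BY CustomerId", // hypothetical table
    connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        // Consume the pre-aggregated results here.
    }
}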
Developer Impact
The developer who's writing the code can have a direct impact on costs. For example, when building an ASP.NET Web site that Microsoft Azure will host, you can share session state across role instances by using the Azure storage-backed session state provider. This provider stores session data in the Azure Table service, where the amount of storage used, the bandwidth consumed and the transaction count are all metered for billing. Consider the following code snippet, used to determine a user's language on every request:
if (Session["culture"].ToString() == "en-US") {
// .. set to English ...
}
if (Session["culture"].ToString() == "nl-BE") {
// .. set to Dutch ...
}
Nothing wrong with that? Technically, no, but from a cost perspective you can optimize it by 50 percent:
string culture = Session["culture"].ToString();
if (culture == "en-US") {
// .. set to English ...
}
if (culture == "nl-BE") {
// .. set to Dutch ...
}
Both snippets do exactly the same thing, but the first reads session data twice while the second reads it only once: a 50 percent win in bandwidth and transaction count. The same is true for queues. Reading one message at a time 20 times is more expensive than reading 20 messages at once, as the sketch below shows.
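A minimal sketch of batched queue reads with the StorageClient library; the connection string and queue name are assumptions:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

CloudQueue queue = CloudStorageAccount.Parse(connectionString) // assumed connection string
    .CreateCloudQueueClient()
    .GetQueueReference("tasks"); // hypothetical queue name

// One GetMessages call is a single storage transaction for up to 20 messages,
// versus 20 transactions for 20 individual GetMessage calls.
foreach (CloudQueueMessage message in queue.GetMessages(20))
{
    // ... process the message ...
    queue.DeleteMessage(message);
}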
Sources:
This article was written using the following resources:
- Cost Architecting for Windows Azure by Maarten Balliauw, TechNet Magazine (http://technet.microsoft.com/)
- Five tips for creating cost effective Windows Azure applications by Igor Papirov (http://blog.paraleap.com/)