Archive | December, 2010

Cloud Escape Velocity – Switching Cloud Providers

18 Dec

According to Wikipedia.org, Escape Velocity is the speed needed to “break free” from a gravitational field without further propulsion.  Data Gravity, as explained in THIS previous post, is what attracts and builds more Data, Applications, and Services on Clouds.  Data Gravity is also what raises the Escape Velocity required to move to another Cloud.

Some background on why this post is timely:

A few days ago Amazon announced a new AWS service for importing VMware disk images (VMDKs) into EC2.  VMware already offered a way to convert EC2 instances into Workstation VMs through its Converter tool, with a second-pass conversion into ESX VMs.  While all of this sounds wonderful and does have value, it brings to light an entirely different issue: only Stateful / Fully encapsulated applications can be moved around in this way.

Below are examples of how sources of Cloud Gravity (App, Service, and Data Gravity combined) act on your specific Application.

If someone selects a Cloud provider and writes an application leveraging anything more than a handful of VMs, Data Gravity will make it virtually impossible to move to a new/different Cloud provider.  Don’t believe it?

See the Diagrams Below:

Here is a diagram of an app that has a Low Escape Velocity because of Lower Cloud Gravity:

Cloud Escape Velocity with Low Gravity

Below is a diagram of an app that has a High Escape Velocity because of High Cloud Gravity:

Cloud Escape Velocity with High Gravity

Some potential dependencies include:

Database with a specific API

Web Worker that serves as the web interface and uses internal Authentication (your user logins are here!)

Application code that uses the Database’s and Web Worker’s specific APIs and/or depends on Low Latency and High Throughput access to them (a sketch of this coupling follows below).
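
To make that coupling concrete, here is a minimal sketch of application code pinned to provider-specific APIs.  The ProviderDB and ProviderQueue classes are hypothetical stand-ins for a vendor SDK, invented purely for illustration, not a real library:

```python
# Hypothetical stand-ins for a cloud vendor's SDK, invented for illustration.
# Every call below assumes this provider's API shape, consistency model,
# and low-latency proximity to the data -- the coupling that raises
# Escape Velocity.

class ProviderDB:
    """Toy stand-in for a provider-specific database API."""
    def __init__(self):
        self.tables = {}

    def put_item(self, table, item):
        self.tables.setdefault(table, {})[item["id"]] = item

    def get_item(self, table, key):
        return self.tables.get(table, {}).get(key)

class ProviderQueue:
    """Toy stand-in for a provider-specific queue service."""
    def __init__(self):
        self.messages = []

    def send_message(self, body):
        self.messages.append(body)

db = ProviderDB()
events = ProviderQueue()

def handle_signup(username, password_hash):
    # Moving clouds means rewriting both of these calls (and revisiting
    # the latency assumptions baked in around them).
    db.put_item("users", {"id": username, "pw": password_hash})
    events.send_message({"event": "signup", "user": username})

handle_signup("alice", "5f4dcc3b")
print(db.get_item("users", "alice"))
```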

Here are a few additional things to think about:

– The longer (more time) an Application stays in a specific Cloud, the more difficult it is to move.  Why?  Data Gravity increases as Mass (the data being stored) grows.  Imagine accumulating hundreds of GBs of Data: how easy will it be to shuffle/transfer that much data around?  (See the back-of-the-envelope sketch after this list.)

– The more provider APIs and Services you depend on, the harder it is to move.  Why?  Because there are only two paths that can be taken in a move.  The first is to find another provider that has the exact same set of APIs and Services (this will limit your choices).  The second is to change or rewrite your application to take advantage of the new Cloud provider’s APIs and/or Services.

– Different providers charge differently for the consumption of the same resources.  Say your current provider gives applications free usage of queues, while the provider you are looking to move to charges after the first X messages on the queue.  Now what do you do?  You will either pay more when you move, rewrite your application to fit the new provider’s model, or pick another provider that has free queue usage.  (A sketch of this math also follows the list.)

– QoS guarantees differ from Cloud provider to Cloud provider.  Some Cloud providers offer SLAs with reimbursements for outages; others only offer best effort.  Some providers offer tiered Services; others only offer a single tier.  What happens if you want to move and you can’t get the minimum level of QoS that you need?
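
Here is a back-of-the-envelope sketch of the data-transfer point above.  The 100 Mbps link speed is an assumption for illustration, not a measurement of any provider:

```python
# Rough time to move accumulated data out of a cloud over a WAN link.
# 100 Mbps is an assumed link speed, not a provider figure; a real move
# also pays per-GB egress fees on top of the time.
def transfer_hours(data_gb, link_mbps):
    bits = data_gb * 8 * 1000**3          # decimal GB -> bits
    return bits / (link_mbps * 10**6) / 3600

for gb in (100, 500, 1000):
    print(f"{gb:5d} GB over 100 Mbps: about {transfer_hours(gb, 100):5.1f} hours")
# -> roughly 2.2 h for 100 GB, 11.1 h for 500 GB, 22.2 h for 1 TB
```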
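
And a sketch of the queue-pricing point.  The free-tier threshold and rate below are invented to show the shape of the problem, not any provider’s actual prices:

```python
# Invented pricing: provider A's queues are free; provider B charges
# after the first million messages per month. All numbers are placeholders.
FREE_MESSAGES = 1_000_000
USD_PER_MILLION = 1.00

def provider_b_cost(messages):
    billable = max(0, messages - FREE_MESSAGES)
    return billable / 1_000_000 * USD_PER_MILLION

for msgs in (500_000, 5_000_000, 50_000_000):
    print(f"{msgs:>11,} msgs/month: A $0.00 vs. B ${provider_b_cost(msgs):.2f}")
```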

This is NOT an attempt to dissuade anyone from using Public Clouds (they are incredibly valuable and powerful), but I would like more people to go in with their eyes wide open.

Microsoft Azure Cloud – Top 20 Lessons Learned about MSFT’s PaaS

13 Dec

Two weeks ago, Rob Hirschfeld (@Zehicle) and I (@McCrory) had the benefit of intensive Azure training at Microsoft HQ to support Dell’s Azure Stamp.

We’ve assembled a top 20 list of things to know about programming for Azure (and really any PaaS-leaning cloud):

  1. If you want performance, optimize to reduce fees. Azure (and any cloud) is architected to penalize you if you use its resources poorly. The challenge is to fix this before your boss gets the tab for your unenlightened design decisions.
  2. Coding .NET on Azure is easy; architecting for Azure requires learning. Clouds put things in different places than you are used to and the rules are different. Expect a learning curve.
  3. Partitioning = parallelism. Learn to love partitions in all their forms, because your app will be throttled if you throw everything into a single partition! On the upside, each partition operates in parallel and, even better, they usually don’t cost extra (SQL is the exception). See the first sketch after this list.
  4. Roles are flexible. You can run web servers (Apache, etc) on a worker and worker tasks on a web role. This is a good way to save some change since you pay per role instance. It’s counter to separation of concerns, but financially you should also combine workers into a single role where possible.
  5. Understand walking deployments. You can (and should) have simultaneous versions of the code operating against the same data so that you can roll upgrades (a la Timothy Fitz/Eric Ries) to reduce risk without reducing performance. You should expect your data schema to simultaneously span multiple code versions.
  6. Learn about Update Domains (UDs). Update Domains allow rolling upgrades and changes to Applications and Services. They are part of how you partition your overall application. If you’re planning a VIP swap deployment, then you won’t care.
  7. Each role = ONE external IP. You can have many VMs backing each role and Azure will load balance between them so you can scale out each role. Think of each role as a clonable entity: there will be at least 1 and more can be added if you want to scale.
  8. Understand the difference between VIP and DIP. VIPs are Virtual IPs: external, public, and metered. DIPs are internal, private, and load balanced. Azure provides an API to discover your DIPs – do not assume you know them, because they are DYNAMIC IPs. Azure won’t let you see other DIPs inside the system.
  9. Azure has rich diagnostics, but beware. Azure leverages the existing diagnostics built into the system, but has to get the data off box since instances are volatile. This means that problems can be hard to isolate, while excessive logging can impact performance and generate fees. Microsoft lets you target individual systems for elevated logging levels. You can also Terminal Server into a VM for troubleshooting (with caution).
  10. The new Azure admin console rocks. Take your pick between Silverlight or MMC Snap-in.
  11. Everything goes into Azure Storage. Learn to love it. Queues -> storage. Tables -> storage. Blobs -> storage. Logging -> storage. Code Repo -> storage. vDisk -> storage. SQL -> SQL (they march to their own drummer).
  12. Queues are essential, but tricky. Learn the meaning of idempotent, because using queues requires you to handle failures and timeouts. The scary part is that it will work nicely until you exceed some limits and then you’ll experience cascading failure. Whee! Oh yeah, and queues require polling (which stinks as a notification model). See the second sketch after this list.
  13. SQL Azure is mostly just like MS SQL. Microsoft did a smart thing in keeping Cloud SQL highly compatible with Local SQL. The biggest catch is the limit on partition size. If you embrace the size limits you will get better performance. So stop pushing BLOBs into databases and start sharding.
  14. Duplicating data in tables will improve performance. This has to do with how partitions and keys operate but is an interesting architecture for NoSQL – stage data for use. Don’t be afraid to stage the same data in multiple ways. It may be faster/cheaper to write data twice if it becomes easier to find when you search it 1000s of times.
  15. Table data can be “warmed up.” Storage has logic that makes frequently accessed items faster (sort of like a cache ;). If you can anticipate load spikes then you should warm the data just before the spike.
  16. Storage billing is both amount and transactions. You can get burned on a small, but busy set of data. Note: you will pay even if you 404 a request.
  17. Azure has a CDN. Leveraging Microsoft’s Content Delivery Network (CDN) will improve performance for your users with small, low latency, high request items. You need to change your URLs for those assets. Best practice is to use some versioning in the URI so that you can force changes. Remember, CDN is SLOWER for the first hit when the data is not in cache so avoid CDN for low volume assets!
  18. Provisioning time is not instant. Azure needs anywhere from 1-3 minutes to spin a new instance of a role. Build this lag into your architecture and dynamic scale plans. New databases and partitions are fast.
  19. The VM Role is maintained by YOU. Using the VM role is a handy shortcut, but it has a long list of gotchas. Some of note: 1) the VM can be “reset” to the last VM image state that you uploaded, 2) you are responsible for VM OS upgrades and patches, 3) VMs must be clonable because they will operate in parallel.
  20. Azure supports more than .NET. You can setup anything in a worker (and now VM) role, but there are nuances to doing this effectively. You really need to understand how Azure works and had better be ready to crack open Visual Studio for some things even if you’re writing in Java.
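
Here is a toy sketch of the partitioning and duplication ideas from items 3 and 14.  Azure Table Storage really does address every entity by (PartitionKey, RowKey); the dict below is our own stand-in for it, not the Azure SDK:

```python
# Toy model of items 3 and 14. Azure Table Storage addresses every entity
# by (PartitionKey, RowKey); this dict is our own stand-in, not the SDK.

table = {}  # {(partition_key, row_key): entity}

def insert(partition_key, row_key, entity):
    table[(partition_key, row_key)] = entity

def insert_order(order):
    # Partition by customer: load spreads across many partitions, which
    # operate in parallel instead of throttling one hot partition.
    insert(order["customer"], order["id"], order)

def insert_order_by_date(order):
    # Item 14: stage the SAME data a second way so date-range lookups
    # stay within a single partition. Writing twice buys cheap reads.
    insert(order["date"], order["customer"] + "-" + order["id"], order)

order = {"customer": "c42", "id": "o1001", "date": "2010-12-13", "total": 19.99}
insert_order(order)
insert_order_by_date(order)
print(len(table))  # 2 copies, 2 partitions, 2 cheap access paths
```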
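
And for item 12, a minimal sketch of what an idempotent consumer looks like.  The delivery semantics modeled here are generic at-least-once queue behavior, our own illustration rather than the Azure storage client:

```python
# Toy model of item 12: any at-least-once queue can deliver a message
# twice, e.g. when a visibility timeout expires before the first consumer
# finishes. The dedup set would be a durable store in real life; an
# in-memory set is just for illustration.

processed_ids = set()

def charge_customer(message):
    print("charging", message["customer"], message["amount"])

def handle(message):
    if message["id"] in processed_ids:
        return                      # duplicate delivery: safely ignored
    charge_customer(message)        # the side effect happens once
    processed_ids.add(message["id"])

msg = {"id": "m-1", "customer": "c42", "amount": 19.99}
handle(msg)
handle(msg)  # redelivery is harmless -- that is what idempotent means
```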

We hope this list helps you navigate Azure deployments. No matter what cloud you use, understanding Azure’s architecture will help you write better cloud scale applications.

We’d love to hear your suggestions and recommendations!

Mirrored on both blogs: Dave McCrory’s Blog & Rob Hirschfeld’s Blog

Data Gravity – in the Clouds

7 Dec

Today Salesforce.com announced Database.com at Dreamforce.  I realized that many could be wondering why they decided to do this and, more to the point, why now?

The answer is Data Gravity.

Consider Data as if it were a Planet or other object with sufficient mass.  As Data accumulates (builds mass), there is a greater likelihood that additional Services and Applications will be attracted to it. This is the same effect Gravity has on objects around a planet.  As the mass or density increases, so does the strength of gravitational pull.  As things get closer to the mass, they accelerate toward it at ever-increasing velocity.  This analogy, applied to Data, is what is pictured below.

Data Gravity

Services and Applications can have their own Gravity, but Data is the most massive and dense, therefore it has the most Gravity.  Data, if large enough, can be virtually impossible to move.

What accelerates Services and Applications toward each other and toward Data (the Gravity)?

Latency and Throughput, which act as accelerants in building a stronger and stronger reliance, or pull, on each other.  This is the very reason that VMforce is so important to Salesforce’s long-term strategy.  The diagram below shows the accelerant effect of Latency and Throughput: the assumption is that the closer you are (i.e. in the same facility), the higher the Throughput and the lower the Latency to the Data, and the more reliant those Applications and Services will become on Low Latency and High Throughput.

Note:  Latency and Throughput apply equally to both Applications and Services.

How does this all relate back to Database.com?  If Salesforce.com can build a new Data Mass that is general purpose, but still close in locality to its other Data Masses and App/Service properties, it will be able to grow its business and customer base that much more quickly.  It also enables VMforce to store data outside the construct of ForceDB (Salesforce’s core database), enabling new Adjacent Services with persistence.

The analogy holds: just as your weight differs from one planet to another, Services and Applications (compute) have different weights depending on Data Gravity and which Data Mass(es) they are associated with.
Here is a 3D video depicting what I diagrammed at the beginning of the post in 2D.

More on Data Gravity soon.  (There is a formula in this somewhere; a speculative sketch of where it might start follows.)
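
If you want to guess where that formula might begin, the natural template is Newton’s law of universal gravitation, with Latency and Throughput standing in for distance.  To be clear, this is a speculative sketch of the analogy above; every symbol here is an assumption, not a worked-out result:

```latex
% Newton's law, the shape the analogy borrows:
%   F = G \frac{m_1 m_2}{r^2}
% A speculative Data Gravity analogue (all symbols are assumptions):
%   m_data = mass of the Data, m_app = mass of the App/Service,
%   d_eff  = an "effective distance" that shrinks as Latency drops
%            and Throughput rises.
F_{\mathrm{DataGravity}} \sim G \,\frac{m_{\mathrm{data}}\, m_{\mathrm{app}}}{d_{\mathrm{eff}}^{2}},
\qquad d_{\mathrm{eff}} \propto \frac{\mathrm{Latency}}{\mathrm{Throughput}}
```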

Public Cloud Comparison and Calculator v2

5 Dec

After some time away from the Public Cloud Compute Comparison that I did a couple of months ago (which got X hits), I decided to update it based on feedback and new ideas.  What follows is a brief walkthrough with instructions on how to use the Calculator.

Before I go any further, a brief disclaimer:  I do not warrant the accuracy of this Comparison and Calculator; it may have errors and omissions (all unintentional, if they exist).  I also take no responsibility if your bill turns out to be something very different from what the Calculator shows.  And finally, I’m employed by Dell, which has relationships with Microsoft, Joyent, and Amazon, and possibly with the others without my being aware of it.

First, let’s cover the updated Compute Comparison:

I’ve updated the Compute sheet with Terremark as an additional provider; if you would like others added, simply add a request by commenting at the bottom of this post.  Also added is the ability to calculate/estimate your costs by using the new Quantities fields.  To use the Quantities, enter the number of each type of Compute instance you want and you will get a rough idea of what the cost will be.  Please note that there are assumptions; for the most part, I have annotated in the spreadsheet what those assumptions are.  Once you are happy with your compute instances, you can go down to the bottom and move to the new Cloud Storage Comparison and Calculator.

Cloud Compute Comparison and Calculator

Next, The Cloud Storage Comparison and Calculator:

It was quite an ordeal getting the tiered storage pricing models to work correctly as formulas, but it is all in there (a rough sketch of the tiered math appears below).  The Cloud Storage Comparison and Calculator attempts to cover the other side of the Cloud equation by taking into account the following:

Monthly Persistent Storage Requirements

Data Transfer (In and Out of the Storage Cloud)

API Calls (inbound and outbound requests)

Redundancy Costs

By entering some estimates across the top and choosing a quantity (usually 1, which acts as the trigger to calculate Monthly Cost), you can easily get an idea of what your storage cost would be.
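
For the curious, here is roughly the shape of those tiered formulas.  The tier breakpoints and rates are invented placeholders, not any provider’s actual price sheet:

```python
# Tiered storage pricing, the shape used in the spreadsheet formulas.
# Tiers are (GB in this tier, USD per GB-month); None means unbounded.
# All numbers below are invented placeholders.
TIERS = [(1000, 0.15), (49000, 0.14), (None, 0.13)]

def monthly_storage_cost(gb):
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        slab = remaining if size is None else min(remaining, size)
        cost += slab * rate
        remaining -= slab
        if remaining <= 0:
            break
    return cost

print(monthly_storage_cost(500))    # fits in the first tier: $75.00
print(monthly_storage_cost(75000))  # spans all three tiers: $10,260.00
```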

Cloud Storage Comparison and Calculator

And finally we have the Cost Summary Page.  This page combines the Total Monthly Cost from the Compute and Storage sheets into one place.

Cloud Comparison Summary Monthly Total Costs

To switch from the Compute page to the Storage page, and from the Storage page to the Summary page (or worksheet, if you want to be precise), go to the bottom of the page and select the appropriate tab (as shown below).

Tabs and Sheets

In the next post I will cover different results that came out as I used the Calculator.