Archive | IaaS – Infrastructure as a Service

Defying Data Gravity

2 Apr

How to Defy Data Gravity

Since I changed companies, I have been incredibly busy as of late and my blog has looked neglected.  At a minimum I was trying to do a post or two per week, and the tempo will soon change to move back toward that….

As a first taste of what will be coming in a couple of weeks I thought I would talk a bit about something I have been thinking a great deal about.

Is it possible to defy Data Gravity?

First a quick review of Data Gravity:

Data Gravity is the theory that data has mass.  As data (mass) accumulates, it begins to have gravity.  This Data Gravity pulls services and applications closer to the data.  The attraction (gravitational force) is caused by the need for services and applications to have higher-bandwidth and/or lower-latency access to the data.

Defying Data Gravity, how?

After considering how this might be possible, I believe that the following strategies/approaches could make it feasible to come close to Defying Data Gravity.

All of the bullets below can be leveraged to assist in defying Data Gravity; however, they all have both pros and cons.  The strengths of some of the patterns and technologies can be weaknesses of others, which is why they are often combined in highly available and scalable solutions.

All of the patterns below provide an abstraction or transformation of some type to either the data or the network:

  • Load Balancing : Abstracts Clients from Services, Systems, and Networks from each other
  • CDNs : Abstract Data from its root source to Network Edges
  • Queueing (Messaging or otherwise) : Abstracts System and Network Latency
  • Proxying : Abstracts Systems from Services (and vice versa)
  • Caching : Abstracts Data Latency
  • Replication : Abstracts Single Source of Data (Multiplies the  Data i.e. Geo-Rep or Clustering)
  • Statelessness : Abstracts Logic from Data Dependencies
  • Sessionless : Abstracts the Client
  • Compression (Data/Indexing/MapReduce) : Abstracts (Reduces) the Data Size
  • Eventual Consistency : Abstracts Transactional Consistency (Reduces chances of running into Speed of Light problems i.e. Locking)

So to make this work, we have to fake the location and presence of the data to make our services and applications appear to have all of the data beneath them locally.  While this isn’t a perfect answer, it does give the ability to move less of the data around and still give reasonable performance.  Using the above patterns allows for the movement of an Application and potentially the services and data it relies on from one place to another – potentially having the effect of Defying Data Gravity.  It is important to realize that the stronger the gravitational pull and the Service Energy around the data, the less effective any of these methods will be.
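To make the caching pattern above concrete, here is a minimal sketch (mine, not from any particular product; the remote store and its fetch method are hypothetical stand-ins for whatever high-latency data source you have).  A read-through cache keeps hot data next to the application, so most reads never have to cross the network back to where the data's mass actually lives:

# A tiny read-through cache: reads hit local memory first and only fall
# back to the remote (high-latency) store on a miss.
class ReadThroughCache
  def initialize(remote_store)
    @remote = remote_store   # e.g. a database or object store in another facility
    @local  = {}             # in-process cache; a real one would add TTLs and eviction
  end

  def get(key)
    @local[key] ||= @remote.fetch(key)   # pay the latency cost only once per key
  end
end

# cache = ReadThroughCache.new(remote_store)   # remote_store is hypothetical
# cache.get("user:42")   # slow the first time, local on every read after that

The application behaves as if the data were beneath it locally, which is exactly the illusion described above.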

Why is Defying Data Gravity so hard?

The speed of light is the answer.  You can only shuffle data around so quickly; even using the fastest networks, you are still bound by distance, bandwidth, and latency.  All of these are bound by time, which brings us back to the speed of light.  You can only transfer so much data across the distance of your network so quickly (in a perfect world, the speed of light becomes the limitation).
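To put a rough number on it (my arithmetic, using round figures): light in fiber travels at roughly 200,000 km/s, so a round trip between two facilities 5,000 km apart can never be faster than

t_{\text{round trip}} \approx \frac{2 \times 5000\ \mathrm{km}}{200{,}000\ \mathrm{km/s}} = 50\ \mathrm{ms}

and that is before any switching, queuing, or protocol overhead is added.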

The many methods explained here are simply a pathway to portability, but without standard services, platforms, and the like, even with these patterns it remains impossible to move an Application, Service, or Workload outside the boundaries of its present location.

A Final Note…

There are two ways to truly Defy Data Gravity (neither of which is very practical):

Store all of your Data Locally with each user and make them responsible for their Data

If you want to move, be willing to accept downtime (this could be minutes to months) and simply store off all of your data and ship it somewhere else.  This method would work no matter how large the data set, as long as you don't care about being down.

CLASH – CLoud Admin SHell

11 Jan

It has been several weeks since I have posted to this blog.  I would blame this on the holidays, but that would be inaccurate as it has been something far more insidious!

What is CLASH?
CLASH is a universal shell.  What is a universal shell?  First, by universal I mean that it is intended to run on all major desktop operating system platforms including:
Windows XP, Windows Vista, Windows 7
Mac OSX Leopard, Mac OSX Snow Leopard
Ubuntu Linux, and most other flavors of Linux
——————————–
Now the shell portion is a bit different.  Since this is a CLoud Admin SHell, Cloud is an important part of the idea.  In this initial prototype I went through many experiments in learning the nuances of JRuby <follow on post link here>, but I have been able to get a reasonably working version using VIJava (the interface I'm using to get JRuby to work with VMware vCenter/vSphere).
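For the curious, the shape of the JRuby-to-VIJava plumbing looks roughly like the sketch below.  This is illustrative only, not the actual CLASH source; the jar path, server address, and credentials are placeholders, and the class names come from the open-source VIJava library:

# Connect JRuby to vCenter/vSphere through VIJava and look up a VM by name.
require 'java'
require 'lib/vijava.jar'   # assumes the VIJava jar sits on this (placeholder) path

java_import 'com.vmware.vim25.mo.ServiceInstance'
java_import 'com.vmware.vim25.mo.InventoryNavigator'

url = java.net.URL.new('https://127.0.0.1/sdk')
si  = ServiceInstance.new(url, 'administrator', 'password', true)   # true = ignore SSL cert

vm = InventoryNavigator.new(si.getRootFolder).searchManagedEntity('VirtualMachine', 'SomeVMName')
puts vm.getGuest.getGuestFullName if vm   # roughly what Get-VM prints

si.getServerConnection.logout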

Great, so what can I do with this prototype?

-You can try it out on any of the platforms listed above!
-If you are on a Windows System, you can either edit the clash.bat file and put in your server’s IP or name along with a username and password to connect
-If you are on Mac or Linux, you can simply type ./clash --Server 127.0.0.1 --Username administrator --Password password to connect

-The following are the working commands:
> Get-VM SomeVMName
Above gets the VM object and prints out the Guest Operating System type

> $result
This command prints the name of the VM object
> $result $
This displays the methods available to the VM object
> $result $.get
This shows only the methods with “get” in them
> $result $.set
This shows only the methods with “set” in them
> $result $.find something
This shows only the methods that contain "something" in them
> $result method
This will execute the method against the object (i.e. $result getName will display the name property of the VM object from Get-VM)
> disconnect
This will cleanly disconnect from the vCenter / vSphere server
> Start-VM SomeVMName
This command is flaky at the moment; this will be fixed when the next prototype is released in a few weeks
> Stop-VM SomeVMName
This is in the same state as Start-VM
> Get-VM SomeVMName > $result getName
This allows a limited form of piping in clash by using the > as an operator (you must have whitespace on both sides of the > symbol).

Where is this headed?
After many experiments, it will be broken out into a flexible system that allows many different options and cool capabilities against not only vCenter and vSphere, but most Cloud platforms as well.  Currently planned platforms include:
-Amazon EC2 and S3
-Rackspace
-(Looking for the next provider for this list)

Other interfaces to the shell (for both input and output) will include a Web interface; I'm looking for thoughts on what other types of interfaces people would like.

How will this work?
Below is my latest planned diagram for how I hope/think things should work:

Where can I get the Prototype?
You can get the code from GitHub here
You can download the entire package here

Installation Directions (AGAIN, this is a PROTOTYPE, it does NOT follow best practices)
1.) Make sure that you have Java 1.5 or Above Installed
Download Java from Here

2.) Install JRuby 1.5.6
Download JRuby from Here
3.) Follow the JRuby Setup Instructions Here
4.) Download the CLASH prototype/alpha-1 from GitHub (See Above Links)
Unzip/Tar it to c:\clash or /clash directory in ROOT
5.) Start clash by going to c:\clash\bin\ or /clash/bin
On Windows edit clash.bat to contain the correct IP Address, Username, and Password
then run clash.bat
On Mac and Linux type ./clash --Server 127.0.0.1 --Username administrator --Password password
Make sure to select the IP or Name of a valid vCenter / vSphere Server
6.) Play with clash
A Request:
Please supply feedback to me through comments to this post or communicate directly with me through Twitter – my handle is @mccrory
I’m looking for ideas/things that you would like to see clash do, better commands, capabilities, features, etc.

Cloud Escape Velocity – Switching Cloud Providers

18 Dec

The term Escape Velocity refers to the speed needed to "break free" from a gravitational field without further propulsion, according to Wikipedia.org.  Data Gravity, as explained in THIS previous post, is what attracts and builds more Data, Applications, and Services on Clouds.  Data Gravity is also what creates a high Escape Velocity for moving to another Cloud.
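For reference, the physics the metaphor borrows from (this is the textbook definition, not a Cloud formula) is

v_e = \sqrt{\frac{2GM}{r}}

where M is the mass being escaped and r the distance from its center; the more massive the body, or in this analogy the more data and services you are sitting on top of, the faster you have to be going to break free.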

Some background on why this post is timely:

A few days ago Amazon announced a new AWS service for importing VMware disk images (VMDKs) into EC2.  VMware already offered a method for converting EC2 instances into Workstation VMs through its Converter tool, with a second-pass conversion into ESX VMs.  While all of this sounds wonderful and it does have value, it brings to light an entirely different issue: only Stateful / Fully encapsulated applications can be moved around in this way.

Examples of sources of Cloud Gravity (App, Service, and Data Gravity Combined) on your specific Application.

If someone selects a Cloud provider and writes an application leveraging anything more than a handful of VMs, Data Gravity will make it virtually impossible to move to a new/different Cloud provider.  Don’t believe it?

See the Diagrams Below:

Here is a diagram of an app that has a Low Escape Velocity because of Lower Cloud Gravity:

Cloud Escape Velocity with Low Gravity

Below is a diagram of an app that has a High Escape Velocity because of High Cloud Gravity:

Cloud Escape Velocity with High Gravity

Some potential dependencies include:

Database with a specific API

Web Worker which serves as a web interface and uses internal Authentication (Your user logins are here!)

Application code that uses the Database and Web Workers specific APIs and/or depends on Low Latency and High Throughput access to them.

Here are a few additional things to think about:

- The longer (more time) an Application stays in a specific Cloud, the more difficult it is to move.  Why?  Data Gravity increases due to more Mass (data being stored).  Imagine accumulating 100's of GBs of Data; how easy will it be to shuffle/transfer that much data around?  (See the rough arithmetic after this list.)

- The more provider APIs and Services that you depend on the harder it is to move.  Why?  Because there are only two paths that can be taken in a move.  The first is to find another provider that has the exact same set of APIs and Services (this will limit your choices).  The second is to change or rewrite your application to take advantage of the new Cloud provider’s APIs and/or Services.

- Different providers have different charges for the consumption of the same resources.  Suppose your current provider gives free usage of queues for applications, while the provider you are looking to move to charges after the first X messages on the queue.  Now what do you do?  You will either pay more when you move, rewrite your application to fit the new provider's model, or pick another provider that has free queue usage.

- Different QoS guarantees from Cloud provider to Cloud provider.  Some Cloud providers offer SLAs with reimbursements for outages, others only offer best effort.  Some providers offer tiered Services, others only offer a single tier.  What happens if you want to move and you can’t get the minimum level of QoS that you need?
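As promised in the first bullet above, here is a rough sense of what shuffling that much data around actually costs in time (my arithmetic, round numbers): moving 500 GB over a sustained 100 Mbps link takes

t \approx \frac{500 \times 8 \times 10^{9}\ \mathrm{bits}}{10^{8}\ \mathrm{bits/s}} = 40{,}000\ \mathrm{s} \approx 11\ \mathrm{hours}

and that is before compression, retransmits, or the fact that you probably cannot saturate the link the whole time.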

This is NOT an attempt to dissuade anyone from using Public Clouds (they are incredibly valuable and powerful), but I would like more people to go in eyes wide open.

Data Gravity – in the Clouds

7 Dec

Today Salesforce.com announced Database.com at Dreamforce.  I realized that many could be wondering why they decided to do this and, more importantly, why now?

The answer is Data Gravity.

Consider Data as if it were a Planet or other object with sufficient mass.  As Data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data. This is the same effect Gravity has on objects around a planet.  As the mass or density increases, so does the strength of gravitational pull.  As things get closer to the mass, they accelerate toward the mass at an increasingly faster velocity.  Relating this analogy to Data is what is pictured below.
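The physics being borrowed here, purely as an analogy (and not the Data Gravity formula hinted at near the end of this post), is Newton's law of gravitation:

F = G \frac{m_1 m_2}{r^2}

The attraction grows with the masses involved and falls off with the square of the distance between them, which is exactly the intuition: more data, and less distance (latency) between it and the things using it, means a stronger pull.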

Data Gravity

Services and Applications can have their own Gravity, but Data is the most massive and dense, and therefore has the most gravity.  Data, if large enough, can be virtually impossible to move.

What accelerates Services and Applications toward each other and toward Data (the Gravity)?

Latency and Throughput, which act as accelerators in building a stronger and stronger reliance or pull on each other.  This is the very reason that VMforce is so important to Salesforce's long-term strategy.  The diagram below shows the accelerant effect of Latency and Throughput; the assumption is that the closer you are (i.e. in the same facility), the higher the Throughput and the lower the Latency to the Data, and the more reliant those Applications and Services will become on Low Latency and High Throughput.

Note:  Latency and Throughput apply equally to both Applications and Services

How does this all relate back to Database.com?  If Salesforce.com can build a new Data Mass that is general purpose, but still close in locality to its other Data Masses and App/Service Properties, it will be able to grow its business and customer base that much more quickly.  It also enables VMforce to store data outside of the construct of ForceDB (Salesforce's core database), enabling new Adjacent Services with persistence.

The analogy extends to weight: just as your weight differs from one planet to another, services and applications (compute) have different weights depending on Data Gravity and which Data Mass(es) they are associated with.
Here is a 3D video depicting what I diagrammed at the beginning of the post in 2D.

 

More on Data Gravity soon (There is a formula in this somewhere)

Public Cloud Comparison and Calculator v2

5 Dec

After some time away from the Public Cloud Compute Comparison that I did a couple of months ago (which got X hits), I decided to update it based on feedback and new ideas.  What follows is a brief walkthrough with instructions on how to use the Calculator.

Before I go any further, a brief disclaimer:  I do not warrant the accuracy of this Comparison and Calculator; it may have errors and omissions (all unintentional if they exist).  I also take no responsibility if your bill turns out to be something very different from what the Calculator shows.  And finally, I'm employed by Dell, which has relationships with Microsoft, Joyent, and Amazon, and possibly the others as well without my being aware of it.

First, let’s cover the updated Compute Comparison:

I've updated the Compute comparison with Terremark as an additional provider; I would be interested in adding others if people are interested, so simply add a request by commenting at the bottom of this post.  Also added is the ability to Calculate/Estimate your costs by using the Quantities fields.  To use the Quantities, add the number of each type of Compute instance you like and get a rough idea of what the cost will be.  Please note that there are assumptions; for the most part I have annotated in the spreadsheet what those assumptions are.  Once you are happy with your compute instances, you can go down to the bottom and move to the new Cloud Storage Comparison and Calculator.

Cloud Compute Comparison and Calculator

Next, The Cloud Storage Comparison and Calculator:

It was quite an ordeal getting the tiered storage pricing models to work correctly as formulas, but it is all in there.  The Cloud Storage Comparison and Calculator attempts to cover the other side of the Cloud equation, by taking into account the following:

Monthly Persistent Storage Requirements

Data Transfer (In and Out of the Storage Cloud)

API Calls (Inbound and Outbound requests)

Redundancy Costs

By entering some estimates across the top and choosing a quantity (this would usually be 1, which acts as a trigger to calculate Monthly Cost) you can easily get an idea of what your storage cost would be.
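To give a feel for what turning tiered pricing into a formula involves (the tier sizes and rates below are made-up placeholders, not any provider's actual prices), the calculation walks the month's usage down through each tier in turn:

# Tiered storage pricing: each tier is [gb_in_tier, price_per_gb]; the last
# tier uses Float::INFINITY to absorb all remaining usage.
TIERS = [[1_000, 0.15], [49_000, 0.12], [Float::INFINITY, 0.10]]   # placeholder rates

def monthly_storage_cost(gb_stored, tiers = TIERS)
  remaining = gb_stored
  cost = 0.0
  tiers.each do |gb_in_tier, price_per_gb|
    break if remaining <= 0
    billed = [remaining, gb_in_tier].min   # how much of the usage lands in this tier
    cost += billed * price_per_gb
    remaining -= billed
  end
  cost
end

# monthly_storage_cost(5_000)   # first 1,000 GB at one rate, the remaining 4,000 at the next

The spreadsheet does the same thing with nested formulas, which is why it was such an ordeal to get right.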

Cloud Storage Comparison and Calculator

And finally we have the Cost Summary Page.  This page combines the Total Monthly Cost from the Compute and Storage sheets into one place.

Cloud Comparison Summary Monthly Total Costs

To switch from the Cloud page to the Storage page and from the Storage page to the Summary page (or worksheet if you want to be precise), go to the bottom of the page and select a tab (as shown below)

Tabs and Sheets

In the next post I will cover different results that came out as I used the Calculator.

Walk-through of the VMforce / Cloud OS / OpenPaaS Demo

13 Nov

This post attempts to walk through the demo that was shown at the Ruby Conference.  I was not actually at the conference, but I am reconstructing what happened based on information that was tweeted and on the presentation materials.

Diagram of VMware Cloud OS - PaaS - VMforce Demo CLI Walk-through

The walk-through above shows a sophisticated PaaS layer (reminding me of the Google AppEngine PaaS) where code is uploaded/pushed and then inspected and compiled.  The resulting "App Instance" (also referred to as a "Slug" in Heroku terms) is ready to respond to requests.  Can't service enough requests with just one App Instance?  Type "vmc instances fu 5" and instead of a single App Instance, the App Instance clones/copies/Scales Out to 5 instances.

Need to Scale Down from 5 App Instances to 3 because demand has fallen?  "vmc instances fu 3" Scales Down the App Instances from 5 to 3, killing the last 2 App Instances, 3 and 4 (note that instances are numbered 0-4).

 

Diagram of VMware Cloud OS - PaaS - VMforce Demo CLI Walk-through - 2

Now the requests have fallen to the point where only 1 App Instance is needed, so the command "vmc instances fu 1" shrinks (Scales Down) the App Instances from 3 to 1.  Now we look to see where the fu Application actually stands, resource-utilization wise (this is the point where we see the overlap with the underlying VM), by typing "vmc stats fu".

This allows a Developer on a Public Cloud (in the above case vCloudLabs.com, hosted by Terremark, using vCloud if I had to guess) to deploy their code with little to NO knowledge of the underlying Infrastructure beneath (IaaS).  This ultimately mirrors the functionality that VMware's Cloud OS will provide to Private Clouds and the Enterprise Development groups inside them.  What is demoed is supposed to be the same system as was designed for VMforce in Salesforce.com's Data Center as well.

Where most Enterprise IT Architectures are today

4 Nov

Most Enterprises are architecturally in a rigid and fragile state.  This has been caused by years of legacy practices in support of poor code, poor design patterns, and underpowered hardware (which focused on increasing MHz, not parallelism/multiple cores).  What follows is a brief review of what has led us here; it is needed background for the follow-on post, which exercises a theory that I'm testing.


Architecture Phase 1 – How SQL and ACID took us down a path
Early on in the Client/Server days even low power x86 servers were expensive. These servers would have an entire stack of software put on them (i.e. DB and Application functions with Clients connecting to the App and accessing the DB). This architecture made the DB the most critical component of the system. The DB needed to ALWAYS be online and needed to have the most rigid transactional consistency possible. This architecture forced a series of processes to be put in place and underlying hardware designs to evolve in support of this architecture.
This legacy brought us the following hardware solutions:

RAID 1 (Disk Mirroring) -> Multi-pathed HBAs connecting SANs with even more Redundancy

Two NIC Cards -> NICs teamed to separate physical Network Switches

Memory Parity -> Mirrored Memory

Multi-Sockets -> FT Based in Lock Step CPUs

All of this was designed to GUARANTEE both Availability and Consistency.
Having Consistency and Availability is expensive and complicated.  This also does not take into account ANY Partition tolerance.  (See my CAP Theorem post)

Architecture Phase 2 – The Web
Web based architectures in the enterprise contributed to progress with a 3-Tier model where we separated the Web, Application, and Database functionality into separate physical systems. We did this because it made sense. How can you scale a system that has a Web, Application, and Database residing on it? You can't, so first you break it out and run many web servers with a load balancer in front. Next you get a big powerful server for the Application tier and another (possibly even more highly redundant than the Application tier server) for the Database. All set, right? This is the most common architecture in the enterprise today. It is expensive to implement, expensive to manage, and expensive to maintain, but it is the legacy that developers have given IT to support.  The benefit is that there is better scalability and flexibility with this model, and adding virtualization helps extend the life of this architecture.

Where is Virtualization in all of this?
Virtualization is the closest Phase 2 could ever really get to the future (aka Phase 3, which is covered in my next post). Virtualization breaks the bond with the physical machines, but not the applications (and their architectures) that are running on top. This is why IT administrators have had such a need for capabilities in products like VMware ESX in conjunction with VMware vSphere like HA (High Availability), DRS (Distributed Resource Scheduling), and FT (Fault Tolerance). These things are required when you are attempting to keep a system up as close to 100% as possible.

Today

The trend toward Cloud architectures is forcing changes in development practices and coding/application design philosophies.  Cloud architectures are also demanding changes in IT operations and the resulting business needs are creating pressures for capabilities that current/modern IT Architectures can’t provide.

This leads us to what is coming….

CAP Theorem and Clouds

3 Nov

A background on CAP Theorem:

CAP Theorem is firmly anchored in the SOA (Service Oriented Architecture) movement and is showing promise as a way of classifying different types of Cloud Solution Architectures.  What follows is an explanation about CAP Theorem, how it works, and why it is so relevant to anyone looking at Clouds (Public, Private, Hybrid, or otherwise).

Distributed Systems Theory – The CAP Theorem:
CAP Theorem was first mentioned by Eric Brewer in 2000 (CTO of Inktomi at the time) and was proven 2 years later.  CAP stands for Consistency, Availability, and Partitioning tolerance.  CAP Theory states that you can only have TWO of the three capabilities in a system.  So you can have Consistency and Availability, but then you don’t have Partitioning tolerance.  You could have Availability and Partitioning tolerance without rigid Consistency.  Finally you could have Consistency and Partitioning tolerance without Availability.

The KEY assumption is that the system needs to persist data and/or has state of some type.  If you don't need either Data persistence or State ANYWHERE, you can get very close to having Consistency, Availability, and Partitioning tolerance simultaneously.

Understanding Consistency, Availability, and Partitioning:

Consistency is a system’s ability to maintain ACID properties of transactions (a common characteristic of modern RDBMS). Another way to think about this is how strict or rigid the system is about maintaining the integrity of reads/writes and ensuring there are no conflicts.  In an RDBMS this is done through some type of locking.
Availability is a system's ability to successfully respond to ALL requests made.  Think of data or state information split between two machines: a request is made, machine 1 has some of the data, and machine 2 has the rest.  If either machine goes down, not ALL requests can be fulfilled, because not all of the data or state information is available entirely on either machine.
Partitioning is the ability of a system to gracefully handle Network Partitioning events.  A Network Partitioning event occurs when a system is no longer accessible (Think of a network connection failing). A different way of considering Partitioning tolerance is to think of it as message passing.  If an individual system can no longer send/receive messages to/from other systems, it has been effectively “partitioned” out of the network.
A great deal of discussion has occurred over Partitioning and some have argued that it should be instead referred to as Latency.  The idea being that if Latency is high enough, then even if an individual system is able to respond, the individual system will be treated by other systems as if it has been partitioned.
In Practice:

CAp – Think of a traditional Relational Database (i.e. MS SQL, DB2, Oracle 11g, Postgres); if any of these systems lose their connection or experience high latency, they cannot service all requests and therefore are NOT Partitioning tolerant (there are ways to solve this problem, but none are perfect)
cAP – A NOSQL Store (i.e. Cassandra, MongoDB, Voldemort); these systems are highly resilient to Network Partitioning (assuming that you have several servers supporting any of these systems) and they offer Availability.  This is achieved by giving up a certain amount of Consistency; these solutions follow an Eventual Consistency model (see the toy sketch after this list).
CaP – This isn't an attractive option, as your system will not always be available, and it wouldn't be incredibly useful in a Cloud environment at least.  An example would be a system where, if one of the nodes fails, other nodes can't respond to requests.  Think of a solution that has a head-end where, if the head-end fails, it takes down all of the nodes with it.
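Here is the toy sketch referenced in the cAP bullet above.  It shows last-write-wins reconciliation, which is only one of several Eventual Consistency strategies, and the classes are hypothetical rather than modeled on any particular NOSQL store:

# Toy last-write-wins replica: it keeps accepting writes locally (staying
# Available during a partition) and reconciles later by keeping whichever
# value carries the newest timestamp -- Consistency is only eventual.
Versioned = Struct.new(:value, :timestamp)

class Replica
  def initialize
    @state = {}
  end

  def write(key, value, timestamp = Time.now.to_f)
    current = @state[key]
    if current.nil? || timestamp > current.timestamp
      @state[key] = Versioned.new(value, timestamp)
    end
  end

  def read(key)
    entry = @state[key]
    entry && entry.value
  end

  # Called when the partition heals: fold another replica's state into ours.
  def sync_from(other)
    other.entries.each { |key, versioned| write(key, versioned.value, versioned.timestamp) }
  end

  def entries
    @state
  end
end

Until sync_from runs, two replicas can happily return different answers for the same key; that window of disagreement is exactly the Consistency these systems trade away.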
A Balancing Act
When CAP Theorem is put into practice, it is more of a balancing act where you will not truly leave out C, A, or P.  It is a matter of which two of the three the system is closest to (as seen below).
NOTE:  Post to follow tying this more closely to the Cloud coming tomorrow.
In my research I came across a number of great references on CAP Theorem (listed in order of preference):

The Real Path to Clouds

2 Nov

I've been spending a great deal of time as of late researching the background and roots of Cloud Computing in an effort to fully understand it. The goal behind this was to understand what Cloud computing is at all levels, which is quite a tall order. I think I have it figured out and am now looking for the community's feedback to vet and fully mature my theory.

First a brief review of CAP Theorem, which states that any implemented system can only successfully focus on two of the three capabilities (Consistency, Availability, and Partitioning tolerance).  If you aren't familiar with CAP Theory, please check out yesterday's post and the resources at the bottom of that page.

Some background – below are the two phases deployed today in >90% of Enterprises.  See "Where most Enterprise IT Architectures are today" for an in-depth discussion of Phase 1 and Phase 2.

 

What follows is the theory:

Architecture Phase 3 – Cloud & The Real Time Web Explosion

The modern web has taken hold, and Hyperscale applications have pushed a change in Architecture away from the monolithic-esque 3-Tier design that traditional Enterprises still employ toward a loosely coupled, Services Oriented, queued/grid-like asynchronous design. This change became possible because developers and architects decided that a Rigidly Consistent and Highly Available system wasn't necessarily required to hold all data used by their applications.

This was brought to the forefront by Amazon when it introduced its Dynamo paper in 2007, in which Werner Vogels presented the fact that all of Amazon is not Rigidly Consistent but follows a mix of an Eventual Consistency model on some systems and a Rigidly Consistent model on others. Finally, the requirement for a single system operating at as close to 100% uptime and consistency as possible was broken. Since then, we have found that all Hyperscale services follow this model: examples include Facebook, Twitter, Zynga, Amazon AWS, and Google.

Deeper into Phase 3
Why is Phase 3 so different than Phase 1 and 2?
Phase 3 Architecture not only stays up when systems fail, it assumes that systems can and will fail! In Hyperscale Architectures, only specific cases require RDBMS with Rigid Consistency, the rest of the time an Eventually Consistent model is fine. This move away from requiring systems to be up 100% of the time and maintaining Rigid Consistency (ACID compliance for all you DBAs) lowers not only the complexity of the applications and their architectures, but the cost of the hardware they are being implemented on.

Moore’s Law is pushing Phase 3
Until around 7 years ago, CPUs were constantly increasing in clock speed to keep up with Moore's Law. Chip makers changed their strategy to keep up with Moore's Law by switching from increasing clock speed to increasing the number of cores operating at the same clock speed. There are only two ways to take advantage of this increase in cores: the first is to use Virtualization to slice up CPU cores and allocate either a small number of cores or even partial cores to an application. The second is to write software that is able to asynchronously use and take advantage of all of the cores available. Currently most enterprise software (Phase 2) is not capable of leveraging CPU resources in the second way described; however, many systems in Phase 3 can.
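A tiny sketch of the second approach, written in Ruby to match the rest of this blog (note that JRuby maps these to real JVM threads and can spread them across cores, while standard MRI's global interpreter lock would not; the parallel_map helper is mine, not a library function):

# Fan work out across a handful of threads and gather the results in order --
# the "write software that uses all of the cores" idea in miniature.
def parallel_map(items, workers = 4)
  slices  = items.each_slice((items.size / workers.to_f).ceil).to_a
  threads = slices.map do |slice|
    Thread.new { slice.map { |item| yield(item) } }   # each thread owns one slice
  end
  threads.flat_map(&:value)   # Thread#value waits for the thread and returns its result
end

# parallel_map((1..8).to_a) { |n| n * n }   # => [1, 4, 9, 16, 25, 36, 49, 64]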

How do IT processes fit into this?
Today most Enterprise IT shops have designed all of their processes, tools, scripts, etc. around supporting the applications and architectures mentioned in Phase 1 and Phase 2, NOT Phase 3. Phase 3 requires an entirely different set of processes, tools, etc. because it operates on an entirely different set of assumptions! How many Enterprise IT shops operate with assumptions such as:
Replicating Data 3 or more times in Real Time
Expecting that 2 or more servers (nodes) in a system can be down (this is acceptable)
Expecting that an entire Rack or Site can be down
(These are just a few examples)

Why will Enterprises be compelled to move to Phase 3 (eventually)?
The short answer is cost savings and operational efficiency, the longer answer is around the reduction in systems complexity. The more complex the hardware and software stack is (think of this as the number of moving parts) the higher the likelihood of a failure. Also, the more complicated the stack of hardware and software become, the more difficult it is to manage and maintain. This leads to lower efficiencies and higher costs, at the same time making the environment more brittle to changes (which is why we need ITILv3 in Enterprises today). To gain flexibility and elasticity you have to shed complex systems with hundreds of interdependencies and an operational assumption of all components being up as close to 100% as possible.

Conclusion:

Different systems have different requirements; most systems do not in fact need an ACID-compliant consistency model (even though today most have been developed around one).  Some specific cases need ACID properties maintained and a level of 100% Consistency, but these are in the minority (whether in the Cloud or in the Enterprise).  This is causing a split in data models between the current CA (Consistency and Availability – traditional Enterprise Applications) and AP (Availability and Partitioning tolerance – many Cloud Applications).  A combination of these is the long-term answer for the Enterprise and will be what IT must learn to support (both philosophically and operationally).

Follow me on Twitter if you want to discuss @mccrory

Is it me or is it EngineYard ? Updated – (The Answer is BOTH)

30 Oct

UPDATED:

So after doing some additional reading at the suggestion of @drnic, I read what @tmornini said about the changes occurring.  What is the stated view of events?  EngineYard is growing up.  I can believe this as a plausible explanation; I have personally gone through this type of change at several of the startups that I have founded (specifically @Surgient & @Hyper9).  Companies change and mature (at least hopefully they do).  As these changes happen, the companies aren't always the right fit for the same group of employees who were there in the very beginning.

This is actually a very frequent occurrence at successful companies, and one that comes to mind is VMware.  I watched from the outside as a customer and partner while VMware went from a <100 person company when I first worked with them to an independent subsidiary of EMC with >100K employees.  Now VMware has multiple companies that are subsidiaries of it!

I still am left wondering what EngineYard’s real plan is, although I did see a mention that the customers leaving might have been on an old hardware platform as well.  I will post more if/when I find out more.  EngineYard looks like it is in an exciting phase and a great place to work (great momentum).

ORIGINAL POST:

Am I the only one that finds odd things afoot at EngineYard? Since August @ezmobius, @wycats and @carllerche have all left EngineYard (under friendly terms from what I can tell). Now High Profile EY customers are deciding to leave on friendly terms? In the past week I have seen two examples of this;  the first was with New Relic deciding to change to their own model + a different provider (yet said they had no issues with EngineYard)

and then Pivotal Labs' app Pivotal Tracker is leaving under similar circumstances (IMHO).

In my mind, NONE of this would be odd, except that it has all happened in the window from Sept. 1st through Nov. 1st, and both companies have chosen early November for their move (one heck of a coincidence). What is going on over at EngineYard? BTW, in case any of you out there are wondering, I'm not an EngineYard customer, but I may be in the future.  I am actually impressed with what they have built and I'm a fan of both Ruby and JRuby.  :)
