The Future of the Network

22 May

This is part 1 of 5 in a series that will be posted daily this week.

The Future of the Network or “Direct-Connect Switchless Networks”

For the past few months, I have been considering how Networking technology will (and in some cases should) evolve over the next 3–7 years. Networking technology has stagnated from the breakneck pace at which it once moved. During the '90s we saw amazing boosts in capabilities, evolving protocols, performance, and incredible levels of innovation. Fast forward to 2017 and there has been no real evolution in the network space overall. Sure, there are useful technologies such as network virtualization, SD-WAN, and NFV. These give a lot of flexibility and capability to virtual machines and their hypervisors, along with container technologies such as Docker and Kubernetes. However, the capabilities of the network remain largely untapped in my opinion.

My initial thoughts around this began with a conversation I had with @jamesurquhart a few months ago, when he offhandedly said that all network switches/routers are really just computers. This is an obvious thing, but it was a profound insight for me at the time. I had never looked at a network switch or router as a computer; I always viewed it as another specialized piece of hardware. The reality is that routers and switches are computers, and in fact are purpose-built computers with a number of ASICs in them. I considered this further and concluded that modern switching gear is becoming more like a commoditized component. This is partially due to large cloud providers such as Amazon, Microsoft, Facebook, and Google all creating their own switches and in some cases using their own ASICs.

Packet Flow on Juniper Networks T Series Core Routers

[Above is a diagram of the Packet Flow on a Juniper Networks T Series Core Router. Notice all the ASICs...]


Switches have become a commodity today, and routers (I mean big routers) are really more like HPC devices. Big iron routers are jam-packed with capabilities, and many have large numbers of configurations available depending on the demands of the particular network they are supporting. Much like mainframes with specialized components, these systems are NOT becoming commoditized anytime soon; they are simply too specialized and complex. We have elite cloud companies that do their own ASIC design and can invest in switches in large quantities, resulting in large capex savings. These same companies aren't developing any HPC/core routers, as their deployments of those are too few to justify the development expense, at least not currently.

As we move into the future, the trends of networking infrastructure point to continued commoditization. What will and needs to change is the view that switches are purely there to push packets. As you will see in this series of blog posts, there is a huge opportunity for network vendors and others to capitalize on several industry trends to create something beyond a traditional network.

Read part 2 of the series tomorrow to find out the additional trends that are affecting the Future of Networks.

Change is good

6 Aug

As I continue my journey through Cloud Computing, Big Data, Networking, DevOps and more, I have decided to make another change. Today is my first day at Warner Music Group, where I will be assuming the role of SVP of DSP Engineering (aka SVP of Platform Engineering).

There may be speculation as to why I left VMware and Cloud Foundry; I would like to put that to rest here. VMware and Cloud Foundry are both things that I believe in, and I think they both have bright futures. For me personally, I needed a change from a vendor role to that of a customer (which will also keep me in the Cloud Foundry ecosystem). The timing has absolutely nothing to do with Paul Maritz's departure (which was coincidental).

I will be at VMworld presenting and participating (so feel free to connect with me there – Twitter is best for this).
I expect great things from VMware and especially the Cloud Foundry team over the next couple of years.
Meanwhile, at WMG, I expect to learn and do a great many new and exciting things that I will talk about more over time. I'm looking for a few very talented senior developers/leads to work with me; if you know someone or are interested, feel free to send me a resume or ping me on Twitter.

Data Launched

1 Jul

In case anyone missed it, I launched on Tuesday of last week. I intend to continue writing posts on PaaS, IaaS, and Big Data topics and theory here while pursuing Data Gravity and Data Physics oriented efforts at .

More interesting updates soon!

Artificial Data Gravity

20 Feb

Having covered Data Gravity several times on this blog, I thought it was time to cover a derivative topic: Artificial Data Gravity.

Recall that Data Gravity is the attractive force created by Data amassing and the needs of Apps and Services to leverage low latency and high bandwidth.

Artificial Data Gravity is the creation of attractive forces through indirect or outside influence.  This could come through Costs, Throttling, Specialization, Legislative means, Usage, or other forms.  Below I will walk through examples of Public Clouds creating, exerting, and leveraging Artificial Data Gravity.

Costs : The fact that AWS S3 is free for unlimited inbound transfer traffic (as is Windows Azure) is a great example of Artificially encouraging Data to amass internally.  By allowing you to put more Data inside of S3 or Azure at no transfer cost, this encourages Data Gravity patterns through Artificial means.
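
To see why free inbound transfer matters, consider a toy cost model.  The per-GB prices below are my own illustrative numbers, not actual AWS or Azure rates:

```python
# Hypothetical per-GB transfer prices (illustrative, not actual cloud rates).
INGRESS_PER_GB = 0.00   # free inbound transfer encourages amassing data
EGRESS_PER_GB = 0.12    # assumed outbound rate

def transfer_cost(gb_in, gb_out):
    """The asymmetry: loading data in is free, pulling it back out is not."""
    return gb_in * INGRESS_PER_GB + gb_out * EGRESS_PER_GB

# Putting 10 TB in costs nothing; migrating the same 10 TB out later does.
print(transfer_cost(10_000, 0))    # 0.0
print(transfer_cost(0, 10_000))    # 1200.0
```

The asymmetry itself, not the specific prices, is what creates the gravity: every gigabyte that enters for free becomes a gigabyte that costs money to leave.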

Throttling : The Twitter API is a great example, with its well-known limit of 350 calls per hour.  This makes it nearly impossible to replicate the traffic on Twitter without special (and very expensive) agreements in place.
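
A quick back-of-the-envelope calculation shows why the cap matters.  The per-call page size below is my assumption, not a documented Twitter value:

```python
# Rough arithmetic on bulk replication under a 350 calls/hour rate limit.
CALLS_PER_HOUR = 350
ITEMS_PER_CALL = 200            # assumed results returned per API call

def hours_to_fetch(total_items):
    """Hours of continuous paging needed at the rate limit."""
    calls = -(-total_items // ITEMS_PER_CALL)    # ceiling division
    return calls / CALLS_PER_HOUR

# Mirroring 100 million items: 500,000 calls, roughly 1,428 hours (~60 days).
print(hours_to_fetch(100_000_000))
```

At that pace the data has effectively already won; only a privileged firehose agreement changes the math.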

Specialization : Specialized services such as DynamoDB not only encourage Data Gravity through transfer pricing, but also encourage low writes and high reads based on a 1:5 ratio.  Not only are you unlikely to ever leave DynamoDB, you are also encouraged to write code that is as write-efficient as possible due to costs.
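
The write/read cost asymmetry can be sketched with hypothetical prices.  The numbers below are mine, chosen only to reflect the 1:5 ratio, not DynamoDB's actual rates:

```python
# Hypothetical capacity prices illustrating a 1:5 write:read cost ratio.
READ_UNIT = 0.01     # assumed $/hour per unit of read throughput
WRITE_UNIT = 0.05    # writes priced at five times reads

def hourly_cost(reads_per_sec, writes_per_sec):
    """Cost of provisioning a given read/write throughput mix."""
    return reads_per_sec * READ_UNIT + writes_per_sec * WRITE_UNIT

# The same total throughput costs far more when it is write-heavy:
read_heavy = hourly_cost(reads_per_sec=1000, writes_per_sec=200)    # 20.0
write_heavy = hourly_cost(reads_per_sec=200, writes_per_sec=1000)   # 52.0
```

The pricing nudges application design itself: a 2.6x cost gap between the two mixes above is a strong incentive to batch, coalesce, or avoid writes.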

Legislative : There are many laws that restrict the location and govern the security and use of Data.  These are not technical or physics related, but artificial means of influencing Data Gravity, as mentioned in this GigaOM piece covering the law dictating Data Gravity.

Usage : Dropbox charges each individual user for use of Shared Data (Artificial Usage).  This means that each person pays for the Data consuming their storage quota; however, Dropbox is only storing a single copy and pointing all authorized users to that single copy.
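
The single-copy mechanic can be sketched with a content-addressed store.  This is a hypothetical illustration of the idea, not Dropbox's actual implementation:

```python
import hashlib

class DedupStore:
    """Single-copy storage with per-user billing: many users reference one
    stored blob, yet each user's quota is charged for the full size."""
    def __init__(self):
        self.blobs = {}    # content hash -> bytes, stored at most once
        self.usage = {}    # user -> bytes billed against their quota

    def put(self, user, data):
        key = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(key, data)                        # one physical copy
        self.usage[user] = self.usage.get(user, 0) + len(data)  # billed per user
        return key

store = DedupStore()
doc = b"x" * 1_000_000
store.put("alice", doc)
store.put("bob", doc)
# One physical copy on disk, two billed copies across user quotas.
print(len(store.blobs), sum(store.usage.values()))
```

Every additional sharer raises billed usage without raising stored bytes, which is exactly the artificial attraction being described.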

There are certainly other forms of Artificial Data Gravity that are not listed in the examples above; if you can think of a concrete example, please comment.

One last note : I'm not saying there is anything particularly wrong with Artificial Data Gravity; however, it is something to be aware of, as it is one of the behaviors/motivations exhibited by Data Gravity as a whole.

PaaS Element Types

8 Dec

Please Note : This post builds directly on the previous post “A viable PaaS Model”

What are PaaS Element Types?

PaaS Element Types are the constructs required to build a PaaS.  Each PaaS Element Type builds upon the previous one.  I'm not the first to come up with the overall concept of Types building upon one another; this was inspired by Data Types from Software Development.  It is important to understand that all PaaS Element Types end up being abstractions built upon one another.

Why are they important?

As we move to the new paradigm of application development, architecture, and management, applications will be composed of these new Element Types vs. the traditional patterns and designs we have all become used to.  By fully understanding PaaS Element Types, you will gain not only an understanding of how different PaaS solutions work, but also what their capabilities, strengths, and weaknesses are and why they have these characteristics.

Compute, Networking, Storage, what about Memory?

You may have noticed that Memory is not listed as a column; why is that?  Storage is meant to encompass all forms of storage, which includes Memory.  (Memory is just a very fast form of volatile storage; just as an L1 Cache is even faster than standard memory, it is all for storing bits and bytes, albeit briefly.)  All of the PaaS Elements have the ability to leverage memory for whatever purpose they may need, but I see no reason to separate Memory out in the Element Types.

Primitives as defined in the previous blog post

Primitives are the Core Building Blocks of Resources.  What does this mean?  Primitives cannot be reasonably reduced to a more basic/granular bit of functionality.  You may be thinking that an Operating System could most certainly be reduced to more simplistic bits of functionality, and you would be correct if we were talking about a traditional Infrastructure, but not a PaaS.  Part of the magic of PaaS is the prescriptive nature that PaaS brings, along with the obfuscation (in most cases) of components such as the Operating System.  In the case of the Operating System, it is also important to recognize that as a Primitive, the OS has not actually been instantiated (it isn't a running OS).  One final note is that PaaS eliminates nearly all direct ties between code and the OS.  (There are still limits imposed by the runtimes, etc., which are difficult to avoid, e.g. Mono for .NET support on Linux.)

Sophisticates : Composites / Combinations / Extensions of Building Blocks (Primitives)

Sophisticates are built from Primitives, meaning that a Sophisticate cannot exist unless it is backed by a Primitive with some addition, change, or a second PaaS Element Type such as another Primitive or Sophisticate.  A Sophisticate could be built from a combination of another Sophisticate and a Primitive!  Let's take an RDBMS as an example (note that this would just as easily apply to NoSQL solutions, etc.).  An RDBMS will likely leverage a RuntimeVM, an Operating System, Processes, an Interface, a Block Store, a Cache, and a File System.  While this may be complex, it is hidden by the RDBMS Interfaces; part of the beauty of PaaS is exposing this combination as a Service or Extension that can be consumed by an API call or a DB connection.

Definitives : Instantiations (Running) of Primitives and Sophisticates either directly in use or wrapped in Services/APIs, creating easily leveraged abstractions.

These abstractions allow complex configurations of Primitives and Sophisticates coupled with Application Logic, Dynamic Configuration capabilities, and more.  Definitives are the live and fully implemented abstractions as they consume resources.  This could be a Schema in an RDBMS or a Collection of Documents in a Document Store.  Definitives are where all of the specificity occurs in PaaS and ultimately what differentiates applications from each other once they are running.
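
The three Element Types can be sketched as a small type hierarchy.  The class names mirror the post's terms, but the code and its fields are my own illustration, not a spec:

```python
class Primitive:
    """Core building block; cannot be reduced further within the PaaS."""
    def __init__(self, name):
        self.name = name

class Sophisticate:
    """Composite backed by at least one other element (Primitive or Sophisticate)."""
    def __init__(self, name, parts):
        assert parts, "a Sophisticate must be backed by other elements"
        self.name, self.parts = name, parts

class Definitive:
    """A running instantiation of a Primitive or Sophisticate; the config
    carries the specificity that differentiates one application from another."""
    def __init__(self, element, config):
        self.element, self.config = element, config

# An RDBMS Sophisticate built from Primitives, then instantiated as a Definitive:
os_prim = Primitive("Operating System")
block = Primitive("Block Store")
rdbms = Sophisticate("RDBMS", parts=[os_prim, block])
running_db = Definitive(rdbms, config={"schema": "orders"})
```

Note how the constraint in `Sophisticate` encodes the rule that it cannot exist without backing elements, while `Definitive` is the only layer that consumes resources and holds configuration.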

What can you do with all of this stuff?

If you use the prescriptive side of PaaS (what a developer writing to a running PaaS offering would do) and simply consume it, you can quickly, and without a deep understanding, deploy applications within the limits of the PaaS App Space.  Alternatively, you can design your own PaaS with whatever capabilities you require to support the behaviors and design patterns that you need for your App Space.  This is done by using the PaaS Element Types to make choices on the Control Space, which in turn creates the boundaries of the PaaS App Space.

Control Spaces share some characteristics of the App Spaces that they provide; this is because the Control Space operates on the same infrastructure that the App Space operates on.  This is true in the majority of PaaS cases that I have seen to date (it could, at least in theory, change).  Control Space components are Definitives built from Primitives and Sophisticates to provide the prescriptive approach to the App Space that makes PaaS an attractive alternative to traditional software builds, configuration, deployments, etc.

The work in mapping Primitives, Sophisticates, and Definitives is not yet finished!

What I have provided is the beginning, not the end.  More work needs to be done in adding additional examples of PaaS Element Types, along with mapping all of the current PaaS offerings to both the PaaS Model and the Element Types that are used in/comprise each offering.  In future blog posts, with help from different collaborators, I hope to accurately map the major PaaSes to the PaaS Model and their PaaS Element Types.

Comment here on the blog or ping me on Twitter (@mccrory) with your thoughts and ideas.

A viable PaaS Model

7 Dec

What makes a PaaS a PaaS?

I’ve seen many discussions on blogs and Twitter around this topic, so much so that many people are tired of talking about it because it always leads to cyclical discussions.  I for one haven’t been satisfied with any of the answers that I have seen.  Some people try to define PaaS with requirements such as that it must be on demand, while others say that an API and services need to be exposed.  I disagree with these requirements/constraints for describing PaaS.  I think there is a model that should be applied to define what a PaaS is; to make this model credible, it is necessary that it can be mapped to most (if not all) current and future offerings defined as PaaS (which will be a follow-on series to this post).

The PaaS Model

The PaaS Model is made up of two different constructs, which are called Spaces.  These two Spaces serve different purposes, but are composed of the same PaaS Element Types (Elements will be explained in detail in an App Space deep-dive follow-on post).  The Control Space and App Space are shown in the diagram below; notice that the App Space is fully wrapped/contained within the Control Space.

Control Space:

The Control Space performs all of the automation, management, and provisioning that is required by the PaaS.  Interaction with other lower-level components such as an Orchestrator is achieved through API abstractions if/when necessary.  The Control Space and its implementation also determine what Elements are allowed/exposed to the App Space.  Further, the Control Space is responsible for maintaining App Space coherency and dependencies.  While the Control Space is comprised of several separate functions, they may be combined in different manners depending on the specific PaaS implementation.  All of this will also commonly be exposed through one or more API interfaces (this, however, isn't necessarily a requirement in the model presently).

App Space:

The App Space is where end-user/customer applications are deployed, updated, and run.  The App Space is controlled (and commonly coordinated) by the Control Space.  The exposure of PaaS Element types by the Control Space to the App Space is one of the key differentiating factors between different PaaS implementations.  App Space characteristics are controlled by how the Control Space is built/designed along with what PaaS Elements were used to build the Control Space.

  • App Network: The path over which applications communicate with each other and with services/resources exposed to apps.  The App Network exposes Network connectivity to the App Space.
  • Executor: The application bootstrapping mechanism for apps being or currently deployed.  The executor provides compute/memory resources to the App Space.
  • Code Processor: Examines code, libraries, and dependencies before sending to the Engine and/or Executor.  This can also be thought of as a code inspector or post/pre-processor.
  • Coordination Network: Where the Control Space components communicate/coordinate with each other.  Think of this as a management network that is in most cases out-of-band from the App Network.  The Coordination and App Networks could be combined, however in a production system this would introduce too large of a security risk.
  • Engine: Coordinates the distribution and provisioning of code, services, and their dependencies (most frequently in the Control Space).  The Engine decides the where and how of what happens in the App Space.  Also, the Engine may be capable of coordinating with an Orchestration Layer or other automation tools outside of the Control Space to provide new/additional resources to either/both the Control Space and the App Space.
  • Monitor: Looks at the state of the App Space and the Control Space, signaling other Control Space components to resolve conflicts.  The component most likely to resolve conflicts would be the Engine or a specialized component designed purely for conflict resolution.
  • Notes: These components/functions may be grouped differently based on the specific PaaS implementation.  Also, some of the functionality or components could be put into an API, split into sub-components, or even externalized through/in a Client or Client-side API.
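
As a toy illustration of how some of these components might fit together, consider a deployment passing through the Code Processor, Engine, and Executor.  The function names and the flow are my own assumptions, not a prescribed design:

```python
def code_processor(code):
    """Inspect code and its dependencies before handing off to the Engine."""
    assert "app" in code, "missing app entry point"
    return {"code": code, "deps": ["runtime"]}

def executor(artifact):
    """Bootstrap the app, providing compute/memory resources to the App Space."""
    return f"running {artifact['code']['app']}"

def engine(artifact, executors):
    """Decide the where and how: pick an Executor and delegate bootstrapping."""
    target = executors[0]          # trivial placement decision for illustration
    return target(artifact)

# Control Space flow: Code Processor -> Engine -> Executor.
status = engine(code_processor({"app": "web"}), [executor])
print(status)
```

A real PaaS would add the Coordination Network between these steps and a Monitor watching the resulting state, but the handoff order is the essential shape.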

PaaS Elements:

PaaS Elements are abstractions on top of different layers of resources.  Most PaaS Element abstractions are done through Service based or Service centric abstractions.  This is not the only way of creating/doing an abstraction, but it is one of the most flexible ways.  PaaS Elements are broken up into three primary types which are defined below:

Please note there will be an in-depth post on PaaS Elements following this blog post.

  • Primitives : The Core Building Blocks of Resources
  • Sophisticates : Composites / Combinations / Extensions of Building Blocks (Primitives)
  • Definitives : Instantiations of Primitives and Sophisticates either directly in use or wrapped in Services/APIs

All PaaS Elements have the ability to provide or interact through a pass-through interface or abstraction to any other PaaS Element Type, all provided by the Control Space.  Elements are combined to create an Application (App) in the App Space.  The most interesting twist to the model is that the Control Space is made up entirely of PaaS Elements as well.  There is a great deal more to be covered on this topic, both here on this blog and out in the community.  Please provide feedback and questions in the comments section of this post or live on Twitter (@mccrory).

I would like to give a special thanks to Shlomo Swidler (@shlomoswidler) and Derek Collison (@derekcollison) for providing valuable feedback in several draft iterations of this post.

Defying Data Gravity

2 Apr

How to Defy Data Gravity

Since I changed companies, I have been incredibly busy as of late, and my blog has had the appearance of neglect.  At a minimum, I was trying to do a post or two per week.  The tempo will be changing soon to move closer to that…

As a first taste of what will be coming in a couple of weeks I thought I would talk a bit about something I have been thinking a great deal about.

Is it possible to defy Data Gravity?

First a quick review of Data Gravity:

Data Gravity is a theory around which data has mass.  As data (mass) accumulates, it begins to have gravity.  This Data Gravity pulls services and applications closer to the data.  This attraction (gravitational force) is caused by the need for services and applications to have higher bandwidth and/or lower latency access to the data.

Defying Data Gravity, how?

After considering how this might be possible, I believe that the following strategies/approaches could make it feasible to come close to Defying Data Gravity.

All of the bullets below could be leveraged to assist in defying Data Gravity; however, they all have both pros and cons.  The strengths of some of the patterns and technologies can be weaknesses of others, which is why they are often combined in highly available and scalable solutions.

All of the patterns below provide an abstraction or transformation of some type to either the data or the network:

  • Load Balancing : Abstracts Clients from Services, Systems, and Networks from each other
  • CDNs : Abstract Data from its root source to Network Edges
  • Queueing (Messaging or otherwise) : Abstracts System and Network Latency
  • Proxying : Abstracts Systems from Services (and vice versa)
  • Caching : Abstracts Data Latency
  • Replication : Abstracts the Single Source of Data (Multiplies the Data, i.e. Geo-Rep or Clustering)
  • Statelessness : Abstracts Logic from Data Dependencies
  • Sessionless : Abstracts the Client
  • Compression (Data/Indexing/MapReduce) : Abstracts (Reduces) the Data Size
  • Eventual Consistency : Abstracts Transactional Consistency (Reduces chances of running into Speed of Light problems i.e. Locking)
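
To make one of these patterns concrete, here is a minimal read-through cache illustrating how Caching abstracts data latency.  The 50 ms "remote source" is simulated; names and timings are my own illustration:

```python
import time

cache = {}

def slow_source(key):
    """Stand-in for a cross-region fetch with real network latency."""
    time.sleep(0.05)
    return f"value-for-{key}"

def read_through(key):
    """Serve from the local cache; fall back to the remote source once."""
    if key not in cache:
        cache[key] = slow_source(key)
    return cache[key]

read_through("user:42")    # pays the remote latency exactly once
read_through("user:42")    # subsequent reads are local and fast
```

This is the sense in which the data only *appears* local: the gravity well is still there, but the application stops feeling it on every read, at the usual cost of staleness.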

So to make this work, we have to fake the location and presence of the data to make our services and applications appear to have all of the data beneath them locally.  While this isn’t a perfect answer, it does give the ability to move less of the data around and still give reasonable performance.  Using the above patterns allows for the movement of an Application and potentially the services and data it relies on from one place to another – potentially having the effect of Defying Data Gravity.  It is important to realize that the stronger the gravitational pull and the Service Energy around the data, the less effective any of these methods will be.

Why is Defying Data Gravity so hard?

The speed of light is the answer.  You can only shuffle data around so quickly; even using the fastest networks, you are still bound by distance, bandwidth, and latency.  All of these are bound by time, which brings us back to the speed of light.  You can only transfer so much data across the distance of your network so quickly (in a perfect world, the speed of light becomes the limitation).
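
The speed-of-light argument is easy to quantify with back-of-the-envelope numbers.  The distance and link speed below are illustrative choices of mine:

```python
# Light travels roughly 300,000 km/s in vacuum, i.e. ~300 km per millisecond
# (real fiber is ~30% slower still, so these are optimistic lower bounds).
C_KM_PER_MS = 300_000 / 1000

def min_round_trip_ms(distance_km):
    """Physical lower bound on round-trip latency, ignoring all equipment."""
    return 2 * distance_km / C_KM_PER_MS

def transfer_hours(terabytes, gbps):
    """Hours to move a dataset at a sustained line rate."""
    bits = terabytes * 8 * 1e12
    return bits / (gbps * 1e9) / 3600

# A ~5,600 km path (roughly New York to London): >37 ms round trip, period.
# 100 TB over a sustained 10 Gbps link: more than 22 hours of pure transfer.
print(min_round_trip_ms(5600), transfer_hours(100, 10))
```

No protocol or hardware improvement gets under these floors; they are why large data sets effectively pin the applications that depend on them.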

The many methods explained here are simply a pathway to portability; without standard services, platforms, and the like, even with these patterns it becomes impossible to move an Application, Service, or Workload outside of the boundaries of its present location.

A Final Note…

There are two ways to truly Defy Data Gravity (neither of which is very practical):

  • Store all of your Data locally with each user and make them responsible for their Data.

  • If you want to move, be willing to accept downtime (this could be minutes to months) and simply store off all of your data and ship it somewhere else.  This method would work no matter how large the data set, as long as you don't care about being down.