Rackspace ends unmanaged cloud service

Rackspace has thrown in the towel on unmanaged cloud services. Feeling pressure from the big boys (Amazon, Google and Microsoft) and from the much smaller but VC-funded VPS providers (Digital Ocean and Linode, to name but two), it has fallen back to managed-only services for new accounts.

With the VPS providers steadily driving the price of memory down to $10/GB/month and Amazon also inexorably cutting prices, Rackspace has been squeezed in the middle and is clearly looking to differentiate around its ‘fanatical support’ mantra, which starts at an additional $50/month on a $20 server. Whether this will work is an open question. Cost-sensitive users are typically not looking for support, just a low price, and larger organizations using AWS et al. either do it themselves or use a cloud management provider such as RightScale.

OpEx versus CapEx: the issues

Writing in CIO magazine, Bernard Golden outlines some of the concepts that need to be understood when performing an OpEx versus CapEx calculation for IT infrastructure. For example, no-commitment OpEx (such as the classic Amazon AWS pricing model) should always cost more per service hour, given that there is a cost to a no-commitment relationship¹ that must be borne by the service provider. He uses the car-rental business as an analogy; it may not be the best one, but it makes the point.
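To make the trade-off concrete, here is a minimal sketch of that calculation. Every figure is invented for the arithmetic (none are actual AWS or IBM prices): the no-commitment rate wins at low utilization, and the committed rate wins as utilization rises.

```python
# Illustrative comparison of no-commitment (on-demand) hourly pricing with a
# committed (reserved) rate plus an upfront fee. All figures are assumed for
# the example; they are not real provider prices.

HOURS_PER_MONTH = 730

on_demand_rate = 0.10        # $/hour, no commitment (assumed)
reserved_rate = 0.06         # $/hour after committing (assumed)
reserved_upfront = 300.00    # one-off fee for an assumed 12-month commitment

def monthly_cost(rate, hours, upfront=0.0, term_months=12):
    """Usage cost plus the upfront fee amortized over the commitment term."""
    return rate * hours + upfront / term_months

for utilization in (0.25, 0.50, 0.75, 1.00):
    hours = HOURS_PER_MONTH * utilization
    od = monthly_cost(on_demand_rate, hours)
    rsv = monthly_cost(reserved_rate, hours, reserved_upfront)
    cheaper = "on-demand" if od < rsv else "reserved"
    print(f"{utilization:4.0%} utilization: on-demand ${od:7.2f}, reserved ${rsv:7.2f} -> {cheaper}")
```

At around a quarter of the hours in a month the no-commitment rate is cheaper; run the workload flat out and the committed rate wins, which is the premium Golden is describing.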

Another question is that of utilization. Forrester analyst James Staten coined the term “Down and Off”, an idea somewhat analogous to switching off the lights in an empty room. Prior to the cloud, the argument goes, “Down and Off” was a) too hard to do, and b) backed by little economic imperative to overcome the challenges of implementing it, since the cost of computing was wrapped up in CapEx that had already been accounted for.

The difficulty in making use of Down and Off is what economists call “friction”, and one of the benefits of a highly automated cloud computing model is the elimination of barriers to reducing unwanted operational overhead.
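As a toy illustration of why that matters once the friction is gone, the sketch below compares an always-on environment with one switched off outside an assumed 10-hour, 5-day working week; the hourly rate and instance count are likewise assumptions.

```python
# Rough "Down and Off" sketch: what an environment costs running 24x7 versus
# being switched off outside business hours. All figures are illustrative.

hourly_rate = 0.10              # $/hour per instance (assumed)
instances = 20                  # e.g. a dev/test environment (assumed)
always_on_hours = 24 * 7        # hours per week, left running
down_and_off_hours = 10 * 5     # hours per week, business hours only

always_on = instances * hourly_rate * always_on_hours
down_and_off = instances * hourly_rate * down_and_off_hours

print(f"Always on:    ${always_on:8.2f}/week")
print(f"Down and Off: ${down_and_off:8.2f}/week")
print(f"Saving:       {1 - down_and_off / always_on:.0%}")
```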

As such costs change in response to technical innovation, Golden points out that

… input assumptions to financial analyses will change as IT organizations begin to re-evaluate application resource consumption models. Many application designs will move toward a continuous operation of a certain base level of resource, with additional resources added and subtracted in response to changing usage. The end result will be that the tipping point calculation is likely to shift toward an asset operation model rather than an asset ownership one.
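A rough sketch of that tipping-point shift, with every figure invented for illustration: owning enough capacity for peak load is compared with renting a steady base and adding elastic capacity only when the peak arrives.

```python
# Ownership sized for peak versus a rented base plus elastic burst.
# All figures below are assumptions for illustration only.

HOURS_PER_MONTH = 730

peak_servers = 100                    # servers needed at peak (assumed)
base_servers = 30                     # steady round-the-clock baseline (assumed)
peak_hours_per_month = 120            # hours per month the peak is needed (assumed)

owned_cost_per_server_month = 250.0   # amortized CapEx + power + admin (assumed)
rented_rate = 0.45                    # $/server-hour (assumed)

# Ownership model: buy for the peak and pay for it whether it is used or not.
ownership = peak_servers * owned_cost_per_server_month

# Operation model: rent the base continuously, add the remainder only at peak.
operation = (base_servers * rented_rate * HOURS_PER_MONTH
             + (peak_servers - base_servers) * rented_rate * peak_hours_per_month)

print(f"Own for peak:        ${ownership:10,.2f}/month")
print(f"Rent base + elastic: ${operation:10,.2f}/month")
```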


1. Both Amazon and IBM, amongst others, offer reduced hourly rates to customers who sign up for a fixed-length commitment period.

Amazon and ‘Enterprise-class’ computing

The recent Amazon outage has created some heated discussion as to whether Amazon’s services are enterprise-ready or not. Much of the discussion seems to miss the point. For example, saying that Amazon is not enterprise-class is like saying an IBM System x server is not enterprise-class: not very helpful and not very meaningful.

Amazon is a provider of compute and storage, like the aforementioned server. Give that server RAID direct-attached storage or dual-homing to a SAN, power from two UPSs, and a mirror image of itself in another data center to synchronize with, and, lo and behold, enterprise-class computing!

This can all be achieved on Amazon by using different ‘Availability Zones’ in more than one Region, together with the appropriate software. And of course there is an associated price.
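The arithmetic behind paying that price is simple to sketch; the availability figure below is an assumption for illustration, not a published SLA.

```python
# Back-of-the-envelope availability math for the multi-Availability-Zone
# argument: one deployment mirrored across two zones/regions, each assumed
# (not quoted from any SLA) to be up 99.5% of the time and to fail
# independently, which real outages do not always respect.

single_zone = 0.995
mirrored = 1 - (1 - single_zone) ** 2   # down only when both copies are down

hours_per_year = 24 * 365
print(f"Single zone: {single_zone:.4%}, ~{(1 - single_zone) * hours_per_year:.1f} h down/year")
print(f"Mirrored:    {mirrored:.4%}, ~{(1 - mirrored) * hours_per_year:.2f} h down/year")
```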

The reality is that the majority of Amazon’s clients are startups (many in the social networking space) that are willing to take the risk (or don’t comprehend it) in return for scalability, agility and above all the right price. Another significant group of clients are enterprises in search of cheap, agile compute for problems requiring mass horizontal scalability, but not persistence.

The really fascinating question behind this outage is the economic one: what risk/cost trade-off are companies willing to tolerate for Information Technology?

Countless small enterprises that make heavy use of IT don’t have diesel backup and rely on their electrical utility to provide adequate uptime… sans SLA, I might add. This is exactly the calculation that anyone using Amazon and its ilk is making, whether they are aware of it or not.
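Put as a sketch, with every figure assumed purely for illustration, that calculation is an expected-loss comparison:

```python
# The implicit risk/cost calculation: expected annual cost of outages versus
# the extra spend needed to mitigate them. All figures are assumptions.

outage_probability_per_year = 0.02   # chance of a serious outage in a year (assumed)
outage_duration_hours = 8            # assumed length of such an outage
revenue_loss_per_hour = 2_000.0      # assumed cost of being down

expected_outage_cost = (outage_probability_per_year
                        * outage_duration_hours
                        * revenue_loss_per_hour)

multi_region_premium = 6_000.0       # assumed extra annual cost of a mirrored setup

print(f"Expected outage cost: ${expected_outage_cost:,.2f}/year")
print(f"Redundancy premium:   ${multi_region_premium:,.2f}/year")
print("Worth mitigating" if multi_region_premium < expected_outage_cost
      else "Accept the risk")
```

With numbers like these the rational answer is to accept the risk, which is exactly what many of Amazon’s customers, knowingly or not, have done.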

The cloud is all about economics, as are public electrical utilities, and we are in an important phase in the ongoing maturation of Information Technology: a field whose economics have long been cloudy (pun intended), to say the least.

Microsoft is serious about the Cloud

A credible follow-on to Steve Ballmer’s bubbly “we’re all in” speech at the University of Washington in March has finally landed in the form of a very comprehensive white paper from Microsoft’s Corporate Strategy Group.

In typical Microsoft fashion, there’s no reference to the established competition; the implicit assumption is that the Cloud is all Microsoft’s to take, a replay, perhaps, of the PC and server revolution 25 years ago. Given the inexorable decimation of the midrange system and Unix server market by Windows over the last 15 years, not to mention the stranglehold on the corporate desktop, the Cloud’s current market leaders would do well to revisit the lessons of the past, in the hope that the mistakes of that era can be avoided this time around.

For a comprehensive treatment, see Cloud Computing Journal’s assessment of Microsoft’s white paper. Not only is it an analysis of Microsoft’s take on the true value proposition of Cloud Computing, but it also reveals, more clearly than ever before, the implications for Microsoft’s existing businesses and their presumed future strategy.