
Cloud computing: the triple fallacy

Submitted by nk on Sat, 2011-09-03 11:52

I have tried more than once to raise my voice against the overwhelming hype around cloud computing, especially when it is sold to startups. I just found a non-tech article that makes a good excuse to write the post I have had in my head for some time. I have read the following reasoning many times:

  1. You need to care about scaling from day one.
  2. There is an easy way to scale.
  3. This way is the cloud.

Each of these is wrong.

First, about scaling. As this Forbes article says, startups die because of premature scaling. By the time a startup finds its real business, it will have pivoted enough times that a complete rewrite is necessary anyway.

Second, there is no easy way to scale. There are best practices to apply, but scaling is a matter of compromises which are unique to each site. For example, it takes a few minutes for your ad to appear in Craigslist search after you submit it. But then again, the site handles an enormous number of searches and does so absurdly fast.

Third, the cloud is not some magic way to scale. It solves exactly one problem really well: deploying more hardware. Whether your application benefits from more hardware is a whole different problem -- and not a particularly easy one to solve. The cloud provider might also offer other services, which are typically so-so (here's some SQS critique posted yesterday, just as an example). The cloud also creates its own problems, many times over, especially around I/O: virtualized I/O is unpredictable at best and horribly slow at worst.
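One way to see this variance for yourself is to time repeated synced writes and compare the spread. A rough sketch (the file name, write count, and write size are arbitrary choices for illustration):

```python
import os
import time

def timed_synced_writes(path="io_probe.tmp", n=20, size=1 << 20):
    """Time `n` writes of `size` bytes, fsync()ing each one so the timing
    reflects the disk (or the hypervisor's idea of it), not the page cache."""
    samples = []
    buf = b"\0" * size
    with open(path, "wb") as f:
        for _ in range(n):
            start = time.time()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())
            samples.append(time.time() - start)
    os.remove(path)
    return samples

samples = timed_synced_writes()
# On virtualized storage the max/min ratio is often dramatic; on local
# disks it tends to be much tighter.
spread = max(samples) / max(min(samples), 1e-9)
```

Run it a few times on a dedicated box and on a cloud instance and compare: the absolute numbers matter less than how much they jump around.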

And it's not just Amazon, even though they are the biggest, so my examples were about them. Read this thread for experience with another service, but also for this comment: "Dedicated is so much cheaper/more powerful than typical cloud machines that you can have a surplus of machines ready for your bidding and still pay less than the cloud +end up with way more processing power."

Note that if you have an extremely spiky workload, then the cloud makes sense. This post is about how it does not for most websites. When you need 30000 cores for seven hours, it's a whole different story.
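The arithmetic behind this is simple enough to sketch. All prices below are made-up placeholders, not real quotes; plug in actual numbers from your providers:

```python
# Back-of-the-envelope break-even between a dedicated box and on-demand cloud.
# Both prices are assumptions for illustration only.

DEDICATED_MONTHLY = 200.0   # one dedicated server, flat monthly price (assumed)
CLOUD_HOURLY = 0.50         # comparable cloud instance, per hour (assumed)

def cloud_monthly_cost(hours_used):
    """Cost of running one cloud instance for `hours_used` hours in a month."""
    return CLOUD_HOURLY * hours_used

def break_even_hours():
    """Hours per month above which the dedicated box becomes cheaper."""
    return DEDICATED_MONTHLY / CLOUD_HOURLY

# A 24/7 site (~730 hours/month) pays the cloud premium every hour...
always_on = cloud_monthly_cost(730)
# ...while a 7-hour burst is exactly where per-hour billing shines.
burst = cloud_monthly_cost(7)
```

With these placeholder prices the break-even is 400 hours a month: an always-on website is well past it, a seven-hour batch job is nowhere near it.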


Submitted by Anonymous on Sat, 2011-09-03 12:30.

Great to see this post. I agree very much on the compromises part.

Submitted by Anonymous on Sat, 2011-09-03 12:38.

It's cathartic to have a rant (bad experience with Amazon?), but as with all services it depends on the provider.

I tried which was very cheap, but the I/O was horribly slow. I've found Rackspace to be impressively fast both in I/O and network, and of course no downtime so far.

Submitted by Anonymous on Sat, 2011-09-03 15:34.

I don't really see this as a rant against the cloud so much as a point that scaling out doesn't necessarily mean looking solely at a cloud solution, and that having a cloud solution doesn't necessarily mean the scaling problem is solved. You should look at the available options and see what fits best.


Submitted by Rick Vugteveen on Sun, 2011-09-04 13:30.

I think we need to remember that cloud computing really means utility computing, with "utility" defined as the ability to spin up new hardware via an API and be billed hourly for usage. There are players such as SoftLayer and NewServers that sell bare metal servers by the hour. Considering the benefits of bare metal hardware, I'm quite surprised that Amazon doesn't already offer this service themselves. They'd have an immediate advantage over competitors due to the ecosystem they've created.

Submitted by Anonymous on Mon, 2011-09-05 18:03.

Amazon does. Some instance types are single-tenant instances on bare metal. That still doesn't solve the I/O problem, but it's really one server = one VPS.

Jakub Suchy

Submitted by Anonymous on Sun, 2011-09-04 13:35.

I had to read this post twice, but it is spot on. Many people look at technology in general as a magic bullet and think that sprinkling some fairy dust will just "make it scale". The question people really have to start asking is "What is the most effective way to scale my application, given its unique requirements?"

The cloud done incorrectly can end up costing more money and yielding less scalability. However, when done correctly it can save a lot of money over non-cloud solutions, which has been proven many times in the real world. The trick is investing in a company or people who understand the cloud, your software, and the overall vision and direction of the application.

The one thing I hope people take away from this article is that the cloud is a tool like any other technology and can help you scale up and down in a cost-effective manner if done correctly. However, it is not a blanket solution and can be costly if you don't fully understand the problems associated with cloud computing.


Submitted by Anonymous on Sun, 2011-09-04 19:40.

Most companies that are big enough to need scalability can tell you their needs in terms of bandwidth and CPU. They don't have a problem that needs elasticity. So virtualization is not such a great idea, because it's all over the place in terms of I/O, regardless of the provider. These companies mostly need bare metal servers with a certain amount of bandwidth. Much of the optimization is custom and specifically related to the code running on their site.

Submitted by Anonymous on Mon, 2011-09-05 17:27.

That's a pretty generic statement that is incorrect in many instances. How would you have handled Al Jazeera with an unexpected 2500% traffic increase? Or the United/Continental merger, where traffic was off the charts for a week (20M uniques per day) and then dropped off the table? Or the Grammys, with a huge spike of activity around the event? Bare metal effectively priced itself out of the equation, and the I/O issues were overcome in those intense instances. Large companies get slashdotted, e-commerce sites need to scale up during Christmas, media campaigns yield quick bursts of traffic, etc. This happens all the time and is where the cloud cuts costs dramatically if done correctly.

Submitted by Anonymous on Mon, 2011-09-05 17:53.

There are a few different cases, and not every case needs a "virtualize everything!" kitchen sink attached to it... as that can be expensive.

One is getting unexpected traffic, in which case you need to figure out whether you want to spend a ton of money to be able to meet a one-time demand. If that's profitable, you may want elasticity in the equation, at its premium and with all the problems that virtualization creates. Often it's irrelevant and unnecessary, as the interest quickly dies down, and while there is interest it will not affect the bottom line at all.

In case of expected spikes there are other solutions outside of virtualization, such as SoftLayer and NewServers. You could also build the scale into your existing architecture at similar price points, to provide it in house.

You could also apply a hybrid approach that allows you to spin up instances and offload demand, then spin them down and go back to your normal levels of usage on bare metal. Cloud pricing is excessive compared to other solutions -- and it's very profitable for the vendors offering it... and for the consultants touting its benefits! For many websites that pricing matters; for others it may not.
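The hybrid approach boils down to one decision: serve the baseline from hardware you own, and rent only for the overflow. A minimal sketch, with both capacity figures assumed for illustration:

```python
# Sketch of the hybrid idea: bare metal covers the baseline, cloud instances
# cover only the load above it. Capacities below are illustrative assumptions.

import math

BARE_METAL_CAPACITY = 5000      # requests/sec the owned servers handle (assumed)
CLOUD_INSTANCE_CAPACITY = 500   # requests/sec one rented instance handles (assumed)

def cloud_instances_needed(current_load):
    """How many cloud instances to spin up for load above the baseline."""
    overflow = current_load - BARE_METAL_CAPACITY
    if overflow <= 0:
        return 0  # normal traffic: bare metal alone, no hourly bill at all
    return math.ceil(overflow / CLOUD_INSTANCE_CAPACITY)
```

Most of the time the answer is zero and you pay only the flat dedicated price; during a spike to, say, 7600 requests/sec you would rent six instances and release them afterwards.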

I don't like the hype...

Submitted by Anonymous on Mon, 2011-09-05 20:37.

The Grammys spike was served off physical machines, not the cloud.

Source: at 35:50


Submitted by Anonymous on Wed, 2011-09-07 19:48.

Cloud-based server solutions have their place. I personally use Rackspace Cloudservers for the following:

  • Cloud-based storage of my daily database backups, as opposed to FTP downloads. Much easier to set up, manage, and get to at a moment's notice than traditional off-site backups. Bandwidth is damn fast too, and it's cheaper than Amazon S3 for what I'm doing.
  • Proof of concept servers. I had to spin up a Win2K8 server to test a configuration on, then had to test it on CentOS 5 and Ubuntu 10.04LTS shortly after. Took 30 minutes total time. Try that on bare-metal boxes. It's great for quick tests and prototypes.
  • Small websites that need to scale for predictable bursts of traffic. I have an event website I'm working on now that is slowly ramping up to full use in November, then will drop off about a month later to basic "maintenance" loads by the admins and users. Without having to mess with a server migration on bare-metal solutions, I scaled it up to more memory + CPU + disk space about three months before the event, and will scale it back to a lower (and cheaper for my client) configuration a month after the event. No contract. No additional migration costs. No brainer.
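The daily-backup scheme in the first bullet is easy to sketch: name each dump by date and prune old copies. Here `upload` would be whatever your provider's SDK offers (a Cloud Files or S3 client, say); the naming and pruning logic is provider-neutral:

```python
# Dated-backup bookkeeping: one object per day, keep the last KEEP_DAYS.
# The actual upload call is provider-specific and not shown here.

from datetime import date, timedelta

KEEP_DAYS = 14

def backup_name(day):
    """Object name for a given day's dump; ISO dates sort chronologically."""
    return "db-%s.sql.gz" % day.isoformat()

def prune(existing_names, today, keep_days=KEEP_DAYS):
    """Return the backup object names older than `keep_days`, ready to delete."""
    cutoff = backup_name(today - timedelta(days=keep_days))
    # Lexicographic comparison works because the date is zero-padded ISO format.
    return [name for name in existing_names if name < cutoff]
```

Because the date sits inside the object name, the cloud container needs no metadata at all; a nightly cron job uploads today's dump and deletes whatever `prune` returns.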

I'll agree that using a cloud solution simply for its "2400% scalability" is nonsense, as you do wind up doing things like load balancing, multi-server installations, and site-code tweaking for high-traffic sites (or those anticipated to become so). However, I know it has some great uses, which appear to be completely overlooked.

Don't always need a nailgun, bandsaw or drill press to build a table either - but damn they are nice to use if available... ;)

Submitted by Anonymous on Thu, 2011-10-27 01:16.

Sites should also probably start with performance enhancements in software, such as Boost and APC.

Most sites can be sped up so much they'll never need more hardware.