Up in the clouds
Wide Area Communications in the UK owns its own servers, which are co-located in two different datacenters. In the US, though, sister company W A Communications instead leases servers from companies such as 1and1. We’ve recently been looking at an alternative, which promises redundancy and rapid scaling, namely Amazon’s EC2 (Elastic Compute Cloud).
EC2 is built around virtualisation technologies such as Xen, and it’s becoming extremely popular with companies that need variable, on-demand computing power. Once you’ve registered for the service, you simply create as many server instances as you need – they come in different capacities, which determine the amount of RAM and effective processor power you’re leasing – and within minutes (typically under two) those instances are up, running and available to you. Once you’ve finished your processing job, just shut the instances down, and Amazon will charge you only for the time your instances were “live”, plus an extra charge for data transferred to and from those running instances if that data isn’t held on Amazon S3, the company’s own Simple Storage Service; think of this as a set of huge, distributed disk drives.
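A rough way to think about that billing model is instance-hours plus outbound data transfer. The sketch below estimates a monthly bill; note that the rates are illustrative assumptions of my own, not Amazon’s published prices, which vary by instance size.

```python
# Back-of-the-envelope EC2 cost estimate. The rates below are purely
# illustrative assumptions, not Amazon's actual published prices.
HOURLY_RATE = 0.10        # assumed $ per instance-hour for a small instance
TRANSFER_RATE = 0.17      # assumed $ per GB of data transferred

def estimate_cost(instances, hours, transfer_gb):
    """Estimate a bill: instance-hours plus data transfer."""
    compute = instances * hours * HOURLY_RATE
    transfer = transfer_gb * TRANSFER_RATE
    return round(compute + transfer, 2)

# Two small instances running flat out for a 30-day month (720 hours),
# moving 50GB of data in and out of EC2:
print(estimate_cost(instances=2, hours=720, transfer_gb=50))  # → 152.5
```

The point the arithmetic makes is that a constant full-time load racks up every hour of the month, whereas a short burst of heavy processing – the workload EC2 was originally pitched at – costs only the hours actually used.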
When it was first set up, EC2 appeared to be extremely well suited to tasks requiring a lot of computing power over a short time, but it didn’t look so good for website hosting, which presents a more or less constant long-term load. Recently, however, Amazon has announced some new features for EC2 that make it a rather more compelling platform for hosting web applications. These include static IP addresses and “availability zones” – distinct geographical locations you can select when creating your server instances – meaning your application shouldn’t be vulnerable to downtime even if one of the Amazon datacenters were to experience problems. The company has also announced a limited beta of a new feature that will essentially emulate a storage-area network, but with redundancy built in. Once that’s available to everyone, EC2 and its associated services really will become a viable solution for web-application hosting, and indeed for people who just want to try out a Linux box without tying up one of their own machines. The pricing for EC2 is definitely comparable with the cost of running a dedicated server, and the ability to add new instances at a moment’s notice makes it a very compelling option.
W A Communications is about to launch a new web application that we’re hoping will take off and grow rapidly, which means we may need to scale up quickly to multiple web and database servers. Although we could start with a single, traditional server and deal with scaling issues if and when they arise, we’ve decided instead to consider scaling right from the start of the project, and to deploy on EC2 from the word go. That way, adding, say, a new web server should take only a matter of minutes, rather than waiting at least a day for a new physical server to be commissioned (and having to commit to a one- or two-year leasing contract for it).
So we signed up for EC2 and dutifully ploughed through its tutorial document. The problem is that in its basic form EC2 is most definitely aimed at people with a lot of command-line savvy, and while that’s pretty much a description of me, sometimes even I hanker for the convenience of a graphical user interface, especially when I’m testing a service and want to start up, shut down and manage several instances at once.