Several weeks ago I gave a talk called “Regarding Clouds, Mainframes, and Desktops… and Linux” at LinuxCon in Portland (video, slides). Since then I’ve reprised parts of the talk several times, including a couple of times for -only audiences. I’m going to put up a few blog entries that expand on some of the slides.
What does a cloud computing user want?
Applications: a reason to use the cloud
I’m somewhat embarrassed because when I first made up this list, this item wasn’t present. It should be here, and it should be first. People will use the cloud if they have a good reason to do so and can afford it. At a low level, the “application” could just be “load and run this software on that operating system in a machine with this much memory and that much disk.”
Higher up, though, people will want a reason to use that storage or take advantage of the programming platform, and that will be specific applications. We’re all familiar with the notions of email and calendars in the cloud, but other applications might be analytics or data mining, for example.
Resources: storage, processor, platform
I will want to use the cloud for resources that I don’t have or don’t want to use locally. This might be because I’m going to use a lot of temporary storage or need long-term, off-site storage. Similarly, I might need to run a job that needs a lot of extra processors temporarily, such as what-if modeling, Monte Carlo simulations, protein folding, or analysis of billions of bytes of data.
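To make the burst-compute case concrete, here is a minimal sketch of the kind of embarrassingly parallel job that benefits from renting extra processors: a Monte Carlo estimate of pi, where each task is independent and the partial results are simply averaged. The seeds, sample counts, and single-machine loop are illustrative; a real run would farm each task out to its own temporary cloud instance.

```python
import random

def estimate_pi(samples, seed):
    """Estimate pi by sampling random points in the unit square.

    Each call is independent of the others, which is what makes this
    kind of job easy to spread across many rented processors.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

# Pretend each seed is a separate task running on its own cloud instance;
# the only coordination needed is averaging the partial answers.
partials = [estimate_pi(100_000, seed) for seed in range(8)]
print(sum(partials) / len(partials))
```

Because the tasks share nothing, doubling the number of instances roughly halves the wall-clock time, which is exactly the "lots of processors, briefly" pattern described above.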
By platform I mean a programming language plus cloud-based services such as product catalog manipulation and billing systems. You combine these with new program logic to create applications that live naturally in the cloud. They are not local, because people will run these new applications across the Internet or in a Web browser.
APIs: the more standard the better
An API, or Application Programming Interface, is how you tell a piece of software what it should do for you from outside that software. For example, an online banking service could provide APIs so that software running on your laptop can authorize you to conduct transactions and then make account transfers.
APIs have been around for a very long time for local software applications, for example allowing one program to tell your spreadsheet to compute something. With web services, Service Oriented Architecture (SOA), and Software as a Service (SaaS), APIs moved to the network in new ways, often expressed in XML. Cloud computing inherits this. APIs can be very low level, such as making a request for storage, or high level, such as asking for a list of all customers that meet certain criteria.
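As a sketch of what such a network API call looks like from the client's side, the snippet below builds (but deliberately does not send) a low-level storage request. The endpoint URL, JSON fields, and token placeholder are all invented for illustration and do not belong to any real provider:

```python
import json
import urllib.request

# Hypothetical request: "give me 100 GB of mirrored storage."
# The URL, field names, and header values are invented for this example.
payload = json.dumps({"size_gb": 100, "redundancy": "mirrored"}).encode()
req = urllib.request.Request(
    "https://cloud.example.com/v1/storage",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",
    },
    method="POST",
)

# Inspect the request instead of sending it.
print(req.get_method(), req.full_url)
```

A high-level API call, such as "list all customers matching these criteria," would have the same shape: an HTTP verb, an endpoint, and a structured payload. That uniformity is part of why standardizing the payloads and endpoints matters so much.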
Standardization is important for interoperability, my next point.
Interoperability among clouds (may learn of this need later)
In all the excitement of a new technological movement, people can get locked into using a system by one provider.
The cloud you use today may not be the cloud you want or can afford to use later. The information you store in one cloud may need to be extracted and moved to one or more clouds run by other providers. You may decide to convert from a public cloud to a private cloud or a public-private hybrid. Do you want to completely rewrite your cloud applications to use different APIs? Creating and using open standards is one of the best ways of achieving interoperability.
It’s tricky to determine when you should standardize formats, protocols, and APIs because you don’t want to set things in stone before you have enough experience with them or a chance to innovate. Some vendors like to delay standardization so they have a chance to lock in users to their proprietary ways of doing things. Saying “I’ve standardized on a particular provider and the proprietary way they do things” is a very bad strategy.
In my opinion, the best way to deal with the standards issue is to aim for early common understandings, maximally re-use existing standards, do a gap analysis to see what’s missing, evolve older standards or create new ones to get what else you need, and have a common industry agreement focused on openness.
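While standards settle, one pragmatic defense against lock-in is to keep provider-specific code behind an interface of your own. The sketch below assumes nothing about any real provider; `CloudStorage`, `InMemoryStorage`, and `archive_report` are hypothetical names showing how application logic can stay provider-neutral, so switching clouds means writing one new adapter rather than rewriting the application:

```python
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """A provider-neutral storage interface (hypothetical, for illustration)."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(CloudStorage):
    """A stand-in adapter; a real one would call one provider's API."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def archive_report(store: CloudStorage, name: str, body: bytes):
    # Application logic depends only on the interface, not the provider.
    store.put("reports/" + name, body)

store = InMemoryStorage()
archive_report(store, "q3.txt", b"quarterly numbers")
print(store.get("reports/q3.txt"))
```

An open, commonly agreed API would make even the adapters unnecessary, which is the point of the standards effort described above.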
Also see: The Open Cloud Manifesto
Reduced capital expense
Running things in the cloud can mean that you need to own and maintain fewer hardware servers. This reduces your capital expenses and can shift them to operating expenses. It has the secondary effect that having less hardware locally means you will use less power, generate less heat, and can potentially pay less rent on the smaller datacenter footprint you will need.
A good, workable pricing scheme
Cloud providers and users are still working on the pricing plans that make the most sense to them. A common scheme is to pay a few pennies per hour for the processor, a few for the operating system, a few for the applications, then a few more for the storage you use. Add a few thousand of these pennies together and you have serious money (to paraphrase someone)!
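Those pennies are easy to total. The rates below are invented round numbers, not any provider's actual prices, but the arithmetic shows how per-hour cents become real monthly money:

```python
# Hypothetical per-instance-hour rates, in cents (invented for illustration).
cpu_rate, os_rate, app_rate = 2, 1, 3      # cents per instance-hour
storage_cents_per_gb_month = 15

instances = 10
hours = 24 * 30                            # a month of round-the-clock use
storage_gb = 500

compute_cents = (cpu_rate + os_rate + app_rate) * instances * hours
storage_cents = storage_cents_per_gb_month * storage_gb
total_dollars = (compute_cents + storage_cents) / 100

print(f"${total_dollars:,.2f} per month")  # a few pennies add up fast
```

Six cents an hour sounds trivial; across ten always-on instances plus storage it is hundreds of dollars a month, thousands a year.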
If you are planning to move work to the cloud, compare it with what it would cost you to run it locally. It might be cheaper to keep doing what you are doing, or to use cloud technology behind your firewall. That is, using a public cloud on the Internet may not be your least expensive option.
Before moving to a cloud, examine and analyze your usage patterns. Are applications run non-stop or only occasionally? How much information is moved into and out of cloud storage? If the cloud is more expensive, is it still better because of reduction in capital expenses or an improvement in the quality of service you receive and can pass on to your customers?
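A simple break-even calculation captures that analysis. All of the figures below are invented for illustration; the point is the shape of the comparison, not the numbers: a local server costs roughly the same whether busy or idle, while cloud charges scale with hours used.

```python
# Hypothetical numbers for a local-vs-cloud comparison (all invented).
server_price = 3000.0        # dollars of capital, amortized over its life
amortization_months = 36
power_and_space = 40.0       # dollars per month, regardless of use
cloud_rate = 0.50            # dollars per instance-hour

# The local server costs this much per month whether it runs 1 hour or 720.
local_monthly = server_price / amortization_months + power_and_space

# Below this many hours of use per month, paying by the hour wins.
break_even_hours = local_monthly / cloud_rate
hours_in_month = 24 * 30

print(f"local cost: ${local_monthly:.2f}/month")
print(f"cloud is cheaper below {break_even_hours:.0f} hours/month "
      f"({break_even_hours / hours_in_month:.0%} utilization)")
```

Occasional workloads sit well below the break-even line; always-on workloads sit well above it, which is why the usage-pattern questions above matter before any move.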
Quality of service, including availability, reliability, performance, security, and privacy
So you think you’ve decided to move to the cloud and you like the cost benefits you see. Make sure you ask yourself:
- Will you be able to run your cloud-based application or access your information whenever you want to? Do you even know when that is? Does this affect your pricing plan? Does the scheduled downtime for the service coincide with periods when you will need it to be available?
- Is your cloud based on proven technology? Will it crash in the middle of a long-running and important computation? If there is a disk crash at your provider, is your data protected? Does the cloud provider have a recent history of losing customer data? Will you be getting the reliability that you expect?
- Will your applications run fast enough in the cloud? Are you sharing resources so much that your work is getting starved? Is it taking too long to access or update your data? Are you getting the required performance for your money?
- Does the cloud provider have a history of security breaches? Can your provider offer proper isolation of your applications so that rogue processes cannot steal your information or update your system? Is your data secure in storage as well as when it is being queried or manipulated?
- Do your corporate or organizational regulations allow you to use a public cloud for the data you intend? Can employee or customer data be incorrectly shared across clouds? Will overflow from a private cloud to a public cloud violate your privacy policies?
Previously: “Who is the user for cloud computing?”