Cloud Computing Explained

Few terms these days are as misunderstood as ‘cloud’ and ‘cloud computing’. Virtually every IT product or mobile app has the ‘cloud’ moniker slapped across its description. But what does it really mean?

In fact, ‘cloud computing’ refers to two very different things. In general terms it means:

  1. Having computers in specialised buildings around the world that process and store data.

  2. Technologies that transform traditional server and infrastructure technology into a utility much like water, electricity and natural gas. A utility that scales up and down according to demand.

The first encompasses the original term ‘cloud’ - roughly referring to ‘online’, yet still very much subject to real-world regional legislation. This is something any user or organisation should consider carefully before engaging a cloud service, but it’s not what this article is about. We’ll focus on the second definition instead - cloud computing as a utility, not unlike that of the power or water company. To illustrate the terms cloud and cloud computing we’ll have a look at the traditional way of operating an IT infrastructure first.

Traditional IT infrastructures

The main technologies that led to the development of cloud computing are those of virtualization and containerization, without which cloud computing simply wouldn’t work.

In the old days - prior to virtualization - organisations would either deploy a physical server for every group of functions, or a separate one for each function if security or latency were of primary concern. In order to mitigate disruptions during daily operations, most servers would be deployed in redundant pairs - with one machine standing by in case the primary failed for whatever reason.

Needless to say this was both rather expensive and rather wasteful. While environmental concerns were less of an issue back in those days, company expenses certainly weren’t. A configuration like the one mentioned above required careful planning and budgeting - as well as engineers on standby to service the physical machines and to activate or de-activate them as the situation demanded.

When virtualization technologies came along (or rather: were ported from mainframes and supercomputers to regular servers and workstations) things improved considerably. Virtualization allowed computers to partition their resources (CPU, RAM, storage and so on) and hand them off to virtual computers - essentially allowing a single machine to operate as multiple computers, each with their own resources and operating system. While the technique had been around for a long time on mainframes and even on some types of regular servers, it wasn’t until hardware acceleration and hardware security enforcement features were added to CPUs that the technology saw mainstream acceptance.
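
As a rough illustration of that partitioning, here’s a minimal sketch - assuming the libvirt Python bindings and a local KVM/QEMU hypervisor, which is just one possible virtualization stack - that asks the hypervisor which virtual machines it’s currently running and what slice of the host’s resources each has been handed:

    # A minimal sketch: list running virtual machines and their allotted resources.
    # Assumes the libvirt Python bindings and a local KVM/QEMU hypervisor -
    # one possible virtualization stack among several.
    import libvirt

    conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
    for dom in conn.listAllDomains():
        if not dom.isActive():
            continue                             # skip defined-but-stopped machines
        # info() returns (state, max memory KiB, memory KiB, vCPUs, CPU time ns)
        state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPU(s), {mem_kib // 1024} MiB RAM")
    conn.close()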

While the technique was useful for running different operating systems on the same computer (including operating systems that didn’t cooperate with virtualization very well - such as Windows at the time), doing so was relatively demanding on the host’s resources.

Enter para-virtualization and containerization. In para-virtualization the guest operating systems are ‘aware’ that they are being virtualized and can thus help accelerate the process; in containerization there is no guest operating system at all - isolated groups of processes simply share the host’s kernel. Like virtualization, neither technique was exactly new, but both were new to the mainstream server and workstation markets.
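
To give a feel for how lightweight the container side of this is in practice, here’s a small sketch using the Docker SDK for Python (one container runtime and client among many) that starts an isolated process sharing the host’s kernel:

    # A small sketch of containerization in practice, assuming the Docker SDK
    # for Python (pip install docker) and a running Docker daemon.
    import docker

    client = docker.from_env()                   # talk to the local Docker daemon
    # Run a throwaway Alpine Linux container; it shares the host's kernel,
    # so it starts in a fraction of the time a full virtual machine would.
    output = client.containers.run("alpine:latest", "uname -a", remove=True)
    print(output.decode().strip())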

The beginnings of Cloud Computing

With a single server now able to count as many - and with wholly new avenues for achieving redundancy and load-balancing opened up - it wasn’t long before hosting and infrastructure providers saw an opportunity to capitalise on the concept. They started offering virtual servers as a service - with containerized versions being a little cheaper as they were essentially limited to Linux, and fully virtualized versions being more expensive as they could run any operating system the customer wanted. This offering eventually came to be known as Infrastructure as a Service, or IaaS.

IaaS: Infrastructure as a Service

IaaS is the most basic form of cloud computing - renting virtual servers. Providers take care of the underlying hardware, customers take care of everything else. In other words - little to no automation (at least not when it first came on the scene). Nowadays creating and managing virtual servers can be done through the provider’s API - but that possibility wasn’t around at the very beginning. This meant (and generally still means) that engineers need to manage servers manually - but costs and resource waste were at least considerably reduced in this new computing paradigm.
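
Talking to the provider’s API usually boils down to a handful of authenticated HTTP calls. The sketch below is deliberately generic - the endpoint, token and field names are made up for illustration and will differ for every real provider:

    # A deliberately generic sketch of provisioning a virtual server through
    # an IaaS provider's HTTP API. The endpoint, token and field names are
    # hypothetical - every real provider has its own API and SDK.
    import os
    import requests

    API = "https://api.example-cloud.test/v1"            # hypothetical endpoint
    headers = {"Authorization": f"Bearer {os.environ['EXAMPLE_CLOUD_TOKEN']}"}

    spec = {"name": "web-01", "region": "eu-west", "cpus": 2, "memory_gb": 4}
    resp = requests.post(f"{API}/servers", json=spec, headers=headers, timeout=30)
    resp.raise_for_status()
    print("created server:", resp.json())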

IaaS is still very much alive today - with every cloud provider having it on offer. It’s still the best way to build a custom infrastructure - by installing advanced management and containerization tools on top of virtual servers.

These days most IaaS offerings include a managed form of Kubernetes - which greatly simplifies and automates server management - but even then it’s not for the faint-hearted.
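
For the curious, the sketch below uses the official Kubernetes Python client (one of several ways to talk to a cluster) to ask a managed cluster which worker nodes the provider is currently running on your behalf; it assumes a kubeconfig for such a cluster is already present on the machine:

    # A brief sketch using the official Kubernetes Python client
    # (pip install kubernetes), assuming ~/.kube/config already points at a
    # managed cluster.
    from kubernetes import client, config

    config.load_kube_config()            # read credentials from ~/.kube/config
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        cpu = node.status.capacity["cpu"]
        mem = node.status.capacity["memory"]
        print(f"{node.metadata.name}: {cpu} CPU, {mem} memory")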

Managing servers is not something most people (i.e. programmers) particularly enjoy. Vendors therefore came up with a way to manage the virtual servers automatically by offering specialised platforms - a concept which came to be known as PaaS, or Platform as a Service.

PaaS: Platform as a Service

Platform as a Service was invented mainly as a service tailored to software developers. Using PaaS, developers can focus entirely on writing their application code without ever having to think about the underlying infrastructure - the servers, virtual or otherwise. The concept also greatly improves resource management and thus helps to reduce cost, because services are billed per second or per minute instead of as a flat-fee subscription.

These platforms offer databases and other useful services that can scale to infinity and beyond - without the customer ever having to worry about anything besides the application code; that is, until the bill is received.
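
In practice, ‘focusing entirely on application code’ tends to look something like the sketch below: a tiny web app (Flask is used here purely as an example framework) that reads its database location from an environment variable the platform is assumed to inject, so the developer never configures a server themselves.

    # A tiny sketch of the kind of code a PaaS runs for you. Flask is used
    # purely as an example framework; DATABASE_URL is assumed to be injected
    # by the platform alongside its managed database service.
    import os
    from flask import Flask

    app = Flask(__name__)
    DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")

    @app.route("/")
    def index():
        # The platform decides where, and how many copies of, this runs.
        return f"Hello from a PaaS-hosted app, talking to {DATABASE_URL!r}\n"

    if __name__ == "__main__":
        # Locally you run it yourself; on the platform this part is handled for you.
        app.run(port=int(os.environ.get("PORT", 8000)))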

PaaS isn’t all romantic moonlit walks holding hands with one’s best code, either - it can be (and usually is) quite limiting. Users of such platforms are more than a little encouraged to use platform-specific solutions, making it quite hard if not flat-out financially unfeasible to move to another platform later on. In other words - traditional PaaS is more often than not a vendor lock-in minefield.

Thanks to the advent of ‘container orchestration’ (a fancy term for automated server management) such as Kubernetes, and the concept of DevOps, many developers now opt to build on an IaaS solution rather than on PaaS (and rightly so).

Like IaaS, however, PaaS is still very much alive today - but it’s mainly used by software developers, not end-users. Implementing anything on a PaaS system requires considerable knowledge - something people active in other fields don’t really care to acquire, as they’re generally busy enough doing other things.

Enter SaaS - or Software as a Service: Cloud Computing for regular folks; the end users.

SaaS: Software as a Service

Software as a Service is what most (non-IT) people think of when they think of a cloud service: a software product that is entirely managed by a provider, who will generally offer several tiers or subscription plans to match expected usage.

SaaS offerings number in the hundreds if not thousands and include everything from accounting software to collaboration tools, groupware (email and calendaring) and beyond. Cloud storage providers such as Dropbox, Box, pCloud, Google Drive, Microsoft OneDrive, Baidu Yunpan and the like all fall into this category as well. This is no doubt the reason many people consider cloud computing and cloud storage to be synonymous.

This is quite incorrect however, as personal and organisational cloud storage offerings make up only a very small part of the total number of SaaS offerings.

SaaS can be quite ideal for organisations that don’t have a lot of technical aptitude in-house. No surprise then that pricing schemes can be quite steep and vendor lock-in is all but inevitable. After spending considerable time and effort adapting the application and the organisation to one another, few in an organisation will be jumping at the opportunity to migrate.

Some providers therefore offer a self-hosted or self-managed version alongside their SaaS offering. This allows customers more flexibility, but it also puts greater demands on the customer as infrastructure management will become their own responsibility.

On the whole, SaaS is believed to optimise resource management even more than PaaS, as vendors can tune their infrastructure to meet customer demand precisely. (The savings aren’t generally passed on to the customer though - the appeal is more one of convenience paired with a smaller environmental footprint.)

Here ends the story for end-users, but there’s more to be had for developers, managers and enthusiasts, namely: Functions as a Service - or FaaS.

FaaS: Functions as a Service

Functions as a Service - FaaS, or ‘serverless’ computing - is the latest entry in the *aaS family of computing paradigms. In a nutshell it means that functions are made available on demand - with the underlying hardware, virtual servers and software all falling under the vendor’s management responsibility.

Functions can in theory be anything - but are generally relatively straightforward operations, such as processing a contact form and forwarding it to the appropriate email address, grabbing the latest updates from an online service, and other tasks along those lines.
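
A contact-form handler really is about all the code there is to write. The sketch below follows the event-plus-context handler convention several FaaS providers use; the SMTP host and email addresses are placeholders, and on a real platform one would more likely call the vendor’s own mail service:

    # A sketch of a FaaS 'function': receive a submitted contact form as an
    # event and forward it by email. The handler(event, context) signature
    # follows a convention several FaaS providers use; the SMTP host and
    # addresses below are placeholders.
    import smtplib
    from email.message import EmailMessage

    def handler(event, context):
        form = event.get("form", {})             # e.g. the parsed POST body
        msg = EmailMessage()
        msg["Subject"] = f"Contact form from {form.get('name', 'unknown')}"
        msg["From"] = "noreply@example.test"
        msg["To"] = "sales@example.test"
        msg.set_content(form.get("message", ""))

        with smtplib.SMTP("smtp.example.test", 587) as server:
            server.starttls()
            server.send_message(msg)

        return {"statusCode": 200, "body": "thanks!"}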

Needless to say FaaS has ‘developers only’ written all over it, but it may actually be the closest thing to servers becoming a utility like water or electricity.

The Takeaway

Cloud computing encompasses a range of different services - but modern Infrastructure as a Service (IaaS) and Functions as a Service (FaaS) resemble our definition of a common utility the most.

Subscribing to a managed Kubernetes cluster (IaaS) - empowered to create, manage and decommission (virtual) servers on demand in order to accommodate its containers - is indeed very close to being a common utility. With every provider offering such a service it has become relatively simple to move between vendors and platforms by transferring the cluster configuration and its resources to another vendor. Given its status as an industry standard, Kubernetes makes Infrastructure as a Service a relatively vendor-agnostic solution.
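
As a rough sketch of what that portability can look like, the snippet below uses the official Kubernetes Python client to read a deployment from one cluster and recreate it on another. The kubeconfig context names, namespace and deployment name are placeholders - and in practice most teams simply re-apply their declarative YAML manifests rather than copy live objects:

    # A rough sketch of moving a workload between two Kubernetes clusters with
    # the official Python client. Context, namespace and deployment names are
    # placeholders for illustration only.
    from kubernetes import client, config

    src = client.AppsV1Api(config.new_client_from_config(context="old-vendor"))
    dst = client.AppsV1Api(config.new_client_from_config(context="new-vendor"))

    dep = src.read_namespaced_deployment(name="webshop", namespace="production")

    # Strip fields that belong to the old cluster before recreating the object.
    dep.metadata.resource_version = None
    dep.metadata.uid = None
    dep.metadata.creation_timestamp = None
    dep.status = None

    dst.create_namespaced_deployment(namespace="production", body=dep)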

Considering PaaS or SaaS a common utility, on the other hand, is essentially saying that electricity and water are vendor-specific and subject to conversion and re-engineering before they can be reconnected, operated or consumed elsewhere.

Functions as a Service platforms are perhaps even more akin to a common utility as they don’t (necessarily) prescribe how code should be structured or what it should do. They simply support running code - using the same criteria to give a go-ahead or buzz-off that compilers and interpreters do. FaaS is indeed more like running water than any other cloud computing paradigm before it - although the scope is somewhat limited compared to the other *aaS-es. Functions can perform fairly complex operations - but it’s not as though a full-blown software application can be considered a ‘function’. There are limitations. Luckily, modern IaaS solutions can make up for that deficiency. Kubernetes, for example, can be extended with FaaS technology quite easily, meaning IaaS platforms can now offer the best of both worlds: mostly managed and fully managed alike.

One last takeaway - perhaps the most important of all - is that while this article has treated cloud computing as something that lives online, it is also entirely possible to do all of these things ‘offline’, on company premises. While that might not make a lot of sense for Platform or Software as a Service - it certainly does for Infrastructure and Functions as a Service.

With these technologies organisations can rest assured that no hardware will be switched on unless it actually has a job to do - and that it will be switched off again after the job is done. Not only that, companies can invest in fleets of affordable, simplified servers instead of large and expensive ones. Something to consider for any organisation - whether they wish to save on operational and investment costs or wish to reduce their carbon footprint. In cloud computing these things are all very much connected.

Yours,

ZoneStruct