I. Introduction to cloud computing

When you hear about the cloud, cloud computing or cloud storage, rest assured: nothing is happening in thin air!

Simply put, when something is in the cloud, it means it is stored on servers accessed via the internet rather than "on premises". Cloud hosting offers on-demand computing resources – everything from applications and software to databases – running on those servers, which are very often located in data centres.

A good way to understand the cloud is through its five essential characteristics:

  • On-demand self-service: Users can access cloud resources (such as processing power, storage and network) when needed via an interface without requiring human interaction with the service provider.

  • Broad network access: Those cloud computing resources can be accessed via the network through standard mechanisms and platforms such as mobile phones, tablets, laptops and workstations.

  • Resource pooling: This is the key to scalable, cost-efficient services. Computing resources are dynamically assigned and reassigned according to demand, serving multiple consumers from the same pool.

  • Rapid elasticity: Users can access more resources when they need them and scale back when they don’t, because resources are elastically provisioned and released.

  • Measured service: Users only pay for what they use or reserve as they go – if you don’t use it, you don’t pay. Resource usage is monitored, measured and reported transparently based on utilisation, like any other utility (a small billing sketch follows this list).
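
To make "measured service" concrete, here is a minimal billing sketch in Python. The resource names and hourly rates are invented purely for illustration and don’t correspond to any real provider’s pricing.

    # Hypothetical hourly rates in dollars – not any real provider’s pricing.
    HOURLY_RATES = {"small-vm": 0.05, "large-vm": 0.20}

    def bill(usage_hours):
        """Charge only for the metered hours of each resource."""
        return sum(HOURLY_RATES[resource] * hours
                   for resource, hours in usage_hours.items())

    # Scale up with a large VM for 30 busy hours, then release it:
    # you pay for 720 small-VM hours and 30 large-VM hours, nothing more.
    print(bill({"small-vm": 720, "large-vm": 30}))  # 42.0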

Cloud computing is technology "as a service": you use remote resources on demand over the open internet, you can scale up and down as your needs change, and you pay for what you use. This revolutionary model has changed how we access computing services, helping organisations become more agile in responding to changes in the market.

The evolution of cloud computing

Cloud computing resulted from the evolution of computing and storage technology. The name cloud computing was inspired by the cloud symbol that’s often used to represent the internet in flowcharts and diagrams.

With cloud computing, the location of the service provider, the hardware and the underlying operating system are usually irrelevant to the user.

We started hearing about the cloud in the early 2000s, but the notion of "computing as a service" has been around since the 1950s, when Unisys (with its UNIVAC computer), International Business Machines Corporation (IBM) and other companies started producing huge, expensive computers with high processing power for major corporations and government research laboratories. Given the cost of buying, operating and maintaining mainframes, organisations and institutions couldn’t afford a mainframe for each user, and resource pooling, or "time sharing", started taking shape.

Companies rented time on a mainframe. In 1959 the IBM 1401 computer rented for $8,000 per month (early IBM machines were almost always leased rather than sold), and in 1964 the largest IBM S/360 computer cost several million dollars.

Using "dumb terminals", whose only purpose was to enable access to the mainframes, multiple users could access the same data storage layer and CPU power from any terminal.

Virtual machines

In the 1970s, IBM released an operating system that allowed multiple virtual systems, or "virtual machines" (VMs), to be set up on a single mainframe, taking the shared access of a mainframe to the next level. Multiple distinct computing environments could function on the same physical hardware.

Example

To explain what a virtual machine is, we will provide an example you might have come across: running incompatible software. A few years ago, before the explosion of apps, software designed for Windows, such as Microsoft Office, wouldn’t work on a Mac. If you wanted to use those applications on a Mac, you could use Apple Boot Camp to dedicate part of your hard drive to installing Windows in order to run Office. You would effectively have two computers in one: one running Windows and one running macOS. However, only one operating system could run at a time, so you needed to restart your Mac to switch between macOS and Windows. There is also virtualisation software that lets you run both operating systems in parallel without rebooting.

With virtual machines, it’s like running multiple computers within a computer. Each virtual machine can be set up with the operating system its user requires (such as Microsoft Windows, macOS or Linux) and operates as if it had its own resources (memory, CPU, hard drives and networking) – even though these are shared. This is called virtualisation, and it has been a tremendous catalyst for many important developments in communications and computing. It is also the main enabling technology for cloud computing.

The most important aspect is isolation: what happens in one virtual machine doesn’t affect the others. For example, if one virtual machine is compromised (infected by a virus or malware), the others are not affected.
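
As a rough illustration of this isolation, the following Python sketch – a toy model only, not how a real hypervisor works – carves one host’s physical memory into virtual machines and shows that stopping one VM leaves the others untouched. All names and sizes are invented.

    class Host:
        """A toy host whose physical memory is shared out among VMs."""

        def __init__(self, memory_gb):
            self.free_memory_gb = memory_gb
            self.vms = []

        def create_vm(self, name, os, memory_gb):
            # Each VM gets its own slice of the shared physical memory.
            if memory_gb > self.free_memory_gb:
                raise RuntimeError("not enough physical memory left")
            self.free_memory_gb -= memory_gb
            vm = {"name": name, "os": os, "memory_gb": memory_gb, "running": True}
            self.vms.append(vm)
            return vm

    host = Host(memory_gb=16)
    linux_vm = host.create_vm("build-box", "Linux", 8)
    windows_vm = host.create_vm("office-box", "Windows", 4)

    linux_vm["running"] = False      # one VM crashes or is compromised...
    print(windows_vm["running"])     # ...the other carries on: True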

Virtualising servers

Until relatively recently, physical hardware was quite expensive. As the internet became more accessible and hardware costs fell, more users could afford to purchase their own dedicated servers. To make hardware costs more viable, servers were also virtualised, using the same techniques as the virtual machine operating systems described above.

The problem was now different: one server might be good enough for basic functions, but not powerful enough to provide the resources a particularly demanding task requires. This pushed a shift from splitting up expensive servers so they could be shared to combining affordable servers into larger pools – and that’s how "cloud virtualisation" was born.

Note

A piece of software called a hypervisor, installed across various physical servers, allows their resources to be virtualised and assigned as if they all belonged to a single physical server. Technologists used terms like "utility computing" and "cloud computing" when referring to this virtualisation. In these cloud computing environments, growing "the cloud" was simple: technologists could add a new physical server to the computing room and configure it to become part of the system.
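
The pooling idea can be sketched in a few lines of Python – again a toy model with invented names and numbers, not real hypervisor behaviour: virtual machines are placed on whichever physical server still has capacity, and growing "the cloud" amounts to appending one more server to the pool.

    # A pool of physical servers, each with some spare CPU capacity.
    servers = [{"name": "rack-1", "free_cpus": 8},
               {"name": "rack-2", "free_cpus": 8}]

    def place_vm(vm_name, cpus):
        """First-fit placement: use the first server with enough spare CPUs."""
        for server in servers:
            if server["free_cpus"] >= cpus:
                server["free_cpus"] -= cpus
                return f"{vm_name} -> {server['name']}"
        raise RuntimeError("pool exhausted – add another physical server")

    print(place_vm("web", 6))    # web -> rack-1
    print(place_vm("db", 6))     # db -> rack-2 (rack-1 has only 2 CPUs left)

    # Growing "the cloud": configure one more server into the pool.
    servers.append({"name": "rack-3", "free_cpus": 8})
    print(place_vm("cache", 6))  # cache -> rack-3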

As technologies and hypervisors matured, companies wanted to make the cloud’s benefits available to users who didn’t have their own resources. It is easy for users to book and cancel cloud services, and it’s just as simple for the companies providing them to fulfil those orders: provisioning is almost instantaneous.

The pay-per-use model enables institutions, companies and individual developers to pay for computing resources "as and when they use them, just like units of electricity". This is the key to modern-day cloud computing.

Next section
II. How is the cloud built and how does it work?