How a South African put Amazon in the cloud
Cloud Computing is defined on Wikipedia as an IT paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort.
Outside of IT departments, though, it is just a buzzword. Any time you want to sound informed, just drop in a reference to the cloud.
Wondering why a project did not work? Probably did not use the cloud as a solution. How can the government kick-start the economy? Use the cloud. What should your next marketing campaign focus on? How your product integrates into the cloud!
Buzzwords and hype aside, this is a significant development for global improvements to everything from storage to security and should make cheap and dumb devices smart.
Until the advent of cloud services, any device you bought or built needed to be sufficient for its purpose. A desktop computer needed enough memory, storage space and processing capacity to handle the toughest job it may be tasked with. Initially, that would mean that it was overpowered and expensive while rapidly becoming underpowered, requiring a replacement.
IT budgets tended to be about buying the best equipment you could afford, which was then written off in three years. Budgets did not always allow for the equipment needed, so you got slower machines or not enough of them. More often than not, budgets would not allow machines to be replaced just because they were written off, so a business had to operate at the speed of its slowest computers.
Cloud computing uses a variety of ways to change that. The most notable is virtualisation. You don’t buy an actual computer; you buy a box that can access a computer that can be configured to have as much or as little power as you need, and only while you want it. If you need to increase the power, you don’t need to worry about getting rid of the old machine. You simply press some buttons and it is done.
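To make the idea concrete, here is a minimal toy sketch in Python of renting a virtual machine, resizing it at the press of a button, and paying only for the time it runs. The class, sizes and hourly rates are all made up for illustration; this is not any real cloud provider's API.

```python
# Toy model of on-demand virtual machines: you rent capacity,
# resize it at will, and are billed only for the hours it runs.
# Names and prices are illustrative, not a real cloud API.

HOURLY_RATES = {"small": 1.0, "medium": 2.0, "large": 4.0}

class VirtualMachine:
    def __init__(self, size="small"):
        self.size = size
        self.cost = 0.0

    def run(self, hours):
        # Billing accrues only while the machine is actually in use.
        self.cost += hours * HOURLY_RATES[self.size]

    def resize(self, new_size):
        # "Press some buttons": no old hardware to dispose of.
        self.size = new_size

vm = VirtualMachine("small")
vm.run(10)          # 10 hours at the small rate
vm.resize("large")  # scale up when the workload grows
vm.run(5)           # 5 hours at the large rate
print(vm.cost)      # 10 * 1.0 + 5 * 4.0 = 30.0
```

The point of the sketch is the business model, not the technology: the machine you "own" is just an entry in a billing ledger, so upgrading it destroys nothing and costs nothing beyond the new rate.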
This is a good point at which to talk about Amazon’s experience. The company puts a premium on efficiency: it holds only what it can sell quickly, prices goods to just cover costs, and checks every process for maximum output. To run its rapidly growing retail operation, which needed robust websites to handle high volumes of sales while still crunching all the data to feed back into user recommendations and ensure goods were delivered as fast as possible, Amazon needed some very powerful machines.
Amazon’s IT bill is huge, but the company realised that it did not need all of that capacity all of the time, and moreover there were many others who would love access to those resources. A team of engineers including South African Chris Pinkham wondered if they could build resources that could be expanded and reduced as needed. Then Amazon would not only be able to use just what it needed, but could sell the spare capacity to others and offset the cost of running it all.
What business would not welcome the opportunity to have one of their most expensive investments paid for by others?
In 2004, work began in Cape Town to create the Elastic Compute Cloud, or EC2. It was launched in 2006 and allowed you to create a virtual computer that could be configured for your needs and would only cost you money while you used it. Amazon added to the services over the following years, eventually moving all of its own computing needs onto the platform. These elements together make up Amazon Web Services, or AWS, and in December 2017 Amazon launched an improved presence in South Africa to allow more local businesses to move into the cloud.
Pinkham was likely the first South African to connect to the internet within South Africa, and he set up one of the country’s first internet service providers. He has worked for Microsoft and, until recently, helped Twitter manage the half a billion tweets sent every day. He is currently helping the cloud-based financing company Jumo grow the financing sector in African economies.
One of the biggest challenges an online retailer faces is how to scale services on very busy days. Consider a big concert and 60 000 tickets going on sale. The website needs to handle more than 60 000 people trying to access the site and co-ordinate who has bought tickets and how many are left. To have that capacity for only a few days in the year is very expensive. Likewise, a retailer during a sale or Black Friday would struggle to keep its systems running.
Cloud computing allows more servers to come online as demand increases, and will continue to do so until the peak passes, reducing the number of servers again as demand falls. You only pay while they are being used, and your users have no idea you just survived the digital equivalent of a data tsunami.
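The scaling behaviour described above can be sketched as a simple simulation. The capacity figure and the demand curve below are invented for the ticket-sale example; real providers use far more sophisticated policies than this.

```python
# Toy autoscaler: bring servers online as demand rises toward a
# ticket-sale peak, and release them as the peak passes.
# The per-server capacity is an assumption for illustration.

REQUESTS_PER_SERVER = 10_000  # assumed load one server can handle

def servers_needed(demand):
    # Round up: enough servers to cover current demand, minimum one.
    return max(1, -(-demand // REQUESTS_PER_SERVER))

# Demand ramps up to a 60 000-buyer peak, then falls away again.
demand_curve = [2_000, 15_000, 60_000, 35_000, 4_000]
fleet = [servers_needed(d) for d in demand_curve]
print(fleet)  # [1, 2, 6, 4, 1]
```

The fleet grows to six servers only for the peak and shrinks back to one afterwards, which is exactly why paying per hour beats owning peak capacity that sits idle the rest of the year.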
Chris Pinkham, one of the creators of modern cloud computing platforms
Since the Amazon launch, other large web companies have added their own versions. Microsoft, Google, IBM and Oracle run some of the biggest.
It is worth noting the history of the term, as it could just as easily have been called galaxy computing or nebula computing (NASA’s version was called Nebula).
The term goes back to the 1960s, when IT architects drawing their networks would show elements outside the scope of their design inside a cloud-shaped bubble.
IBM pioneered big computing, having built some of the first machines. Its founding CEO Thomas Watson (after whom the Watson supercomputer is named) is often associated with a prediction about world demand for computers: "I think there is a world market for maybe five computers." It is held up as one of the worst predictions ever made, yet had he actually said it, it may well have turned out to be true.
A super powerful computer operated by Amazon, Microsoft, Google, Oracle and IBM.
Five computers to take care of global demand.
This article first appeared on 702 : How a South African put Amazon in the cloud