Most people know why they want to move to the cloud, or at the very least why they want to start taking steps that way, so this blog won't be discussing that subject (maybe one for the future).
The larger question for most organisations is: what will it really cost us? Everyone has heard the horror stories of bill shock that have forced some organisations to retreat from the cloud. That's a real shame, as one poor experience leaves them missing out on all of the great benefits they should have been enjoying.
The way we “sell” our cloud offerings at vBridge is quite different to how the Hyperscalers such as Azure, Amazon and Google sell theirs. I am certainly not going to tell you that we are right, and they are wrong – you only need to look at their scale and revenues to see that’s an argument that cannot be made, and frankly, the Hyperscalers have great products.
What we often see when working with customers is that the lowest price does not always equal the lowest cost. There is a plethora of cost considerations when looking at a move to the cloud that need to be factored in.
Let’s take a look at some of these.
Migration of services
This is all about how you get to the cloud from where you are today. As an industry we often talk about the "Six R's" when considering a move – search for these and you will find a heap of detailed information on each one – but the below is a summary:
Rehost – Your classic “lift and shift” operation – move the servers from on premises to IaaS.
Rearchitect – Pull your applications apart, redesign them for cloud native environments.
Refactor – Similar to rearchitect, but more about carving the application up to replace components to make use of cloud native functions such as PaaS.
Rebuild – Take your application functions and rebuild entirely on a cloud platform such as Azure.
Replace – The classic example is replacing MS Exchange with the SaaS-based Exchange Online.
Retire – Make the call to remove the application(s) from operation entirely.
Fundamentally this is a "time to value" discussion. Any way you shake it, a rehost will almost always be the most cost-effective option for an initial migration. At vBridge, for example, we run a fixed-cost "per server" migration. This is fast, effective and the least risky of the migration options above.
The Economics of compatibility
Here in New Zealand, it's a fair generalisation to say that most organisations are now well entrenched in virtualisation, and that VMware is the most common hypervisor in use across corporates.
Rehosting your servers from an on-premises VMware (or indeed Hyper-V) platform to an IaaS solution built upon VMware is a low-risk option – we can pretty much guarantee that if it's working on your platform today, it will work on ours, only it will perform better.
A lack of compatibility challenges equals low cost, low risk and simple effective migrations that just work. The time and cost investment up front is significantly reduced when you are running compatible systems. Testing is easy, as are proof of concepts.
Stepping out of this construct will add significant cost to migration planning and execution.
Pricing Models – aka Sausage Economics
Sausage economics, you ask? It's a phrase coined by Hamish Roy, one of our founders, but it sums up some of the Hyperscalers' pricing models nicely.
In simple terms it goes like this. You are having a BBQ and have invited some friends. There are 5 of you in total and you want one sausage each; you know that 2 of you like pork and the other 3 like beef. But at the supermarket you can only buy single-flavour packets of 6 or 8 sausages. So you end up having to buy more than you need, and some people have to accept a less-than-desirable flavour – that is how "Instance Based Computing" works.
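The sausage arithmetic can be sketched in a few lines. This is purely illustrative: the pack size and guest counts come from the BBQ example above, and the two purchasing options show the trade-off between over-buying and flavour compromise.

```python
import math

# Sketch of the BBQ example: 5 guests, 2 want pork, 3 want beef,
# but sausages only come in single-flavour packets of 6.
pack_size = 6
wanted = {"pork": 2, "beef": 3}

# Option 1: honour everyone's flavour -- one packet per flavour.
flavour_packs = {f: math.ceil(n / pack_size) for f, n in wanted.items()}
bought_option1 = sum(flavour_packs.values()) * pack_size  # 12 sausages for 5 people

# Option 2: buy a single packet and let some guests compromise on flavour.
bought_option2 = pack_size  # 6 sausages, but only one flavour

print(f"Need {sum(wanted.values())} sausages; "
      f"right flavours costs {bought_option1}, one packet costs {bought_option2}")
```

Either way you pay for more sausages than you eat – which is the point of the analogy.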
What is instance-based computing? It's the model that Hyperscalers use to sell their services, and I'm not here to tell you that it's wrong – just how it works, and importantly how it compares to a model such as ours at vBridge. Sometimes an example is easiest.
Today you have a Server that is specified as follows that you want to move to the cloud:
- Windows Server with SQL Standard.
- 12 CPU
- 24 GB RAM
- 32 GB OS Drive (General Purpose)
- 500 GB SQL Data Drive (High Performance – 10k+ IOPS)
- 300 GB SQL Logs Drives (High Performance – 10k+ IOPS)
To meet these specifications and performance metrics using a Hyperscaler model, you need to specify your server as follows (this is using an Azure example):
- 16 CPU
- 32 GB RAM
- 32 GB OS Drive
- 8192 GB SQL Data Drive
- 8192 GB SQL Logs Drive
Just to be clear, that's 4 more CPUs, 8 GB more RAM and roughly 15 TB more disk than you actually need.
You are effectively forced to increase the specification beyond what you need – in simple terms you are paying for resource that you won't use – and in this example that equates to 47% more cost.
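A quick sketch makes the over-provisioning concrete. The figures below are taken directly from the worked example above (remembering that the oversized data and log drives are driven by the 10k+ IOPS requirement, not by capacity):

```python
# Resources the workload actually needs vs. what the instance forces you to buy.
needed = {"cpu": 12, "ram_gb": 24, "disk_gb": 32 + 500 + 300}
instance = {"cpu": 16, "ram_gb": 32, "disk_gb": 32 + 8192 + 8192}

for resource in needed:
    excess = instance[resource] - needed[resource]
    print(f"{resource}: need {needed[resource]}, get {instance[resource]} "
          f"(paying for {excess} you won't use)")
```

Running this shows the 4 spare CPUs, 8 GB of spare RAM and 15,584 GB (around 15 TB) of spare disk called out above.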
Contrast this with a flexible VM pricing model where you can specify exactly what you need, and scale those resources up and down as often as you like. Pay for only what you need – or, to put it another way, only buy the 5 sausages in the flavours you wanted in the first place.
Instance Based Computing
Instance Based Computing goes beyond "sausage economics", however. With so many instance types on offer, it can be confusing to figure out which one is right for you.
Taking the example above and looking at the compute needs only (CPU and Memory), there are more than 10 different instance types that, on the face of it, would suit your needs – so what’s the difference between them all?
Price is the obvious one, but what do you get when you pay more for what appears to be the same CPU and memory resources? Ultimately, once you dig into the "under the hood" documentation for each instance, you start to uncover the differences – and there are many. The following are some of the limitations you may find on different instance types:
- Limits on network throughput.
- Limits on which disk types can be attached.
- Limits on disk performance (IOPS and Bandwidth).
- Limits on the number of disks which can be attached to a VM.
- Caps on CPU cycles.
- Complex metering which can quickly change the cost effectiveness.
- Not designed for production workloads.
- Not supported for specific applications, for example SQL.
- The underlying CPU & Memory technology.
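The limitations above mean that matching CPU and memory alone is not enough – you have to filter the catalogue against every constraint your workload has. The sketch below illustrates the idea; the instance names, prices and limits are invented for illustration and are not real Azure specifications.

```python
# Hypothetical catalogue: three instances with identical CPU and memory,
# but very different "under the hood" limits. All values are made up.
catalogue = [
    {"name": "type-a", "cpu": 16, "ram_gb": 32,
     "max_disk_iops": 25_000, "production_ready": True},
    {"name": "type-b", "cpu": 16, "ram_gb": 32,
     "max_disk_iops": 6_000, "production_ready": True},
    {"name": "type-c", "cpu": 16, "ram_gb": 32,
     "max_disk_iops": 25_000, "production_ready": False},  # e.g. CPU cycles capped
]

def suitable(inst, min_iops=10_000):
    """Does this instance meet the SQL workload's 10k+ IOPS requirement?"""
    return inst["production_ready"] and inst["max_disk_iops"] >= min_iops

matches = [i["name"] for i in catalogue if suitable(i)]
print(matches)  # only one of the three "identical" instances qualifies
```

In practice the filter has many more dimensions (network throughput, disk counts, application support, metering), which is exactly why this analysis demands a deep understanding of your workloads.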
So this creates a new challenge: having to really understand, in depth, how your applications run and how they consume resources today. For the majority of organisations, this is not a level of detail that is understood – nor should it need to be.
Although you can change the instance being used, it's not just a few clicks on your dashboard. It needs to be planned, there will be an outage, and there will be cost implications which can very quickly add up and blow your business case out of the water – bill shock!
This is again where a simple model such as we provide at vBridge really works. We just deliver you high performance compute infrastructure, and we keep upgrading it. We have a couple of options for disk, and you can mix and match these any way that you want to – we don’t impose limitations.
In the second part of this blog I will look at other factors including:
- Metering Costs
- Network Connectivity
- Backup and DR
- Operational Tools
I would like to sign this blog off with a great saying from our friends at INDE Technology: "Cloud is a way of operating, rather than a location". I think it's apt – you need to know how you want to run your IT operations and understand the outcomes you are after, then figure out how best to achieve this.
See you in part 2.