AWS Migration Considerations: Financial Considerations

AWS Migration Considerations: Part 6 (8-part series). Posted 03 May 2021

Welcome to the sixth article in our AWS Migration Considerations Series. You can find the start of the series here.

One of the biggest driving factors in a cloud migration is cost saving, and many organisations build a Business Case that must demonstrate a financial benefit. However, the effort of the migration itself is an overhead that must be weighed against that benefit over a 3- or 5-year period.

The On-Premises Cost Complexity

The existing co-lo/data centre often has a set of costs that are never truly accounted for, nor attributed to each workload. For example, a typical data centre deployment will have routing and switching gear deployed to connect the hardware together; the annual maintenance fees on this hardware are often a hidden cost. A SAN that gets replaced after five years is the cost of standing still: a maintenance overhead that does not move your organisation forward, yet deferring that maintenance is a risk.

On-premises costs generally trend up over time for the same product; inflation gets applied (and accepted) against many of these costs.

There are also generational changes that happen. Tape drives get replaced with newer ones, which write a new format of tape but cannot read media from more than a generation or so back. If backups must remain restorable for longer than those generational cycles, say seven years, then multiple tape drives need to stay in active operation to cater for potential restores from old backup media.

The Double-Bubble Costs

Any migration from old infrastructure or service to new, such as a SAN or server migration, is never cost-free. These refresh projects involve project teams and data centre deployments that all consume dollars before any benefit is realised.

The overlap in expenditure is often called the double-bubble: you are paying for the new infrastructure while not yet actually using it. That overlap can run from days to months. The go-live swap-over may involve downtime, a redeployment, or a massive data-copying effort, and all the while the costs of the existing infrastructure remain as it consumes power, maintenance effort, and rack space.

Often the prompt for a replacement is purely time: vendor support fees increase as the equipment ages, and after the fifth year these fees are often more expensive than replacing the associated hardware.

If we look at a fictitious total cost of operation of a SAN, its replacement equivalent, and the project team and other costs to implement the swap-over, then over time the costs look like the graph below.

[Graph: example cloud migration costs]

The project team started a period before the hardware costs were incurred: planning, selecting, and ordering equipment. Then the new SAN costs kick in, even though it is not yet live: it is delivered, racked and stacked, powered, burnt in, under initial maintenance, failure-tested, and so on. Next comes some period of data migration, in one of several patterns. And lastly, the project team decommissions, un-racks, and disposes of the old SAN. The duration and magnitude of the double-bubble may vary, but all those costs have essentially got you back to where you started, with no significant competitive advantage for the spend.

If we were using Cloud-based services, the cost would only be incurred as we start consuming services, not all up-front.

[Graph: example SAN migration costs]

We also see that cloud costs are more closely matched to the workload’s requirements. SANs are often replaced when at 75% or 85% capacity (given the lead time and the growth rate of storage consumption), with a target run-time utilisation of only around 40%–50%. Cloud, however, is very happy to run hot at 85% utilisation, as the time to provision more is typically near-instantaneous and, in some cases, automated (see RDS storage autoscaling, new in 2019). And because the cloud provider manages the physical layer, you are not replacing major operational components that affect or provide service to your entire estate all at once: it is fine-grained. With no large replacement, there is no large project team to run it; these activities become a Business As Usual (BAU) task for an operations team.
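
As an illustration, here is a minimal sketch using boto3 (the AWS SDK for Python) of enabling that storage autoscaling on an existing RDS instance; the instance identifier and storage ceiling are hypothetical values for the example.

    import boto3

    rds = boto3.client("rds")

    # Setting MaxAllocatedStorage above the instance's current allocated
    # storage enables RDS storage autoscaling: RDS grows the volume
    # automatically as it fills, up to this ceiling, with no downtime.
    rds.modify_db_instance(
        DBInstanceIdentifier="example-db",  # hypothetical instance name
        MaxAllocatedStorage=1000,           # ceiling in GiB
        ApplyImmediately=True,
    )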

Costs Over Time

On premises, we see that costs trend upwards over time: power, co-lo space, and wages all track inflation. Yet many costs in cloud have trended downwards: bandwidth, storage, and compute.

In 2013, the cost of bandwidth from some of the largest Telcos in Australia was in the order of dollars per gigabyte transferred. AWS launched its Sydney region at US$0.19/GB at the top tier for data egress, and today (January 2021) that top price sits at US$0.114/GB, a reduction of 40%. The price drops happened automatically and immediately for all customers, with no action required; the cheaper price simply came into effect. No migration project, no renumbering of IP addresses, just cheaper.

The same is true for object storage in S3: costs decreasing over time with no customer action required.

Similar is seen with compute, though with some minor customer effort. Not only does the cost of existing compute sometimes reduce over time, but newer instance families launch with a lower raw cost and increased performance, with the caveat that the customer may need to update the virtual machine's operating system and migrate to the newer instance family over time.
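
As a sketch of that effort, assuming a Nitro-compatible operating system image and using a hypothetical instance ID and target type, moving an EC2 instance to a newer family with boto3 looks roughly like this:

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # hypothetical

    # The instance type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Move to a newer generation (e.g. m4 -> m5) for a lower hourly cost and
    # better performance; the OS needs ENA and NVMe drivers for Nitro families.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m5.large"},
    )
    ec2.start_instances(InstanceIds=[instance_id])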

This is also true of block storage: gp3 SSD volumes, launched in 2020, are 20% cheaper than the existing gp2 SSD volumes, but again, customer action is required to adopt the new volume type.
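
The adoption step is small. With Elastic Volumes, the change can be made in place while the volume remains in use; a minimal boto3 sketch, with a hypothetical volume ID:

    import boto3

    ec2 = boto3.client("ec2")

    # Migrate a gp2 volume to gp3 in place; gp3 includes a baseline of
    # 3000 IOPS and 125 MiB/s throughput regardless of volume size.
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # hypothetical
        VolumeType="gp3",
    )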

Cloud Cost Intangibles

One of the hardest benefits to account for is the efficiency improvement of using Cloud, done right, with well-trained people and a supporting company culture. This is where the history of large capital-investment projects comes into conflict with business agility and small, incremental changes implemented by teams.

Overestimating requirements was a natural phenomenon in on-premises deployments. The amount of SAN storage purchased had to last at least the next three years, so you would start paying for it all up front on day one.

The amount of compute deployed had to meet your busiest day, plus any surprises beyond that. And while over-egging an estimate so that it runs at 20% utilisation means there is plenty of headroom, it also means 80% wastage and overspend.

Key ingredients in this pattern were the long lead times to correct under-provisioning, and the cost review and sign-off procedures required within organisations.

With modern DevOps teams taking full lifecycle responsibility for their workloads and data, we see a growing trend of also giving those teams the cost controls and mechanisms to provision resources in real time, within limits; one lightweight guardrail is sketched below. Those teams understand the required infrastructure better than a procurement team, and with some intelligent KPIs, a team can be enthused to keep cost-effectiveness a continual focus.
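
One common guardrail is an AWS Budget with an alert threshold, so a team can provision freely while spend stays visible. A minimal boto3 sketch, where the account ID, budget name, amount, and email address are all placeholders:

    import boto3

    budgets = boto3.client("budgets")

    # A monthly cost budget that emails the team once actual spend
    # passes 80% of the limit.
    budgets.create_budget(
        AccountId="111111111111",  # placeholder account ID
        Budget={
            "BudgetName": "team-alpha-monthly",
            "BudgetLimit": {"Amount": "500", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": "team-alpha@example.com"}
                ],
            }
        ],
    )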

Managed versus Self-Managed Components

One such consideration is the mix of self-managed versus managed solutions: take a database, for example. The managed database offerings from AWS do come with a slight cost uplift for the same compute and storage, even for the free, open-source-based engines.

However, the ability to uniformly administer a larger fleet with less effort reduces administrator (staff) overhead costs. The reduction in effort spent configuring and monitoring replication and snapshots is often worth the slight cost uplift alone.
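
For a sense of what that reduction looks like in practice, here is a minimal boto3 sketch, with a hypothetical instance identifier, that turns on cross-AZ replication and automated snapshots for an RDS instance: two capabilities a self-managed fleet would otherwise have to script and monitor by hand.

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="example-db",  # hypothetical
        MultiAZ=True,                 # synchronous standby in a second AZ
        BackupRetentionPeriod=7,      # automated daily snapshots, kept 7 days
        ApplyImmediately=False,       # apply during the maintenance window
    )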

This pattern is repeated throughout a cloud deployment: the more managed services you can leverage, the simpler the problem of maintaining a workload becomes. It is worth paying attention to the residual management that still exists; these services do not replace 100% of what your operations teams currently do, but they do make significant inroads.

Staffing Ratios

Looking at staffing overheads, it is worth considering the team structures in the organisation. A DBA who can administer five databases of a particular type (e.g., all Oracle), coupled with a systems administrator who configures the server fleet, a storage administrator who provides the SAN storage, and a network administrator who meshes it all together, can often be replaced by a single DevOps engineer who can administer many times that number of RDS databases.

With that one person able to self-service reliably, and with their deep understanding of the workload, they should be empowered to use all means at their disposal to ensure the workload functions optimally.


Modis has been an AWS Consulting Partner since 2013. You can learn more about our AWS Practice and services here.
