Cloud Solution – Cost Optimization and Capacity Planning

Building cost-aware cloud-based solutions is very different from the traditional IT approach. When developing applications for a traditional local data center, we used to allocate servers and hardware to meet the maximum capacity the application might ever require. Even if that maximum capacity was needed for only a few hours or days in a year, we still provisioned for it, because allocating hardware within a traditional data center was a tedious and time-consuming job.

With the evolution of the cloud, allocating hardware is just a click away. So applying the traditional IT approach of allocating maximum capacity in the cloud is not recommended and is not even feasible from a cost perspective: you will end up paying for idle and redundant resources most of the time, which makes the solution very expensive.

[Figure: Cloud consumption]

So there is a big difference between traditional IT and cloud-based resource allocation. In the traditional IT approach, you plan for the worst-case scenario; in the cloud-based approach, you plan only for the immediate requirement, and rely on on-demand scale-up to handle the worst case.

Now the question arises: how do you plan resources and monitor cost for cloud-based applications? To do that, consider the following points.

Right Sizing – Be Granular

While planning for capacity, be as granular as you can. Each cloud service comes in different sizes, with different costs and features. In some cases multiple small instances will suffice rather than one large instance, or vice versa. But how do you decide? Only by understanding your requirements and calculating the expected load on the system.

Suppose you are designing an IoT solution on the Azure platform and leveraging Azure IoT Hub for message ingress. IoT Hub comes in different flavors (Free / S1 / S2 / S3), each with its own pricing and limits on the number of messages supported per day. So the question is: which one to use, and what would the expected price be? For that you have to do a quick calculation to understand your message ingress load. Suppose you have half a million devices and each device sends one message per hour to the cloud. That works out to 500,000 × 24 = 12 million messages per day.

[Figure: IoT messages]

Now take this total number of messages and calculate how many units of S1, S2 or S3 would be required to support this 12M-messages-per-day load.

[Figure: IoT messages calculation]

The calculation shows that 2 instances of S2 will comfortably handle the 12M message load, and would also be the cheapest option. Each cloud service is unique, so identifying the right size for each service is the first step towards cost optimization.
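A rough script for this kind of sizing comparison could look like the sketch below. The per-unit daily message quotas and prices are illustrative figures consistent with the calculation above, not a substitute for the current Azure pricing page.

```python
# Rough sizing sketch: compare IoT Hub tiers for a given daily message load.
# NOTE: quotas and prices below are illustrative placeholders only --
# always check the current Azure pricing page for real numbers.
import math

DEVICES = 500_000
MESSAGES_PER_DEVICE_PER_DAY = 24                           # one message per hour
daily_messages = DEVICES * MESSAGES_PER_DEVICE_PER_DAY     # 12,000,000

# tier -> (messages supported per unit per day, assumed monthly price per unit)
tiers = {
    "S1": (400_000, 25.0),
    "S2": (6_000_000, 250.0),
    "S3": (300_000_000, 2_500.0),
}

for tier, (quota_per_unit, price_per_unit) in tiers.items():
    units = math.ceil(daily_messages / quota_per_unit)
    print(f"{tier}: {units} unit(s), approx ${units * price_per_unit:,.0f}/month")
```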

Make Consumption Cost One of Your Architecture's Quality Attributes

While architecting a solution, we consider the usual quality attributes, such as availability, scalability, security, performance, and usability. These apply equally to solutions designed for the cloud and for traditional data centers. But in the cloud, one more attribute plays a bigger role – cost of consumption. My suggestion is to treat cost of consumption as one of the architecture quality attributes and design the solution accordingly.

There can be scenarios where you may prefer to give priority to cost over another attribute such as performance. One example I personally came across: we were using the Azure Data Lake Analytics service for data analysis. Azure Data Lake Analytics is an on-demand analytics job service, where you pay only for the job's running time and nothing else. In the first design, we gave priority to performance and created a couple of parallel jobs, which ran against the same data source but performed different analytics. We achieved high performance, but the cost was high. Then we changed the approach and merged these jobs into one, which added a slight latency but resulted in almost 75% savings on cost.

Elasticity – Automate Scaling of Resources

The cloud brings the flexibility to allocate resources on demand whenever you need them, and to release them when you don't. As discussed earlier, allocate only what you need right now, but plan for automation to scale up when demand increases. Almost all cloud providers offer this elasticity.

There are different ways an application can be scaled; a minimal sketch of one reactive scaling decision follows below.
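The sketch below is only an illustration of the decision logic; the thresholds and the `target_instances` function are hypothetical, and in practice you would configure this through your cloud provider's built-in autoscale rules rather than hand-rolling a loop.

```python
# Minimal sketch of a reactive scale-out/scale-in decision based on CPU load.
# Thresholds and this function are hypothetical; real deployments would use
# the provider's built-in autoscale rules instead.

MIN_INSTANCES = 2
MAX_INSTANCES = 10

def target_instances(current: int, avg_cpu_percent: float) -> int:
    """Return the desired instance count for the observed average CPU."""
    if avg_cpu_percent > 75 and current < MAX_INSTANCES:
        return current + 1          # scale out under sustained load
    if avg_cpu_percent < 25 and current > MIN_INSTANCES:
        return current - 1          # scale in and stop paying for idle capacity
    return current

if __name__ == "__main__":
    print(target_instances(current=3, avg_cpu_percent=82))  # -> 4
    print(target_instances(current=3, avg_cpu_percent=15))  # -> 2
```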

Tie Cost with Business Returns

Depending upon the nature of the application, the cost of consumption should be justified by the business need and by the returns from that application. For example, suppose you are hosting an eCommerce application and have configured auto-scaling for it. If the user load increases and the application scales up, the cost of consumption will also increase. But if user load is increasing on an eCommerce application, direct profit should increase too, which should cover the increase in consumption cost.

If the increase in consumption cost is not proportional to the increase in business, then it's time to revisit the architecture and review each component and its cost in detail, i.e. going back to point #1 – be granular.
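One crude way to keep an eye on that proportionality is to track consumption cost per business transaction over time; the figures below are made up purely for illustration and would normally come from billing exports and order reports.

```python
# Illustrative check: is cloud spend growing faster than business volume?
# Figures are invented; in practice they come from billing exports and
# order/transaction reports.
months = [
    {"month": "Jan", "cloud_cost": 4_000, "orders": 20_000},
    {"month": "Feb", "cloud_cost": 6_500, "orders": 24_000},
    {"month": "Mar", "cloud_cost": 9_000, "orders": 26_000},
]

for m in months:
    cost_per_order = m["cloud_cost"] / m["orders"]
    print(f'{m["month"]}: ${cost_per_order:.3f} per order')
# A steadily rising cost-per-order is the signal to revisit the architecture.
```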

Continuous Monitoring

Unless you monitor, you will not even know how much you are consuming, and you may get a big surprise at the end of the month. Monitoring doesn't mean monitoring only at the application level; it has to be granular, i.e. at the level of each resource consumed within the application. The best way is to tag each resource and monitor its usage on a daily basis. Based on the monitoring data, analyze consumption and identify optimization options, then reconfigure your deployment and start monitoring again. It's a recurring cycle: monitor, analyze, reconfigure, and monitor again.
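A minimal sketch of that tag-level view is shown below, assuming you export daily usage records with resource tags; the field names are hypothetical and not tied to any specific billing export schema.

```python
# Aggregate daily consumption per resource tag from an exported usage report.
# The record fields are hypothetical; adapt them to your provider's
# cost/usage export format.
from collections import defaultdict

usage_records = [
    {"date": "2018-03-01", "tag": "iot-ingest", "cost": 18.40},
    {"date": "2018-03-01", "tag": "analytics-jobs", "cost": 42.10},
    {"date": "2018-03-02", "tag": "iot-ingest", "cost": 19.05},
    {"date": "2018-03-02", "tag": "analytics-jobs", "cost": 77.80},
]

daily_by_tag = defaultdict(float)
for rec in usage_records:
    daily_by_tag[(rec["date"], rec["tag"])] += rec["cost"]

for (date, tag), cost in sorted(daily_by_tag.items()):
    print(f"{date}  {tag:<16} ${cost:>8.2f}")
```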

[Figure: Monitoring cycle]

Purchasing Options

Each cloud provider is getting innovative with pricing and coming up with lucrative options, such as monthly commitment (on Azure) or reserved capacity (on AWS), along with the usual pay-as-you-go model. But can you commit to capacity on the very first day? The answer is no. Start with pay-as-you-go, observe or benchmark your application's consumption in production (or production-like) usage, and identify the minimum commitment you are comfortable with. Once you know your application's usage pattern, you can leverage the providers' commitment offerings and save a significant amount.
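Once you have a few months of observed consumption, the commitment decision becomes a simple comparison. The sketch below assumes a flat 30% discount on committed baseline capacity and invented monthly figures; real provider pricing and terms will differ.

```python
# Sketch: decide how much baseline capacity to commit to, given observed usage.
# The 30% discount and the monthly figures are illustrative assumptions.
observed_monthly_spend = [3_800, 4_100, 3_950, 4_300, 4_050, 4_200]  # pay-as-you-go

baseline = min(observed_monthly_spend)   # floor you are confident you will always use
commit_discount = 0.30                   # assumed discount for committed capacity

committed_cost = baseline * (1 - commit_discount)
on_demand_overflow = [max(0, m - baseline) for m in observed_monthly_spend]

blended = [committed_cost + extra for extra in on_demand_overflow]
print(f"Pure pay-as-you-go total : {sum(observed_monthly_spend):,}")
print(f"Baseline-commit blended  : {sum(blended):,.0f}")
```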

Architecture/Solution Should Be Portable

The cloud is evolving; new services are launched every day or every week. Gone are the days when you designed a solution once and expected it to run for the next 10 years uninterrupted. And it's not just new services – even for existing cloud services, new plans or SLAs may be offered. So your architecture needs to be agile and flexible enough to adopt new changes and leverage the benefits of new services. For example, if your architecture follows a microservices pattern, you can leverage serverless cloud computing such as Azure Functions or AWS Lambda.


In short, cost optimization for a cloud-based solution starts in the architecture phase itself and continues throughout its production life cycle. It's not a one-time activity but an ongoing one, where everyone contributes – architect, designer, developer, tester, and support engineer.


Software Estimation – Art or Science? – Part 2

In the previous post, we looked into different aspects of software estimation,

  • Types of Estimation
  • Estimation Process
  • Common Pitfalls
  • Role of an estimation tool and how it can benefit the organization in the long run

We will continue from there; in this post we will look into the benefits of capturing your organization's historical data and the kinds of analytics you can run over it, which will help you make informed decisions quickly and with confidence.

The first and foremost benefit of capturing historical data is that you can compare your estimate with it. People will argue that each project is different and so each estimate is different. But that doesn't mean you can't compare. Estimation starts with breaking scenarios down into smaller requirements and then estimating them through either a top-down or a bottom-up approach. Within an estimate, a lot of other environmental and technical factors also play a role, and these vary based on company/team structure, organizational capability, and so on. So comparing estimates is not just about comparing requirements one to one; it includes many other factors that can be very specific to your organization or team. You can always categorize projects, for example by industry or technology, and compare your estimate within that particular category – that narrows down the comparison and makes it more meaningful.

Coming to analytics, they can be of different types, but personally I like the following two:

  1. Project Size Vs Duration
  2. Identify Impossible Zone

Project Size and Duration

In this analysis, you compare the duration of the project against the size of the project. The unit of size can be standardized for your organization – it could simply be effort, or UCP (Use Case Points), or FP (Function Points), or any other unit based on your estimation methodology. Duration is straightforward: it's a time span, in hours, days, weeks, or months.

Size vs Duration is the simplest comparison, but it provides a lot of information, such as:

  1. The minimum and maximum range of deviation from your organization's productivity (size vs duration).
  2. How far your current estimate is from your organization's historical productivity.

[Figure: Compare with Historical Data (Duration vs Size)]
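As a sketch, the comparison against historical Size vs Duration data can be as simple as a least-squares fit; the project figures below are invented for illustration and would be replaced by your own project history.

```python
# Compare a new estimate against historical Size vs Duration productivity.
# Historical points are invented for illustration; use your own project data.
import statistics

# (size in use case points, duration in weeks) from past projects
history = [(120, 14), (200, 22), (80, 10), (300, 34), (150, 18)]

sizes = [s for s, _ in history]
durations = [d for _, d in history]

# simple least-squares fit: duration ~= slope * size + intercept
mean_s, mean_d = statistics.mean(sizes), statistics.mean(durations)
slope = sum((s - mean_s) * (d - mean_d) for s, d in history) / \
        sum((s - mean_s) ** 2 for s in sizes)
intercept = mean_d - slope * mean_s

new_size, new_duration_estimate = 180, 30
expected = slope * new_size + intercept
deviation = (new_duration_estimate - expected) / expected * 100
print(f"History suggests ~{expected:.1f} weeks; estimate deviates by {deviation:+.0f}%")
```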

Identify Impossible Zone

If you have been involved in any pre-sales, one of the most common requests from the sales team or the customer is to reduce the overall duration of the project by compensating with more resources. But the question is: how much can you reduce the duration? In project management terms, you would first create the project plan and identify the critical path, which gives you the minimum required duration. During pre-sales, however, you most probably don't have the luxury of building a detailed project plan and deriving the critical path.

So how do you identify the impossible zone?
Here, too, historical data and an estimation tool can play a big role. Based on historical data, the tool can predict the impossible zone – maybe not 100% accurately at first, but it's a good start. And the more data you feed the tool, the more accurate its predictions become.

[Figure: Area of the impossible]
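As a hedged sketch of how a tool might flag the impossible zone, the example below derives a nominal duration from historical productivity and applies the common rule of thumb that a schedule can rarely be compressed below roughly 75% of that nominal duration. Both the productivity figure and the compression floor are illustrative assumptions and should be tuned from your own data.

```python
# Sketch: flag requested durations that fall in the "impossible zone".
# Nominal duration comes from historical productivity; the 75% compression
# floor is a common rule of thumb, not a law -- tune it from your own data.

def nominal_duration_weeks(size_ucp: float, weeks_per_ucp: float = 0.11) -> float:
    """Duration the organization has historically needed for this size."""
    return size_ucp * weeks_per_ucp

def is_impossible(size_ucp: float, requested_weeks: float,
                  compression_floor: float = 0.75) -> bool:
    """True if the requested duration is below the plausible compression limit."""
    return requested_weeks < compression_floor * nominal_duration_weeks(size_ucp)

if __name__ == "__main__":
    print(is_impossible(size_ucp=300, requested_weeks=30))  # nominal ~33w -> False
    print(is_impossible(size_ucp=300, requested_weeks=20))  # below ~24.8w floor -> True
```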


Software Estimation – Is it Art or Science?

There is no definitive answer to this question; in reality it's a combination of both. Only when art and science are mixed together do you get accurate, realistic estimates. Let's go through this in detail in this article.

Before we drill down into the art and science pieces, let's quickly go through the different types of approaches, or methodologies, available for software estimation.

Types of Estimations

Top-Down Estimation

The top-down estimation approach starts from the project vision and goals and breaks them down into smaller, more granular requirements, modules, or features (different companies and teams use different names). Each of these modules is further broken down into work packets, which are estimated, planned, and then assigned to team members for development or execution.

  • Benefits of the top-down approach – major tasks are identified quickly, and details are defined at later stages.
  • Common top-down estimation techniques – Use Case Point Estimation, Function Point Estimation

Bottom-Up Estimation

The bottom-up estimation approach starts with defining the list of tasks required to achieve the customer's goal, and then groups these tasks together to form modules. This approach is more of a team activity than something done by a single person: the whole team contributes to defining the tasks and grouping them.

  • Benefits of the bottom-up approach – the result is a more detailed schedule, but it's also a time-consuming approach compared with top-down estimation techniques.
  • Common bottom-up estimation techniques – Wideband Delphi, WBS (Work Breakdown Structure)

Factors for Comparison                                Top-down       Bottom-up
Granularity of Estimates                              Low / Medium   High
Granularity of Requirements required                  Low            High
Cost of doing estimates / Estimates turnaround time   Low            High
Level of Assumptions made during estimation           Medium         Low

What’s the Estimation Process?

Let’s go through estimation process step by step,

[Figure: Software Estimation-1]

Any estimation starts with a set of requirements, and the quality of the requirements plays a major role in the quality of the estimates. If the requirements are of low quality or very high level, more assumptions will be made during estimation and you will see bigger variations between the plan, the estimates, and the actual execution. The more granular the requirements, the better the chances of accurate estimates.

The second stage is estimation, where not just the requirements but also the influencing factors are inputs. Influencing factors like team dynamics, skill set availability, and team experience play a major role in shaping the estimates. For example, if the current team doesn't have the technical skills to deliver this kind of project, the team will need time to ramp up on those skills, and that becomes one more factor in your estimate.

Another major input is historical data: if you have delivered similar engagements in the past, refer to the actual execution effort of those projects, and that will help you estimate the new requirements quickly. This is where the science portion of estimation comes into play – you rely on historical data instead of depending completely on your subject matter experts (SMEs).
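A minimal sketch of that "science" step is shown below, where a base estimate derived from historical productivity is adjusted by influencing factors; the productivity value and the factor weights are illustrative assumptions, not standard values.

```python
# Sketch: derive a base estimate from historical productivity, then adjust it
# with influencing factors. The productivity value and factor weights are
# illustrative; replace them with your organization's own data.

HISTORICAL_HOURS_PER_UCP = 20.0      # from past projects of the same category

influencing_factors = {
    "new_technology_for_team": 1.15,  # ramp-up needed
    "distributed_team": 1.05,
    "stable_requirements": 0.95,
}

def estimate_hours(size_ucp: float) -> float:
    effort = size_ucp * HISTORICAL_HOURS_PER_UCP
    for factor in influencing_factors.values():
        effort *= factor
    return effort

print(f"Estimated effort: {estimate_hours(150):,.0f} hours")
```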

The third stage of the estimation process is capturing the actual execution time and effort during the build/execution phase, and maintaining that actual effort as the company's historical data. This is the most important activity, yet most companies don't prioritize it and miss it completely. That directly results in a lack of historical data, which in turn leads to long estimation times during the pre-sales cycle and a team that lacks confidence in its estimates, because it always has to depend entirely on its SMEs.

While we are discussing historical data, let's also look at what goes into it after project execution (a sketch of such a record follows below):

  1. Size of the project – it's up to you how you define the size; it could be the number of requirements categorized by complexity, or a size in Function Points, Use Case Points, etc.
  2. Type of project – this can be defined from a domain perspective or a technical perspective. For example, the project may fall under the insurance or manufacturing domain, or it may be more technical in nature, such as website development or an IoT (Internet of Things) project.
  3. Total effort – effort is a measure of time, i.e. the total time spent by the team in hours, days, or months.
  4. Total cost – the total cost of the project, which includes not only the team's effort but also other expenses such as travel, software licenses, and hardware.
  5. Duration – whichever methodology you use, Waterfall or Agile, always capture the duration of each phase/sub-phase of the execution, rather than just the total project duration.

[Figure: Software Estimation-2]
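As a sketch, assuming a lightweight in-house format rather than any particular estimation tool, a historical record could capture exactly those five items:

```python
# Sketch of a historical project record capturing the five items above.
# Field choices are assumptions; an estimation tool would define its own schema.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ProjectRecord:
    name: str
    size: float                 # e.g. use case points or function points
    project_type: str           # domain or technical category
    total_effort_hours: float
    total_cost: float           # effort plus travel, licenses, hardware, etc.
    phase_duration_weeks: Dict[str, float] = field(default_factory=dict)

record = ProjectRecord(
    name="Policy Portal Revamp",
    size=220,
    project_type="insurance / web",
    total_effort_hours=4_800,
    total_cost=310_000,
    phase_duration_weeks={"design": 4, "build": 14, "test": 5},
)
print(record)
```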

Some Common Pitfalls During Estimation

  1. Targets are treated as estimates – targets should initially be kept separate from estimates; don't confuse the two. Once the base estimates are ready, there should be an iterative process to bring target, estimates, and scope into alignment.
  2. Committing estimates too early in the Cone of Uncertainty – if you don't have clarity on requirements or scope, it's a good time to go back to the stakeholders and get the requirements clarified before you start estimating.

[Figure: Software Estimation-3]

  3. Not using any estimation software – having a software estimation tool is a must for any organization. It is what helps you manage historical data, helps your estimators/SMEs produce standardized estimates, and streamlines your organization's estimation process. Otherwise everyone will create their own estimation templates and it will be complete chaos in the end.
  4. Not including the impact of risks in estimates – identifying risks upfront and calculating the risk exposure adds credibility to your estimates and will also help you during execution (see the sketch after this list).
  5. Creating estimates that assume no one will go to training, attend meetings, take vacation, get sick, etc. – in any business, people need continuous training, people get sick, and people go on vacation. Your project plan should compensate for these factors, otherwise you will face a resource crunch during execution.
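A minimal sketch of the risk exposure calculation mentioned above: exposure is simply the probability of a risk occurring multiplied by its impact if it does occur. The risks, probabilities, and impacts below are invented for illustration.

```python
# Risk exposure = probability of the risk occurring x impact if it occurs.
# Risks, probabilities and impacts below are invented for illustration.
risks = [
    {"risk": "Key SME unavailable during build",  "probability": 0.3, "impact_hours": 200},
    {"risk": "Third-party API not ready on time", "probability": 0.2, "impact_hours": 320},
    {"risk": "Late hardware procurement",         "probability": 0.1, "impact_hours": 80},
]

total_exposure = sum(r["probability"] * r["impact_hours"] for r in risks)
print(f"Total risk exposure to add to the estimate: {total_exposure:.0f} hours")
# -> 60 + 64 + 8 = 132 hours of contingency
```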

Conclusion – So is it Art or Science?

Assumptions are involved in any estimation technique you use. But it's always better to start your estimation with a statistical technique based on your historical data, and then, at the end, bring in the art factor and refine your estimate based on assumptions, risks, and other factors. To summarize, a common rule for top-down and bottom-up estimations:

  • If you are estimating a project for which similar organizational historical data exists – you can rely primarily on science and apply art as a final step.
  • If you are estimating a project that doesn't have much organizational history – start with science, not exceeding your point of knowledge and organizational history, and then apply a lot of art to arrive at an accurate estimate.

So estimation is always a combination of both art and science; the key is how well you combine them. As your company's or team's maturity increases, you will see the estimates shift from the art side towards the science side.

[Figure: Software Estimation-4]

Happy Estimating!!


Announcement – HoloLens Blueprint published

Eight months of great teamwork, burning the midnight oil, and numerous weekends have resulted in the successful release of HoloLens Blueprint. Thanks to the team and to my co-authors on this book, Abhijit Jana and Mallikarjuna Rao, for accomplishing this together.

Through this book, you can start your mixed-reality journey by understanding the different types of digital reality (VR/AR/MR). You will learn to build your first holographic application and understand how holographic applications can be integrated with Line of Business (LOB) applications using Azure. Moving ahead, you will create integrated solutions using IoT (Internet of Things) with HoloLens. Gradually you'll learn how to create and deploy applications on a device. You will learn to publish an application to the store, and if you are an enterprise developer, you will also learn to manage and distribute applications for enterprise-enabled or domain-joined HoloLens devices. Finally, you will develop an end-to-end realistic holographic app, ranging from scenario identification to sketching, development, deployment, and, finally, production.

[Figure: Title page]

HoloLens Blueprint (ISBN-10: 1787281949, ISBN-13: 978-1787281943) is available on Amazon and on the publisher's web site.

What you will learn

  • Interact with holograms using different interaction models
  • Develop your first holographic app
  • Integrate holographic applications with cloud systems like Azure
  • Visualize data feeds coming from the cloud (Azure) through holograms
  • Manage the application distribution of enterprise-enabled HoloLens
  • Integrate HoloLens applications with services deployed on Azure
  • Identify and create 3D Assets and Scenes
  • Use HoloLens to explore the Internet of Things (IoT)
  • Use HoloLens to develop your retail business solution

So let's kick-start the mixed reality journey with HoloLens, and keep sharing your feedback – you can reach out to us at –