Traditional Fixed Cost Computing Model

Traditionally, the computing capacity planning of organizations was based on a fixed-cost model. Organizations either set up their computing infrastructure themselves or outsourced it, but the investment in computing resources was fixed. This approach could give rise to two problematic scenarios: the first due to over-provisioning of resources, and the second due to under-provisioning (Figure 10.3).

In the traditional approach, system architects estimated the average resource requirement over a reasonably long period (generally a few years) and acquired resources accordingly. When application demand is low, the resource requirement also remains low. Under such conditions the application runs smoothly, but the majority of the resources sit idle most of the time, wasting both the resources and the investment in them. This inefficiency is illustrated in Figure 10.3 by the region marked 'A', between the actual demand line and the available capacity line: a large volume of unutilized resource capacity that amounts to considerable wastage.

When application demand is high, the resource requirement increases as well, and it often goes beyond the actual resource capacity. In such a scenario, application performance degrades and the business suffers. Traffic spikes therefore become costly, because turned-away traffic typically means lost revenue opportunities. This scenario is illustrated in Figure 10.3 by the region marked 'B', under the actual demand line, where the available infrastructure capacity falls below actual demand. As a result, the application service remains unavailable.
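
To make the two regions concrete, here is a minimal sketch (in Python, with illustrative numbers that are not from the text) that sums the idle capacity corresponding to region 'A' and the unserved demand corresponding to region 'B', given a hypothetical hourly demand series and a fixed provisioned capacity:

```python
# Hypothetical hourly demand (in server-hours) and a fixed provisioned
# capacity, standing in for the two lines of Figure 10.3.
demand = [40, 55, 70, 120, 150, 90, 60, 45]
capacity = 100

# Region 'A': capacity that was paid for but sat idle in hours
# where demand stayed below the fixed capacity.
idle = sum(capacity - d for d in demand if d < capacity)

# Region 'B': demand that could not be served in hours where
# demand exceeded the fixed capacity.
unmet = sum(d - capacity for d in demand if d > capacity)

print(f"Idle (wasted) capacity, region A: {idle} server-hours")
print(f"Unserved demand, region B: {unmet} server-hours")
```

For this sample series, most hours waste capacity (region A) while the peak hours still turn traffic away (region B), which is exactly the trade-off a fixed capacity cannot escape: raising the capacity enlarges region A, and lowering it enlarges region B.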

The traditional fixed-cost capacity model suffers from the problems of over-provisioning and under-provisioning of resources. Both are costly for any business.
